Opinion
DIBIASO, Acting P. J.—The jury found defendant and appellant Frederick Ray Brown guilty of forcible rape (Pen. Code, § 261, subd. (a)(2); count 1)1 and incest (§ 285; count 2). The trial court found true the special allegations that Brown had suffered two prior serious felony convictions within the meaning of section 667, subdivision (a), and the “Three Strikes” law (§ 1170.12), and had served five prior prison terms (§ 667.5, subd. (b)). Brown was sentenced to 38 years to life in prison.
We affirm the judgment. For no purpose other than to expand the universe of reported decisions that deal with DNA2 evidence, we publish the portion of this opinion which addresses Brown’s contention the trial court erred by admitting DNA evidence because (a) the database used to calculate the probability of his genetic profile was for various reasons inadequate, and (b) the prosecution failed to call one of the DNA analysts to testify as to her testing procedures.
Discussion
I.
Brown contends the trial court committed reversible error by admitting DNA evidence, which established he could not be excluded as a source of the perpetrator’s DNA.3 He raises his contentions under the third prong of People v. Kelly (1976) 17 Cal.3d 24 [130 Cal.Rptr. 144, 549 P.2d 1240], in which the Supreme Court articulated this three-step test for the admission of evidence generated by a new scientific technique: (1) the reliability of the technique must be sufficiently established to have gained general acceptance in the relevant scientific community; (2) the witness providing the evidence must be properly qualified as an expert; and (3) the evidence must establish that, in the particular case, the correct and accepted scientific technique was actually followed. (People v. Kelly, supra, 17 Cal.3d at p. 30; People v. Soto (1999) 21 Cal.4th 512, 518-519 [88 Cal.Rptr.2d 34, 981 P.2d 958]; People v. Venegas (1998) 18 Cal.4th 47, 81 [74 Cal.Rptr.2d 262, 954 P.2d 525].)
Brown stipulated at trial to the scientific acceptance of the PCR (polymerase chain reaction) techniques used in this case, the first Kelly prong,4 and he has not challenged the finding that his DNA profile matched that of the perpetrator. However, he contends use of the Cellmark African-American database and calculations of genotypic frequencies derived from that database did not amount to correct and accepted procedure under Kelly’s third prong. Specifically, he claims the database was inadequate because (1) it was not in Hardy-Weinberg equilibrium; (2) it was not in linkage equilibrium; (3) it was subject to population substructure because the samples were not randomly selected; and (4) it was too small. Brown also contends, again under the third Kelly prong, that the prosecution failed to establish that the DNA analysis itself was conducted in a proper and acceptable manner because only one of the two Cellmark DNA scientists who performed the analysis was called to testify, thereby precluding determination of whether the other analyst followed the proper procedures, and violating Brown’s Sixth Amendment right to confront his accuser.
A. DNA Evidence
1. Genetic Profiling
We begin with some simplified biology. The genetics of a human cell can be compared to a library, the genome, composed of 46 “books,” each a single chromosome. The “text” contained in the books is written in DNA, the chemical language of genetics. The “library” is compiled by the owner’s parents, each of whom contributes 23 books, which are then matched up and arranged together in 23 paired sets inside the sacrosanct edifice of the nucleus. During embryonic development, the original library is copied millions of times so that each cell in the human body contains a copy of the entire library.5
Twenty-two of the twenty-three paired sets of books are entitled “Chromosome 1” through “Chromosome 22”; externally, the two paired books of each set appear to be identical in size and shape. However, the 23d set, which contains information on gender, consists of one book entitled “Chromosome X” (given by the mother) and one book entitled either “Chromosome X” or “Chromosome Y” (given by the father and determining the sex of the library’s owner). The 22 sets comprising “Chromosome 1” through “Chromosome 22” address an enormous variety of topics describing the composition, appearance, and function of the owner’s body. In addition, they include a considerable amount of what appears to be nonsense. The two paired books of each set, one book from each parent, address identical topics, but may contain slightly different information on those topics. Thus, two paired books opened to the same page contain corresponding “paragraphs,” but the text within those corresponding paragraphs may vary between the two books. For example, within the paragraph addressing eye color, one book may describe blue eyes while the other book of the set may describe brown eyes.6
The two corresponding, but potentially variant, paragraphs in the two paired books are called alleles. If, for a particular topic (i.e., at a particular region or locus on the DNA), the allele from the mother is A and the corresponding allele from the father is B, the genotype at that locus is designated AB. The text of two corresponding alleles at any locus may be identical (a homozygous genotype, e.g., AA) or different (a heterozygous genotype, e.g., AB). Regardless, one person’s genetic text is, in general, extremely similar to another person’s; indeed, viewed in its vast entirety, the genetic text of one human library is 99.9 percent identical to all others. As a result, the text of most corresponding paragraphs varies only slightly among members of the population.
Certain alleles, however, have been found to contain highly variable text. For example, alleles are composed of highly variable text when they describe structures requiring enormous variability. Also, some alleles appear to contain gibberish that varies greatly, or repeated strings of text that vary not in text but in repeat number. These variants (polymorphisms) found at certain loci render each person’s library unique7 and provide forensic scientists a method of differentiating between libraries (people) through the use of forensic techniques that rely on the large number of variant alleles possible at each variable locus. For example, the combined libraries of the human population may contain two variant alleles at a particular locus, three at another, nine at another, and so on. Since each person receives two alleles for each locus, the number of possible combinations is further increased.
When a sample of DNA—usually in the form of hair, blood, saliva, or semen—is left at the crime scene by a perpetrator, a forensic genetic analysis is conducted. First, DNA analysts create a genetic “profile” or “type” of the perpetrator’s DNA by determining which variants or alleles exist at several variable loci. Second, the defendant’s DNA is analyzed in exactly the same manner to create a profile for comparison with the perpetrator’s profile. If the defendant’s DNA produces a different profile than the perpetrator’s, even by only one allele, the defendant could not have been the source of the crime scene DNA, and he or she is absolutely exonerated.8 If, on the other hand, the defendant’s DNA produces exactly the same genetic profile, the defendant could have been the source of the perpetrator’s DNA—but so could any other person with the same genetic profile. Third, when the perpetrator’s and the defendant’s profiles are found to match, the statistical significance of the match must be explained in terms of the rarity or commonness of that profile within a particular population—that is, the number of people within a population expected to possess that particular genetic profile, or, put another way, the probability that a randomly chosen person in that population possesses that particular genetic profile.9 Only then can the jury weigh the value of the profile match. (People v. Venegas, supra, 18 Cal.4th at p. 82.)10
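The exclusion and inclusion logic just described can be sketched briefly; the sketch below is illustrative only, and the locus names, alleles, and data structures are hypothetical rather than taken from the record.

    # Hypothetical sketch of the profile-comparison step described above.
    # A profile maps each tested locus to the pair of alleles observed there.
    def same_genotype(pair_a, pair_b):
        # The order of the two alleles within a genotype does not matter.
        return sorted(pair_a) == sorted(pair_b)

    def compare_profiles(perpetrator, defendant):
        # A difference at even one locus excludes the defendant as the source.
        for locus, genotype in perpetrator.items():
            if not same_genotype(genotype, defendant[locus]):
                return "excluded"
        # Identical profiles mean the defendant could be the source; the weight
        # of that match depends on how rare the profile is (the third step).
        return "cannot be excluded"

    perpetrator_profile = {"locus 1": ("A", "B"), "locus 2": ("C", "C")}
    defendant_profile = {"locus 1": ("A", "B"), "locus 2": ("C", "C")}
    print(compare_profiles(perpetrator_profile, defendant_profile))  # cannot be excluded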
2. Statistical Interpretation
Performing this last step—the determination of the profile’s rarity— requires information about the relevant population. For example, if the victim reports that the perpetrator had blue eyes and abnormally short fingers (brachydactyly), forensic scientists will need to know how rare the combination of blue eyes and brachydactyly is in the population. That determination requires knowledge of the separate frequencies of these two traits in the population—how many people have blue eyes and how many people have brachydactyly. But it is impractical to actually examine the entire population to count every person with blue eyes and every person with brachydactyly; instead, scientists create a database of randomly selected people, and use the frequencies of the traits of that group of people to represent the entire population. If among the people used to compile the database the occurrence of blue eyes is fairly common and the occurrence of brachydactyly is very uncommon, then the probability of the two traits occurring together will be extremely rare. That determination, derived from the database, is presumed to apply to the entire population the database was created to represent. Therefore, the reasoning goes, if very few people are expected to have both traits—that is, if the profile is rare—the probability is greater that a defendant who possesses both traits is in fact the perpetrator.
In reality, forensically important alleles do not manifest themselves in obvious physical traits, but the idea is the same. Because allele frequencies cannot be determined from external appearances, preparation of a database requires collection of DNA samples (usually blood) from unrelated individuals in the relevant population, genetic analysis of each DNA sample to determine the alleles present at each locus tested, tally of the various alleles at each locus, and statistical analysis of the tallied results to determine the frequency of each allele (the allele frequency) and then the frequency of every possible corresponding set of two alleles (the genotype frequency) at each locus.11 These database frequencies become standard values from which a perpetrator’s profile can be given a numerical probability of existing in a population.12
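To illustrate how such a tally works, the short sketch below computes allele and genotype frequencies at a single locus from a handful of invented genotypes; the locus and values are hypothetical and far smaller than any real database.

    # Hypothetical tally of allele and genotype frequencies at one locus.
    from collections import Counter

    # Each tuple is the genotype of one sampled individual at the locus.
    samples = [("A", "B"), ("A", "A"), ("B", "B"), ("A", "B"), ("A", "A")]

    allele_counts = Counter(allele for genotype in samples for allele in genotype)
    allele_freqs = {a: n / (2 * len(samples)) for a, n in allele_counts.items()}

    genotype_counts = Counter(tuple(sorted(genotype)) for genotype in samples)
    genotype_freqs = {g: n / len(samples) for g, n in genotype_counts.items()}

    print(allele_freqs)    # {'A': 0.6, 'B': 0.4}
    print(genotype_freqs)  # {('A', 'B'): 0.4, ('A', 'A'): 0.4, ('B', 'B'): 0.2}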
That numerical probability is generally calculated using the “product rule,” which posits that the probability of several things occurring together is the product of their separate probabilities. (See Kaye, DNA Evidence: Probability, Population Genetics, and the Courts (1993) 7 Harv. J.L. & Tech. 101, 127-128.) For example, the probability of “heads” coming up on three successive coin tosses is the probability of heads on the first toss (1 in 2), multiplied by the probability of heads on the second toss (1 in 2), multiplied by the probability of heads on the third toss (1 in 2), resulting in an overall probability of 1 in 8.13 Similarly, if a set of paired alleles (a genotype) is known to occur in 1 in 3.47 people and another set of paired alleles is known to occur in 1 in 18.52 people, then the probability of both sets occurring in the same person is 1/3.47 multiplied by 1/18.52, or 1 in 64.26 people. When more alleles are examined, the probability of a multilocus profile can be exceedingly rare, even one in hundreds of billions, and therefore the profile is highly distinctive.14
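The same multiplication can be written out using the two genotype frequencies given in the example above (1 in 3.47 and 1 in 18.52); the sketch simply restates the arithmetic stated in the text.

    # Product-rule arithmetic for the two-locus example in the text.
    frequencies = [1 / 3.47, 1 / 18.52]
    profile_frequency = 1.0
    for f in frequencies:
        profile_frequency *= f
    print(round(1 / profile_frequency, 2))  # 64.26, i.e., about 1 in 64.26 people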
Statistical evaluation raises two major issues with regard to uncertainty. One relates to substructure in the population. The other relates to the characteristics of the database, such as its size and whether it is representative of the relevant population. (NRCII, supra, at p. 125.)
a. Hardy-Weinberg Equilibrium
The use of the product rule to calculate the overall probability of a genetic profile requires two conditions within the database population. The first condition is Hardy-Weinberg (HW) equilibrium, the state in which the alleles at a single locus are independent of each other. Thus, a population is in HW equilibrium if inheritance of one allele at a locus is unaffected by inheritance of another allele at the same locus.15 Expected HW proportions result when the inheritance of alleles in a population is random rather than affected by pressures such as inbreeding, common ancestry, and small or isolated subpopulations.16 (NRCII, supra, at p. 98; see also People v. Soto, supra, 21 Cal.4th at pp. 525-526.)
The HW expectations, however, are infrequently realized. Instead, populations are usually composed of subpopulations having allele probabilities that depart from HW equilibrium, a phenomenon known as population substructure. (Devlin & Roeder, DNA Profiling: Statistics and Population Genetics in Modern Scientific Evidence: The Law and Science of Expert Testimony (1997) § 18-3.2.1 (hereafter Modern Scientific Evidence).) Several years ago, the problem of population substructure raised a controversy over whether the product rule, in its unmodified form, could be used to accurately calculate profile frequencies. In response, the National Research Council (NRC)17 published reports for guidance in the field of forensic DNA analysis. The first report (NRCI)18 was issued in 1992, followed by a second report (NRCII)19 in 1996, which reevaluated NRCI and concluded the dispute over the product rule had been resolved. NRCII acknowledged that when population substructure exists HW equilibrium is often disrupted and thus the application of standard HW presumptions to calculate genotype frequencies is inappropriate; indeed, it “will always lead to an underestimate of homozygous genotype frequencies and usually to an overestimate of heterozygote frequencies.” (NRCII, supra, at p. 99.) NRCII therefore advised that calculations of genotype frequencies be adjusted or corrected when a database is affected by population substructure. NRCII’s “approach is not to assume HW proportions, but to use procedures that take deviations from HW into account.” (NRCII, supra, at p. 104.)20
Because heterozygote frequencies are overestimated by the HW equation, the defendant benefits from its use to calculate heterozygote frequencies. A correction is required, however, for calculation of homozygote frequencies, which are underestimated by the HW equation as being rarer than they actually are. Accordingly, for PCR-based DNA profiling, NRCII’s Recommendation 4.1[21] advises the following procedure to compensate for deviations caused by population substructure:
For heterozygous genotypes:
Use the standard HW proportion of 2pq to calculate a heterozygous genotype frequency because the result is a conservative overestimate of the probability of the genotype, which functions in the defendant’s favor.
For homozygous genotypes:
Instead of using the standard HW proportion of p² to calculate a homozygous genotype frequency, use p² + p(1 - p)θ, where θ (theta) is between 0.01 (a conservative value appropriate for most United States subpopulations) and 0.03 (a more conservative value appropriate for small, isolated subpopulations), to correct for the underestimate caused by substructure. The θ value of 0.03 may generally be chosen to ensure a conservative calculation biased toward the defendant. (NRCII, supra, at p. 122.)
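Stated schematically, the two branches of Recommendation 4.1 described above can be written as follows; this is a sketch of the formulas only, and the allele frequencies in the example calls are invented.

    # Schematic rendering of NRCII Recommendation 4.1 for PCR-based profiles.
    def heterozygote_frequency(p, q):
        # Standard HW proportion for a heterozygous genotype with allele frequencies p and q.
        return 2 * p * q

    def homozygote_frequency(p, theta=0.03):
        # HW proportion p**2 adjusted upward for substructure; theta = 0.03 is
        # the more conservative value discussed in the text.
        return p ** 2 + p * (1 - p) * theta

    print(round(heterozygote_frequency(0.20, 0.35), 4))  # 0.14 (invented allele frequencies)
    print(round(homozygote_frequency(0.20), 4))          # 0.0448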
b. Linkage Equilibrium
The second condition required for use of the product rule is linkage equilibrium—independence among many loci or, more precisely, between many sets of two loci. Linkage equilibrium exists when inheritance of alleles at one locus is not affected by inheritance of alleles at another locus, such that the various single-locus genotypes at each locus are statistically independent. A lack of linkage equilibrium usually arises due to factors such as inbreeding and physical proximity of loci on the DNA (increasing the chance they could be inherited together), although disequilibrium does not require that the loci be on the same chromosome. (See Modern Scientific Evidence, supra, at p. 701; NRCII, supra, at p. 106.) An example of linkage disequilibrium is found in Nordic populations where blond hair, blue eyes, and fair skin are not inherited independently of each other, but are found together in a disproportionate number of people. Theoretically, only when the population is in linkage equilibrium (in regard to the relevant loci) may the single-locus genotype frequencies of several loci properly be multiplied together by the product rule to give a legitimate and reliable overall profile frequency. (Modern Scientific Evidence, supra, at p. 722; see also People v. Soto, supra, 21 Cal.4th at pp. 525-526.)
NRCII, however, explains that the effects of small departures from linkage equilibrium are usually inconsequential, especially when the genotype at each locus is assigned a conservative value. “[W]hereas a population broken into subgroups [causing a departure from HW equilibrium] has a systematic bias in favor of homozygosity, departures from [linkage equilibrium] increase some associations and decrease others in about equal degrees. Although there might be linkage disequilibrium, we would expect some canceling of opposite effects. The important point, however, is not the canceling but the small amount of linkage disequilibrium .... In this case, multiplying together the frequencies at the several loci will yield roughly the correct answer. An estimated frequency of a composite genotype based on the product of conservative estimates at the several loci is expected to be conservative for the multilocus genotypes.” (NRCII, supra, at p. 107, italics added, fn. omitted.)
Thus, although substantial linkage disequilibrium presents a significant problem, typically only small departures actually occur.
c. Random Sampling
Ideally, a database would be composed of samples chosen entirely at random so the relevant population would be properly represented. Yet, it is “difficult, expensive, and impractical to arrange a statistically valid random-sampling scheme.” (NRCII, supra, at p. 126.) But “[t]he saving point is that the [alleles] in which we are interested are believed theoretically and found empirically to be essentially uncorrelated with the means by which samples are chosen. Comparison of estimated profile frequencies from different data sets shows relative insensitivity to the source of the data . . . .” (Ibid., italics added.) “Convenience samples,” taken from people who contribute to blood banks or who have been involved in paternity suits, are appropriate for forensic use for two reasons. “First, the loci generally used for identification are usually not parts of functional genes and therefore are unlikely to be correlated with any behavioral or physical traits that might be associated with different subsets of the population. Second, empirical tests have shown only very minor differences among the frequencies of DNA markers from different subpopulations or geographical areas.” (Id. at pp. 30, 149-156.) “Within a racial group, geographic origin and ethnic composition have very little effect on the frequencies of forensic DNA profiles, although there are larger differences between major groups (races).” (Id. at p. 156.)22 If the sampling method nevertheless produces substructuring, those effects can be corrected by use of more conservative frequencies calculated using Recommendation 4.1. (Id. at p. 29.)
d. Database Size
A database is composed of a relatively small number of samples and is expected to substitute for a population, and thus there is a nagging question of the minimum database size required to adequately represent that population. Obviously, the larger the database, the more definitive the population data,23 but time and expense limit what is practical. Unfortunately, there is no simple answer to the question of adequate database size, and experts promote widely differing standards. Experts agree “the question of database size should be considered. Larger samples give more precise estimates of allele frequencies than smaller ones, but there is no sharp line for determining when a database is too small.” (Kaye & Sensabaugh (2000) Reference Guide on DNA Evidence in Reference Manual on Scientific Evidence (2d ed.) p. 557, fn. omitted, italics added (Reference Guide on DNA Evidence); see also Modern Scientific Evidence (2000 supp.) pp. 260-261.) Furthermore, many of the earlier suggestions on database size were based on earlier methods and thus may or may not be as applicable to PCR-based testing.
One commentator summed up the situation, as of 1998, in this way: “Published populations for STR [short tandem repeat] loci [for use in a PCR-based test] are generally of the order of either 100 or 200, but sometimes smaller numbers are reported for sub-populations. The International Society of Forensic Haemogenetics has recommended that 100 persons are sufficient[24], but no basis for this size is given and the number may well be a carry-over from what was considered adequate[25] for polymorphic protein systems. The Committee on DNA Technology in Forensic Science, when originally reporting the ceiling principle [i.e., NRCI], also suggested databases of 100.[26] More recently, though, in describing databases generally the Committee suggests at least a few (or several) hundred persons.[27] Several authors (Lander[28] and Devlin et al.,[29, 30] for example) have, however, argued that a sample of 100 individuals is too small. Lander, moreover, has suggested that even a database of 500 is too small although he has now given qualified support[31] for a database of 100 individuals. Weir [32] also has implied that samples of 100 individuals are too small because tests for independence of allele frequencies will have low power. Nevertheless, some practical support for a sample size of 100 was shown by Pacek et al.[33] in a study which compared allele frequencies obtained from individuals to allele frequencies of pooled blood samples.” (Harding & Swanson, DNA Database Size (1998) 43 J. Forensic Sci. 248-249.)34
As Harding and Swanson’s comment recognizes, the question of minimum database size has been unsettled among scientists and statisticians for many years and shows no sign of imminent or easy resolution. Even NRC has been unable to decide on an advisable minimum database size—NRCI advised a minimum of 100 persons (NRCI, supra, at pp. 74-96), whereas NRCII advised a minimum of at least a few or several hundred persons (NRCII, supra, at pp. 34, 112, 114, 156).35 But NRCII also acknowledged that fewer data were available (in 1996) for PCR-based testing and therefore those databases might be smaller than those typically used for RFLP-based testing. (NRCII, supra, at p. 117 [“The databases are smaller, but the studies that have been done show the same agreement with HW and LE that [RFLP-based] VNTRs do [citations].”].) To account for the less extensive and less varied data, NRCII advised use of a correction factor more favorable to the defendant (θ = 0.03) in the application of Recommendation 4.1. (NRCII, supra, at pp. 119, 122.)
Without definitive guidelines from the scientific community, the courts have approved the use of databases of various sizes. For example, the court in Commonwealth v. Rosier (1997) 425 Mass. 807 [685 N.E.2d 739], confronted with contentions similar to those raised in the present case, found Cellmark’s 100-person database adequate for PCR-based testing. The court stated: “The defendant argues that Cellmark’s database is too small and, for various other reasons, unreliable. [¶] The judge acted properly in rejecting the defendant’s arguments. He accepted the expert testimony that the Cellmark database was adequate and common within the field and that a database larger than Cellmark’s would produce ‘no significant difference in the result.’ There was expert and scientific evidence that the Cellmark database met two factors critical to the reliability of a database. The first factor, ‘linkage equilibrium’ (LE), establishes that the various chromosomal loci identified in a database occur randomly in proportion to one another, thus assuring that results related to one locus are not affected by, nor predictive of, the results related to another. The second factor is ‘Hardy-Weinberg equilibrium’ (HW). A database is considered to be ‘in HW’ when the predicted values for the various loci within the database actually correspond to those found in the population, assuming mates are randomly chosen. Thus, it was properly found that the Cellmark database was both in LE and in HW. The database and statistical results reached by Cellmark were also independently verified by Dr. Basten through calculations of ‘confidence intervals’ and a comparison of Cellmark’s results with other databases that achieved statistically comparable results. Dr. Basten also concluded that the methods used by Cellmark to generate statistical results are ‘generally accepted’ within the field of population genetics, and that the statistical results they produce are reliable and accurate.” (Commonwealth v. Rosier, supra, 685 N.E.2d at pp. 743-744 [DQA1, polymarker, and STR loci tested], fns. omitted.)
B. Expert Testimony and Evidence
In this case, nine variable loci were tested to create the genetic profiles of the perpetrator, Brown, and the victim. For each profile, six DNA regions or loci were tested for the “polymarker” and “DQA1”36 alleles and three loci were tested for “short tandem repeat” (STR) alleles. All nine of the tests utilized PCR methodology, which the defense stipulated was accepted by the scientific community. Expert testimony established that Brown’s profile matched the perpetrator’s.
The match was interpreted by using Cellmark’s databases to assign a frequency to each of the nine genotypes in the profile, then multiplying those frequencies together to yield the following probabilities of finding the profile in various populations: 1 in 580 billion Caucasian individuals, 1 in 180 million African-American individuals, and 1 in 140 billion Hispanic individuals.
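For illustration only, the sketch below multiplies nine invented per-locus frequencies to show how modest individual numbers compound into figures of the general magnitude quoted above; the values are not those in Cellmark’s report.

    # Nine invented per-locus genotype frequencies (illustration only).
    import math

    per_locus_frequencies = [0.079, 0.12, 0.05, 0.21, 0.033, 0.09, 0.15, 0.06, 0.11]
    profile_frequency = math.prod(per_locus_frequencies)
    print(f"about 1 in {1 / profile_frequency:,.0f} people")  # on the order of one in a few billion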
1. Bruce Weir
Dr. Bruce Weir, a population geneticist, evaluated the data generated by Cellmark’s databases. He calculated the frequencies for each of the alleles and genotypes for the three databases, and his results were published in a report. Weir’s report was admitted into evidence, but Weir did not testify. In his report evaluating Cellmark’s databases, Weir cited NRCII, noting its recommendation that “attention on formal testing for independence of alleles within and between loci no longer be emphasized. Instead acknowledgement is to be given to the possibility of departures from Hardy-Weinberg equilibrium at single loci, and recognition is to be given to the fact that any dependencies among loci would be small.” Weir then stated: “Recommendation 4.1 is that heterozygote frequencies should be calculated as the product of allele frequencies but that homozygote frequencies should be modified to allow for inbreeding of an extent θ = 0.03 (Equation 4.4a).”37 Weir acknowledged the databases contained “occasional . . . departures from independence.” He explained further, in the portion of the report entitled “One-locus Testing,” that “[f]or all loci, except GC in the Caucasian database and LDLR in the African American database, there was no evidence for a departure from Hardy-Weinberg equilibrium . . . .”38
2. Kathryn Colombo
At the Kelly hearing, Kathryn Colombo, a staff DNA analyst at Cellmark, testified that she and Lisa Grossweiller, another Cellmark DNA analyst, performed the PCR testing of Brown’s DNA to determine Brown’s genotypes at nine loci. Grossweiller first analyzed the DQ alpha locus and five other loci known collectively as the polymarker loci. Colombo then tested STR at three more loci.
Colombo explained her own testing, as well as Grossweiller’s. Colombo reviewed the case file, the protocols used by Grossweiller, and Grossweiller’s test results. The laboratory report authored by Grossweiller containing the results of her testing was a record kept by Cellmark in the regular course of business. Colombo reviewed Grossweiller’s report and results, and determined that Grossweiller had followed proper scientific procedures that were accepted by the scientific community for the performance of the PCR-based DQ alpha and polymarker tests.
Colombo explained the three Cellmark databases contained samples from 100 Caucasians, 100 African-Americans, and 200 Hispanics.39 Cellmark then analyzed the DNA from each individual database and sent the raw data to Weir, a population geneticist. Weir evaluated the appropriateness of the databases and calculated the frequency for each genotype; those data were then tabulated for use by Cellmark analysts. Colombo stated that when she performed the DNA analysis, she transcribed the frequencies calculated by Weir for each locus she tested, then multiplied the frequencies together (using the product rule) to reach an overall probability. She stated use of the product rule with respect to PCR profiling is accepted throughout the scientific community. Colombo described the report (in evidence) in which she listed Brown’s genotype frequencies of the nine tested loci, assigned each the value calculated by Weir from the database, then multiplied the frequencies together to obtain the overall profile frequency. Her work on that report had been reviewed and initialed by a staff Ph.D. Also, Colombo stated Cellmark adopted the use of NRCII’s Recommendation 4.1 to “ensure[] that [the] database does not artificially overestimate the rareness of a particular type.”
Colombo testified Brown’s genetic profile was one which “you would expect to see ... in approximately 1 in 580 billion” Caucasian individuals, “for African-Americans, the frequency is 1 in 180 million; and for the Hispanic population, 1 in 140 billion.”
On cross-examination, Colombo explained the 100 African-American samples in the database were collected from Cellmark’s paternity casework. Blood was drawn from mothers and alleged fathers, but not children. Colombo stated some substructuring occurs within ethnic groups and therefore the NRCII Recommendation 4.1 correction factor is used. She explained that a database of 100 individuals can be used to generalize an entire population of millions because “with these PCR tests we are dealing with a limited number of possible [geno]types. At five of the six regions that were done in the first part of the testing, you can only be either two or three types. The most types possible are nine in one of the STR regions.
“So when you look at 100 individuals, you have to remember that since an individual inherits half their DNA from their mother and half from the father, that we actually have 200 pieces of information. And 200 pieces of information, seeing how those particular types fall out, 50 individuals in a database is considered a minimum but adequate to do this, because you are dealing with a limited number of individuals. So once you have determined the proportion of a particular type, it would be approximately the same if you looked at 100 individuals, a thousand, 10,000.”
Colombo agreed that 9 of the 100 samples in the database were taken from people in Baton Rouge and that other cities also seemed to be represented in large proportion, but she testified there was no danger that certain subgroups would have a disproportionate effect on the database because the “characteristics are considered to be inherited randomly. And that means that someone in Baton Rouge that is a Type A at one of the [loci] doesn’t know that he’s a Type A and he wouldn’t say, gee, I need to marry someone who is Type A so our children can all be Type A so therefore you have some substructuring in that population. They are generally to be considered randomly inherited although there is some substructuring, and that’s why the correction factor has been used.” Defense counsel asked why, if the types are inherited randomly, would the value be 1 in 180 million for African-Americans, but 1 in 580 billion for Caucasians. Colombo answered that the NRCII report “does recognize that there are departures from randomness. And one of the models that Dr. Weir applied to our data was the Hardy-Weinberg model, and he states that in his report. And it’s really—it’s the entire reason for using the correction factor. And we actually use the more conservative value. [The NRCII report] suggest[s] two, and we use the most conservative.” Again later, defense counsel asked if there are certain genetic profiles that are more common to people of different races. Colombo explained: “Right. It is recognized that there is substructuring within ethnic groups, and that’s why an empirically derived correction factor is used. [¶] . . . [¶] That is a value that was derived empirically from actual population studies where the departure from randomness is determined, and it’s a value that’s used to ensure that a particular type is not made too rare in your database. So to make the value more conservative.” She continued: “So it has been theorized that substructuring would result in an overabundance of homozygous types. And as a result of population studies, they have found this to actually be true. [¶] So it’s my understanding that the correction factor somehow mediates so that you are not artificially overestimating the rareness of homozygous types in your database. And the correction factor actually is only used when calculating genotype frequencies of homozygotes. [¶] . . . [¶] It corrects for substructuring in your database.”
On redirect, Colombo explained the meaning of the probability of 1 in 180 million, as follows: “That figure gives us an idea of how rare or how common a particular profile is. So it’s not really a chance thing. Either it is his DNA so it’s 100 percent or it’s not, which is zero percent. So the number helps us to infer the rareness or the commonness of a particular type. And, of course, as a profile becomes increasingly rare, the inference is stronger that that individual is actually the source of the DNA.”
Colombo opined that Brown could not be excluded as the source of the perpetrator’s DNA and that the frequency with which all nine loci would match this profile was “one person in 180 million out of an African-American population.”
3. Thomas Fedor
Thomas Fedor, a forensic serologist, worked for Serological Research Institute, which was entirely unrelated to Cellmark. Fedor reviewed the Cellmark laboratory report of Brown’s DNA analysis and various other documents, including Weir’s report. He also had the opportunity to discuss with Colombo the testing she had conducted in this case. Fedor verified that the product rule was properly applied in Cellmark’s report of Brown’s DNA analysis.
Fedor explained that the product rule calculates “the overall prevalence of these combination markers in the population at large.” With respect to Cellmark’s analysis of Brown’s DNA, Fedor stated: “The final Product Rule calculation for all the markers tells us that these markers are found together or expected to be found together in approximately 1 in 580 billion Caucasians, 1 in 180 million African Americans, and 1 in 140 billion Hispanics. [¶] Obviously, there aren’t that many Caucasians and there aren’t that many Hispanics. So another way that one might look at this is what is the chance that—if we select a Caucasian individual at random ... or off the street, what is the chance he would have genetic markers the same as all nine of these? And the chance is approximately 1 in 580 billion, and we can think of that as extremely remote.” Fedor said use of the product rule is “perfectly safe” in DNA profiling, but the “key has been really judging whether those markers are independent.” The following colloquy occurred: “[Fedor:] The assumption of independence on which the [Product] Rule calculation is based is a concept that’s taken from theoretical genetics. . . . [W]e have family groupings in real world populations, and the effect of family grouping is . . . substructure in the population. That has the effect that there are small deviations from independence in these markers in real world populations.
“After a good deal of study, it has been found possible to take the substructure into account by means of a statistical treatment. That particular statistical treatment involves a calculation including a coancestry coefficient.[40] And, indeed, that coancestry coefficient is what has been applied by the Cellmark Lab to their population data so that these markers can be treated as though they are effectively independent once the coancestry coefficient is included in the calculations.
“[Prosecutor:] So this additional figuring in is what makes the Product Rule’s application in this particular case or any case using these markers is what makes it reliable?
“[Fedor:] Yes, ma’am, that’s exactly correct.
“[Prosecutor:] Where does that corrective measure or statistical additional input come from?
“[Fedor:] Well, it comes from genetic—statistical genetic theory, which is a complex body ... of which I have only a passing familiarity, but I believe the Court is aware of the document that was prepared for Cellmark Diagnostics by Professor Bruce Weir, [¶] . . . [¶] . . . In this document, Professor Weir, who is perhaps one of the world’s leading authorities in the field of statistical genetics, has examined the population survey data that Cellmark Lab has collected and has actually applied the statistical correction involving the coancestry coefficient to that data to provide figures for what we call in shorthand NRC 4.1 data.
“Now, NRC 4.1 is a particular recommendation by the National Research Council in its evaluation of the forensic uses of DNA testing which provides that these coancestry coefficients be taken into account when calculating data for PCR testing, [¶] . . . [¶]
“[Prosecutor:] And did you have an opportunity to review [the documents] to, in fact, determine if Cellmark Diagnostics in their analysis in this case in arriving at the frequency number . . . whether they, in fact, used that recommendation, that corrected factor?
“[Fedor:] Yes. They have adopted Professor Weir’s treatment of their data wholeheartedly.
“[Prosecutor:] And, in fact, that corrective measure actually favors a suspect?
“[Fedor:] What it does is it corrects a tendency that small databases have ... to underestimate the frequency of something that is rare. [¶] Let me say it a different way. If something is in the population to the extent of, say, two in a thousand, it’s fairly rare. If you sample only a hundred members of that group, you may not find any instances of this thing that occurs only two in a thousand. And your sample of a hundred may say, well, this doesn’t occur at all. We couldn’t find it in a sample of a hundred. Well, that’s an underestimate of its true frequency of two in a thousand, and it arrives because the sample is small.
“If you were to sample 10,000 of these things, you would have found about 20 of them. So what happens when your databases are small is that you tend to underestimate rare events, and the calculation that NRC recommends be done corrects for . . . the potentially small size of a sample as well as the family grouping or the coancestry that may exist in the population.
“[Prosecutor:] And that was done in this particular case?
“[Fedor:] And that was done in this case, yes. [¶] . . . [¶]
“[Prosecutor:] And in your review in this case, did, in fact, Cellmark apply the corrective measure that is recommended by the National Research Council in arriving at the figures that you’ve just recited to us?
“[Fedor:] Yes, indeed.” (Italics added.)
On cross-examination, Fedor stated he used in his work a database consisting of about a thousand people in each ethnic group. He stated Cellmark’s database was small, but some authorities believe databases of Cellmark’s size are adequate. Furthermore, use of the correction factor compensated for any problems arising from inadequate size. Fedor provided the following testimony:
“[Defense Counsel:] You mentioned small databases. Do you consider a database of a hundred individuals to be small?
“[Fedor:] It is small. I have read authorities who publish in the scientific literature who say that even such a small database can be adequate.
“[Defense Counsel:] Is there some controversy on that point?
“[Fedor:] There may be different schools of thought about that point, but our concerns are alleviated by a particular statistical treatment. In fact, that was used by Professor Weir. And that treatment is this. As I said, our concern about a small database is that it may tend to underestimate rare events. The correction that’s commonly used [for] small databases is simply to add the observation in this case that you may never have seen before, add it to the database, that is, increase the database by one and add a new event to it. And what that does is it corrects for the opportunity that was missed in the original database to find this rare event. We have now found it by increasing the database by one.” (Italics added.)
On the topic of population substructure, the following was elicited:
“[Defense Counsel:] And you get radically different results just in multiplying out the Product Rule, say, as you did in this case between the subgroup of African-Americans and the subgroup of Caucasians?
“[Fedor:] Yes. In this case, there are pronounced differences from several hundred billion in the case of Caucasians to 180 million in the case of African-Americans. That’s quite a pronounced difference. There is a particular marker in this case that contributes a great deal to that pronounced difference. [¶] . . . [¶]
“[Defense Counsel:] And would it be fair to say that one of the dangers of a small database is not only that it might understate differences, but if a substructure or a subgroup is overrepresented in that particular small database, it will skew the results?
“[Fedor:] That can happen, yes. If the small database has an inordinate number of members from a particular subpopulation, in theory you could overestimate the frequency of a marker in the population at large. That certainly can happen, yes. [¶] . . . [¶]
“[Defense Counsel:] . . . [S]ay, in that 100 African-Americans you find out that nine, ten, or eleven of them came from, say, Louisiana. Just from what we know generally about the country, that would seem that that particular area is overrepresented in their population study.
“[Fedor:] If Louisiana represents ten percent of the sample, then perhaps other states have been slighted, sir, yes.
“[Defense Counsel:] Let us assume just hypothetically that the city of Baton Rouge, Louisiana, represents nine percent of the sample. Would this tend to make you believe that other areas have been slighted? [¶] . . . [¶]
“The Court: Well, slighted in the sense that it would skew the reliability of the database; is that your question?
“[Defense Counsel]: Yes, your Honor.
“The Court: Did you understand the question?
“[Fedor:] I think I did, your Honor. What I would suggest is that initially we might have some concern as to whether African-Americans in Baton Rouge happen to constitute a close family grouping. Perhaps because for generations African-Americans have not left Baton Rouge or other African-Americans have not come in to Baton Rouge from elsewhere in the country. That would suggest a common ancestral heritage for African-Americans in Baton Rouge.
“[Defense Counsel:] Now you mentioned small deviations in the statistical frequencies due to—can be due to substructures of a particular population; is that right?
“[Fedor:] Yes, sir.”
After hearing argument, the court ruled on the admissibility of the DNA evidence. The court stated:
“The Court has considered all of the evidence, both oral and documentary, and recognizing that we do have some stipulations that eliminate some potential issues, specifically the stipulation that PCR is generally accepted as a reliable form of testing in the scientific community . . . , I have considered the three prongs of the Kelly case, and I am satisfied even assuming that the reliability of the scientific technique having gained general acceptance in the particular field to which it belongs, assuming that that must be established by the independent expert in this case, who would be Mr. Fedor, I do find that the first prong has been satisfied with the evidence presented. Certainly it’s up to the jury to decide what weight to give to that. Actually, I guess it’s a close decision. Right now it’s the Court’s decision to find that the first prong has been met.
“The second prong, as far as the witnesses being properly qualified as experts, I find it has been met. I additionally find the third prong has been satisfied for purposes of this 402 hearing, that the correct scientific procedures were used in the particular case, and that’s the prong that the Court expects will be perhaps the main issue to the jury as to whether the jury decides that the scientific procedures being used were such that the jury should give this evidence more weight or less weight. At any rate, the motion is granted to admit the DNA evidence.” (Italics added.)
C. Kelly’s Third Prong
In People v. Venegas, supra, 18 Cal.4th 47, the Supreme Court comprehensively explained the purpose of Kelly’s third prong:
“The third prong of the test was separately set forth in Kelly as follows: ‘Additionally, the proponent of the evidence must demonstrate that correct scientific procedures were used in the particular case. . . .’ [Citation.] [¶] The Kelly test’s third prong does not apply the Frye[41] requirement of general scientific acceptance—it assumes the methodology and technique in question has already met that requirement. Instead, it inquires into the matter of whether the procedures actually utilized in the case were in compliance with that methodology and technique, as generally accepted by the scientific community. [Citation.] The third-prong inquiry is thus case specific; ‘it cannot be satisfied by relying on a published appellate decision.’ [Citation.] [¶] . . . ‘Due to the complexity of the DNA multisystem identification tests and the powerful impact that this evidence may have on a jury, satisfying Frye [i.e., satisfying Kelly’s first prong] alone is insufficient to place this type of evidence before a jury without a preliminary critical examination of the actual testing procedures performed. . . .’ [Citation.] [¶] . . . [¶]
“. . . The Kelly test is intended to forestall the jury’s uncritical acceptance of scientific evidence or technology that is so foreign to everyday experience as to be unusually difficult for laypersons to evaluate. [Citation.] In most other instances, the jurors are permitted to rely on their own common sense and good judgment in evaluating the weight of the evidence presented to them. [Citations.] [¶] DNA evidence is different. Unlike fingerprint, shoe track, bite mark, or ballistic comparisons, which jurors essentially can see for themselves, questions concerning whether a laboratory has adopted correct, scientifically accepted procedures for [DNA testing] or determining a [profile] match depend almost entirely on the technical interpretations of experts. [Citation.] Consideration and affirmative resolution of those questions constitutes a prerequisite to admissibility under the third prong of Kelly.
“The Kelly test’s third prong does not, of course, cover all derelictions in following the prescribed scientific procedures. Shortcomings such as mislabeling, mixing the wrong ingredients, or failing to follow routine precautions against contamination may well be amenable to evaluation by jurors without the assistance of expert testimony. Such readily apparent missteps involve ‘the degree of professionalism’ with which otherwise scientifically accepted methodologies are applied in a given case, and so amount only to ‘[c]areless testing affect[ing] the weight of the evidence and not its admissibility’ [citations].
“The Kelly third-prong inquiry involves further scrutiny of a methodology or technique that has already passed muster under the central first prong of the Kelly test, in that general acceptance of its validity by the relevant scientific community has been established. The issue of the inquiry is whether the procedures utilized in the case at hand complied with that technique. Proof of that compliance does not necessitate expert testimony anew from a member of the relevant scientific community directed at evaluating the technique’s validity or acceptance in that community. It does, however, require that the testifying expert understand the technique and its underlying theory, and be thoroughly familiar with the procedures that were in fact used in the case at bar to implement the technique. [Citations.]” (People v. Venegas, supra, 18 Cal.4th at pp. 78-81, italics in 2d quoted par. added.)
“Unlike the independent appellate review of a determination of general scientific acceptance under Kelly’s first prong, review of a third-prong determination on the use of correct scientific procedures in the particular case requires deference to the determinations of the trial court. [Citation.]” (People v. Venegas, supra, 18 Cal.4th at p. 91.)
The third-prong hearing “will not approach the ‘complexity of a full-blown’ Kelly hearing. [Citation.] ‘All that is necessary in the limited third-prong hearing is a foundational showing that correct scientific procedures were used.’ [Citation.]” (People v. Morganti, supra, 43 Cal.App.4th at pp. 661-662.) Where the prosecution shows that the correct procedures were followed, criticisms of the techniques go to the weight of the evidence, not its admissibility. (People v. Wright (1998) 62 Cal.App.4th 31, 42 [72 Cal.Rptr.2d 246]; People v. Axell (1991) 235 Cal.App.3d 836, 868 [1 Cal.Rptr.2d 411].)
In People v. Axell, supra, 235 Cal.App.3d 836, the appellant challenged the database used for statistical analysis of a profile match resulting from RFLP-based DNA testing. She claimed, among other things, that Cellmark’s procedure in determining a profile match and in calculating the statistical probability of that match (using an allegedly inadequate database) did not conform to the accepted methodology. The court held that the procedures used were those generally accepted as reliable in the scientific community. (Id. at p. 862.) As to the matching procedures, the court stated: “Since expert testimony established that Cellmark takes into account a margin of error in its measurement, whether it could or did change its measuring procedures to make them more accurate would appear to go to the weight of the evidence more than its admissibility.” (Id. at p. 864.)
With regard to the statistical procedures, the court explained that expert witnesses had testified that the database was adequate and acceptable within the scientific community, that other courts had recognized that conservative calculations such as those used by Cellmark may correct any HW deviation problems, and that the loci used were in linkage equilibrium. (People v. Axell, supra, 235 Cal.App.3d at pp. 867-868.) The court concluded: “Where the evidentiary foundation is adequate and statistical independence of the characteristics at issue adequately proved, objection to statistical conclusions goes to weight rather than admissibility. [Citation.] [¶] Thus, the prosecution showed that the method used by Cellmark in this case to arrive at its data base and statistical probabilities was generally accepted in the scientific community. Any question or criticism of the size of the data base or the ratio pertains to weight of the evidence and not to its admissibility.” (Id. at p. 868.)
Similarly, in People v. Wright, supra, 62 Cal.App.4th 31, the appellant contended the PCR samples might have been contaminated or confused, and that laboratory procedures should have been more rigorous or controlled. The court stated: “ ‘ “Once the court acts within its discretion and finds the witness qualified, as it did in this case, the weight to be given the testimony is for the jury to decide.” [Citation.] ’ [Citation.] The objections here simply went to the weight, not the admissibility, of this evidence.” (Id. at p. 42.)
In People v. Venegas, supra, 18 Cal.4th 47, the Supreme Court held that the Court of Appeal had erred by failing to uphold one of the trial court’s factual conclusions and by failing to overturn another. On one hand, Venegas determined that the appellate court should have affirmed the trial court’s implied conclusion that a particular expert opinion posed only a question for the jury, which could properly have rejected the opinion in light of substantial contradictory expert testimony. (Id. at p. 91.) On the other hand, Venegas concluded that the appellate court should have found error in the trial court’s determination that the FBI’s failure to follow scientific procedures (by using unduly narrow “bins” in RFLP profiling) was a matter affecting only the weight of the evidence. There was no substantial evidence refuting the expert opinion that the bins were too narrow. The court concluded: “Accordingly, the trial court erred in failing to recognize and rule, based on the testimony presented at the Kelly hearing below, that . . . the FBI did not follow correct scientific procedures when it calculated [the] random-match probability . . . . There was no substantial evidence upon which to base a contrary conclusion, and therefore the trial court abused its discretion in not excluding the flawed statistical evidence. [Citations.]” (Id. at p. 93.)
Applying Venegas here, we must review the trial court’s factual finding— that proper scientific procedures were followed—for abuse of discretion.
D. Brown’s Contentions
1. Cellmark’s Database
Brown challenges Cellmark’s use of its African-American database to calculate the frequencies of each of his individual genotypes, which were subsequently multiplied together to arrive at the overall probability of his DNA profile. He argues the prosecution’s expert witnesses established the deficiencies of the database. Brown concludes “[t]he deficiencies in the database deprive the figures generated by Colombo of any scientific validity. It was therefore error for the trial court to overrule the defense objection to the admission of the DNA evidence.”
The determination of a match’s statistical significance (including HW and linkage equilibria), like the other procedural steps of DNA profiling, is subject to Kelly’s third prong analysis because of its complexity. (People v. Venegas, supra, 18 Cal.4th at pp. 83-84.) The court in Venegas explained: “ ‘To . . . leave it to jurors to assess the current scientific debate on statistical calculation as a matter of weight rather than admissibility, would stand Kelly-Frye on its head. We would be asking jurors to do what judges carefully avoid—decide the substantive merits of competing scientific opinion as to the reliability of a novel method of scientific proof. . . . The result would be predictable. The jury would simply skip to the bottom line—the only aspect of the process that is readily understood—and look at the ultimate expression of match probability, without competently assessing the reliability of the process by which the laboratory got to the bottom line. This is an instance in which the method of scientific proof is so impenetrable that it would “ ‘. . . assume a posture of mystic infallibility in the eyes of a jury . . . .’ [Citation.]” [Citations.]’ [Citation.]” (People v. Venegas, supra, 18 Cal.4th at pp. 83-84.)
a. Disequilibria
Brown asserts that Weir’s report acknowledged Cellmark’s African-American database departed from independence at the LDLR locus, depriving the database of HW equilibrium. Brown complains: “Despite the problems noted by Dr. Weir with the LDLR locus, the site is referenced in each table for the African-American database”; “The database was not in Hardy-Weinberg equilibrium, and a site examined by the polymarker test was not independent from the remaining locations”; and “The database was not in Hardy-Weinberg equilibrium [citation], and the LDLR locus tested in this case was not shown to be independent of the remaining locations [citation]. Profile frequency calculations grounded upon this dubious foundation cannot be construed as being obtained by generally accepted and correct scientific procedures.” Brown also maintains that Colombo’s testimony that weaknesses in the database were compensated for by the use of NRCII’s Recommendation 4.1 “does not appear to be correct” and the correction Colombo described “would seem to be illusory.”
1. Hardy-Weinberg
The evidence established that use of NRCII’s Recommendation 4.1 compensates for deviations from HW equilibrium, particularly when the more conservative value is used in the correction. In his calculations, Weir employed NRCII’s Recommendation 4.1 to compensate for the departures from HW equilibrium he observed at the two loci in Cellmark’s Caucasian and African-American databases. For the LDLR locus in the African-American database, the locus pertinent to this case, Brown was found to have a homozygous genotype of AA. According to Weir, the observed allele frequency (the p value) of the A allele in the African-American database was 0.270. Applying the HW expected proportion of AA = p², the expected frequency of AA is 0.073, as stated in Table 4a of Weir’s report. If, on the other hand, Recommendation 4.1’s correction is applied using the most conservative (i.e., favoring the defendant) θ value of 0.03, the expected frequency of AA is p² + p(1 - p)θ, resulting in the following calculation:
AA = 0.073 + 0.270(1 - 0.270)(0.03)
= 0.079
This is the result stated by Weir in table 4a under the heading NRC 4.1.
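By way of illustration only, the arithmetic just described can be reproduced in a few lines of Python; the values p = 0.270 and θ = 0.03 are those quoted above from Weir’s report, and the rounding to three decimal places is our own assumption.

# Sketch reproducing the figures quoted from Weir's report (illustrative only).
p = 0.270      # observed frequency of the A allele at the LDLR locus
theta = 0.03   # conservative NRCII correction factor for PCR-based systems

hw_expected = p ** 2                       # Hardy-Weinberg expectation for the AA homozygote
corrected = p ** 2 + p * (1 - p) * theta   # Recommendation 4.1 corrected frequency

print(round(hw_expected, 3))   # 0.073
print(round(corrected, 3))     # 0.079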
When the Cellmark analyst calculated Brown’s profile probability, the analyst assigned Brown’s LDLR homozygous AA genotype the frequency of 0.079, the NRCII-recommended value most generous to Brown. This procedure faithfully followed that suggested by Recommendation 4.1 and calculated by Weir. As Fedor testified, Cellmark properly applied Weir’s corrected data in their calculations.
Brown’s criticism of Colombo’s expert testimony that weaknesses in the database were corrected by use of NRCII’s Recommendation 4.1 is ill-founded. Colombo testified that Recommendation 4.1 corrects for substructuring within ethnic groups, which causes departures from HW equilibrium, *651and thereby prevents artificial overestimation of a profile’s rarity. Brown assesses Colombo’s opinion as incorrect and illusory solely because People v. Soto, supra, 21 Cal.4th 512 noted that NRCII “explicitly approves use of the product rule in calculating match frequencies. ([NRCII, supra,] at p. 122 [‘Recommendation 4.1: In general, the calculation of a profile frequency should be made with the product rule.’].)” (People v. Soto, supra, 21 Cal.4th at p. 539.) From this, Brown apparently concludes Recommendation 4.1 only endorses the product rule.
The problem with Brown’s stance is threefold. First, Soto discussed whether use of the product rule was appropriate in the case of substructure (a once lively but currently resolved controversy) and referred to Recommendation 4.1 in that context. Second, NRCII’s chapter 4, the source of Recommendation 4.1, addresses not only the product rule but also the correction measures required for population substructure and database size. Chapter 4, entitled Population Genetics, concludes with Recommendation 4.1 (and three other recommendations). The authors introduced chapter 4 with the following summary: “Much of the controversy about the forensic use of DNA has involved population genetics. In this chapter, we first explain the principles that are generally applicable. We then consider the special problem that arises because the population of the United States includes different population groups and subgroups with different allele frequencies. We develop and illustrate procedures for taking substructure into account in calculating match probabilities. We then show how those procedures can be applied to VNTRs and PCR-based systems.” (NRCII, supra, at p. 89, italics added.)42
This summary was followed with equation 4.4a, which was incorporated into Recommendation 4.1 at the end of the chapter. As a note to Recommendation 4.1, NRCII suggested that “[a] more conservative value of θ = 0.03 might be chosen for PCR-based systems in view of the greater uncertainty of calculations for such systems because of the less extensive and less varied population data than for VNTRs.” (NRCII, supra, at p. 122, italics added; see also id. at p. 119.) Colombo’s testimony therefore agrees with NRCII’s explanation of the reasons for using Recommendation 4.1. Fedor’s testimony also endorsed use of Recommendation 4.1 for these purposes. Fedor, in fact, went further than Colombo by explicitly stating the correction factor also compensated for problems arising from smaller databases.
*6522. Linkage
We believe Brown contends the LDLR locus caused not only a departure from HW equilibrium but also a departure from linkage equilibrium. We base this deduction on these two statements by Brown: “a site examined by the polymarker test was not independent from the remaining locations” and “the LDLR locus tested in this case was not shown to be independent of the remaining locations.” For support, Brown cites only to a page of Weir’s report, which states: “There are occasional departures from independence.”
Weir’s report, however, provides no evidence to support the proposition that the LDLR locus departs from linkage equilibrium. As revealed by “Two-locus Testing,” Weir found “no evidence of an association between the four allelic frequencies” when sets of two loci were tested for independence of each of the four alleles at those two loci. Brown fails to refer to this analysis by Weir, which explicitly tested for linkage disequilibria and found none. Furthermore, we find no other record evidence supporting Brown’s theory.
b. Population Not Randomly Collected
Brown next argues certain geographical locations were overrepresented in Cellmark’s African-American database and thus the database was not representative of the relevant population. In particular, he notes 9 of the 100 samples in the database were taken from the Baton Rouge, Louisiana, area. He presumably relies on Fedor’s testimony that overrepresentation of a geographical area can potentially skew the statistics of a small database. Presented with a hypothetical identical to the present case, Fedor explained: “What I would suggest is that initially we might have some concern as to whether African-Americans in Baton Rouge happen to constitute a close family grouping. Perhaps because for generations African-Americans have not left Baton Rouge or other African-Americans have not come in to Baton Rouge from elsewhere in the country. That would suggest a common ancestral heritage for African-Americans in Baton Rouge.”
Thus Fedor identified the questions relevant to determining whether geographical overrepresentation caused allele overrepresentation. The former, however, does not inevitably lead to the latter. Indeed, testimony established most alleles are randomly distributed over geographical regions unless substructure exists in the population, as Fedor explained. If it could have been shown that substructure based on common ancestry existed in the African-American population of Baton Rouge, then he would have been concerned about overrepresentation of certain alleles in the database. Here, however, *653there was absolutely no evidence offered to suggest common ancestry existed in Baton Rouge, and Fedor’s response to the hypothetical cannot support Brown’s conclusion.
c. Inadequate Size
Brown next complains that the 100-person database was too small. Testimony, however, demonstrated that many authorities consider a 100-person database adequate. Fedor, whose company used much larger databases, believed any concerns due to the size of Cellmark’s database were alleviated by the correction factor (Recommendation 4.1) recommended by NRCII and used by Weir. Colombo testified the database was adequate because larger databases would produce the same allelic proportions. She stated a 50-person database could suffice. No evidence countered the testimony that the database size was adequate.43
2. Failure to Call Second DNA Analyst
Brown contends the prosecution’s failure to produce DNA analyst Grossweiller to testify as to the PCR testing she performed in this case violated Kelly’s third prong because it was impossible to determine whether Grossweiller used proper scientific procedures.
Colombo testified Grossweiller followed proper and accepted scientific procedure in her testing. Colombo examined Grossweiller’s report, which, as she explained, was a business record kept in the routine business of Cellmark. (People v. Parker (1992) 8 Cal.App.4th 110, 115-117 [10 Cal.Rptr.2d 38].)44 Our appellate record discloses no evidence to suggest Colombo’s testimony regarding Grossweiller’s tests was unreliable or that Grossweiller’s lab notes were unreliable as business records.
We agree with the conclusion of a New Jersey court: “We reject the notion that the tests were rendered inadmissible because of the State’s failure to call Dr. Blake’s assistant, Ms. Mihalovich. Dr. Blake was permitted to rely on facts or data made known to him prior to his testimony if of a type *654reasonably relied on by experts in forming and rendering opinions upon the subject in question. [Citation.] Indeed, an expert’s testimony may be based on the work done or even hearsay evidence of another expert, particularly when, as here, the latter’s work is supervised by the former. [Citation.]” (State v. Dishon (1997) 297 N.J. Super. 254, 280-281 [687 A.2d 1074, 1087].)
3. Conclusion
As we have explained, the question of whether proper procedures were used to test the DNA and to determine the profile’s statistical significance is a matter going to admissibility, not weight. The trial court found the expert testimony demonstrated that proper scientific procedures had been followed in Brown’s case, thereby satisfying Kelly's third prong and permitting admission of the evidence. The court determined the remaining questions—as to Grossweiller’s DNA testing and the database’s equilibria, randomness, and size—were questions for the jury to consider in weighing the evidence. When reviewing the trial court’s rulings, we defer to the court’s resolutions of credibility and findings of fact. (People v. Glaser (1995) 11 Cal.4th 354, 362 [45 Cal.Rptr.2d 425, 902 P.2d 729].)
Here, the trial court did not err in finding that the prosecution made the necessary foundational showing that Cellmark was implementing the proper procedures when Grossweiller performed PCR testing of Brown’s DNA and when analysts used Cellmark’s African-American database for statistical interpretation of the data. Expert testimony established that Grossweiller followed proper laboratory procedures, that a 100-person database is acceptable among various authorities in the field and can be used to generalize to the entire relevant population, and that Recommendation 4.1 ensures that the frequencies of rare events are not underestimated due to substructure or small database size and that the frequencies are effectively independent. Whether these proper procedures could have been made more accurate goes to weight rather than admissibility. (People v. Wright, supra, 62 Cal.App.4th at p. 42; People v. Axell, supra, 235 Cal.App.3d at pp. 864, 868.)
For the reasons we have discussed, therefore, we find no abuse of discretion in the trial court’s ruling. (People v. Venegas, supra, 18 Cal.4th at p. 93; People v. Reilly (1987) 196 Cal.App.3d 1127, 1155 [242 Cal.Rptr. 496].) The finding of admissibility is amply supported by substantially uncontroverted evidence.
*655II*
Disposition
The judgment is affirmed.
Buckley, J., and Wiseman, J., concurred.
Appellant’s petition for review by the Supreme Court was denied November 14, 2001.
All statutory references are to the Penal Code unless otherwise noted.
Deoxyribonucleic acid.
The victim went to the hospital for a sexual assault examination. Brown had not used a condom when he had intercourse with the victim. Analysis revealed the presence of semen on the inside and outside of the victim’s vagina. The profile of the DNA from this semen matched Brown’s DNA profile. The DNA analysis and statistical interpretation of the DNA profile were performed by a company known as Cellmark Diagnostics (Cellmark).
PCR procedures for typing DNA are accepted in the scientific community. (People v. Morganti (1996) 43 Cal.App.4th 643, 666 [50 Cal.Rptr.2d 837] [DQ alpha]; People v. Allen (1999) 72 Cal.App.4th 1093, 1100 [85 Cal.Rptr.2d 655] [STR].) We note, however, that most case law describes the older RFLP (restriction fragment length polymorphism) procedures rather than PCR procedures. (E.g., People v. Venegas, supra, 18 Cal.4th 47.)
There are a few exceptions, the two most significant being red blood cells and sex cells. Red blood cells contain no nucleus and therefore no chromosomes. Egg and sperm cells contain half the number of chromosomes of the rest of the body’s cells, so that upon fertilization the complete number of chromosomes will be restored rather than doubled. Blood can be used to test a person’s DNA because white blood cells contain DNA; sperm cells can be used because enough cells are tested that collectively the entire complement of DNA is represented. (National Research Council, Com. on DNA Forensic Science: An Update, The Evaluation of Forensic DNA Evidence (1996) p. 12 (hereafter NRCII).)
The physical characteristic exhibited by the library’s owner generally depends on the dominance or recessiveness of those two descriptions. Paragraphs describing a physical characteristic such as eye color, or describing a particular cellular product or function, are called genes. By definition, they contain a discrete amount of text sufficient to describe a particular thing or function.
Identical twins, however, share essentially identical DNA.
This, of course, assumes there was no error in handling of evidence or in laboratory procedure and analysis.
This probability is often called the random match probability.
“A determination that the DNA profile of an evidentiary sample matches the profile of a suspect establishes that the two profiles are consistent, but the determination would be of little significance if the evidentiary profile also matched that of many or most other human beings. The evidentiary weight of the match with the suspect is therefore inversely dependent upon the statistical probability of a similar match with the profile of a person drawn at random from the relevant population.” (People v. Venegas, supra, 18 Cal.4th at p. 82.)
Several forensic laboratories in the United States create and utilize their own databases.
If the ethnicity of the suspect is known, an ethnic database should be used; if the ethnicity is not known, a mixed database should be used. (Recommendation 4.1, NRCII, supra, at p. 122.)
Probabilities are also often represented in decimal form.
Obviously, there are situations in which the result of the product rule calculation exceeds the size of the particular population on earth. In that case, the result must be viewed in its alternative sense—the numerical probability that a person randomly chosen from that population will possess the same genetic profile.
The HW equilibrium formula predicts the frequency of genotypes in a population as follows: p² + 2pq + q² = 1, where p = frequency of the A allele, q = frequency of the B allele, p² = frequency of the AA homozygous genotype, q² = frequency of the BB homozygous genotype, and 2pq = frequency of the AB heterozygous genotype.
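A minimal sketch in Python, using a hypothetical allele frequency rather than any figure from the record, shows that the three genotype frequencies predicted by the formula necessarily sum to 1:

# Hypothetical two-allele locus; p and q are not taken from the record.
p = 0.6              # frequency of the A allele
q = 1 - p            # frequency of the B allele
freq_AA = p ** 2     # expected frequency of the AA homozygote
freq_BB = q ** 2     # expected frequency of the BB homozygote
freq_AB = 2 * p * q  # expected frequency of the AB heterozygote
assert abs(freq_AA + freq_AB + freq_BB - 1.0) < 1e-12  # the three frequencies sum to 1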
Random mating, assumed by HW equilibrium, refers to a choice of mate that is independent of ancestry and independent of genotypes at relevant loci. In other words, it assumes people do not choose each other based on their genotypes at the loci used for forensic testing.
“The NRC is a private, nonprofit society of distinguished scholars that is administered by the National Academy of Sciences, the National Academy of Engineering and the Institute of Medicine.” (People v. Soto, supra, 21 Cal.4th at p. 536, fn. 30.) Many courts have placed great reliance on the NRC reports (e.g., People v. Venegas, supra, 18 Cal.4th at p. 89 [“Indeed, ‘courts have recognized that “the [NRC] is a distinguished cross section of the scientific community. . . . Thus, that committee’s conclusion regarding the reliability of forensic DNA typing . . . and the proffer of a conservative method for calculating probability estimates can easily be equated with general acceptance of those methodologies in the relevant scientific community.” [Citation.]’ [Citation.]”]), but both the NRC reports and courts’ reliance upon them have been criticized extensively by the scientific community (e.g., Wright, DNA Evidence: Where We’ve Been, Where We Are, and Where We Are Going (1995) 10 Me. B.J. 206, 210 [“[P]erhaps comforted by the convenience of having the NRC Report to cite, some *632courts failed to perceive that they were relying on a study which was, as to its method for estimating the expected rarity of a DNA profile, not truly scientific.”]; Kaye, DNA, NAS, NRC, DAB, RFLP, PCR, and More: An Introduction to the Symposium on the 1996 NRC Report on Forensic DNA Evidence (1997) 37 Jurimetrics J. 395, 404 [“One lesson to be drawn is that NRC II cannot be assumed to be correct merely because it is a consensus report of a respected organization. And, just as the source of an opinion is not a guarantee of its truth, neither is the certitude with which it is expressed. In airing a range of views on the success and failures of NRC II, Jurimetrics shares the aspiration of the authors of that report—to contribute to a wider and deeper understanding of DNA evidence.”]; Balding, Errors and Misunderstandings in the Second NRC Report (1997) 37 Jurimetrics J. 469).
NRC, Committee on DNA Technology in Forensic Science, DNA Typing: Statistical Basis for Interpretation (1992).
We note that, in general, much of NRCII’s discussions refer to VNTR (variable number of tandem repeats) testing (which typically refers to RFLP-based tests), with explicit and separate mention of PCR-based tests.
The NRCII report suggests that the 1992 report, NRCI, “place[d] too much emphasis on formal statistical significance. In practice, statistically significant departures are more likely to be found in large databases because the larger the sample size, the more likely it is that a small (and perhaps unimportant) deviation will be detected; in a small database, even a large departure might not be statistically significant. . . . [O]ur approach is different. We explicitly assume that departures from HW proportions exist and use a theory that takes them into account. But . . . we expect the deviations to be small.” (NRCII, supra, at pp. 97-98.)
NRCII’s Recommendation 4.1 states in relevant part: “Recommendation 4.1: In general, the calculation of a profile frequency should be made with the product rule. If the race of the person who left the evidence-sample DNA is known, the database for the person’s race should be used; if the race is not known, calculations for all racial groups to which possible suspects belong should be made. . . . For systems in which exact genotypes can be determined [such as PCR-based systems], p² + p(1 - p)θ should be used for the frequency at such a locus *633instead of p². A conservative value of θ for the US population is 0.01; for some small, isolated populations, a value of 0.03 may be more appropriate. . . . [2pq] should be used for heterozygotes.” (NRCII, supra, at p. 122.)
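The following Python sketch illustrates how Recommendation 4.1’s single-locus formulas combine with the product rule; the loci, genotypes, and allele frequencies shown are hypothetical (they are not the figures from this case), and θ = 0.03 is the conservative value discussed above.

THETA = 0.03  # conservative correction factor suggested for PCR-based systems

def genotype_frequency(p, q=None, theta=THETA):
    # Homozygote (q is None): p**2 + p*(1 - p)*theta, per Recommendation 4.1.
    # Heterozygote: 2*p*q.
    if q is None:
        return p ** 2 + p * (1 - p) * theta
    return 2 * p * q

# Hypothetical per-locus allele frequencies; q is None for homozygous genotypes.
loci = [(0.270, None), (0.15, 0.40), (0.33, 0.21)]

profile_frequency = 1.0
for p, q in loci:
    profile_frequency *= genotype_frequency(p, q)  # product rule across independent loci

print(profile_frequency)       # estimated frequency of the full profile
print(1 / profile_frequency)   # the same estimate expressed as "1 in N"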
NRCII also noted that “correcting for population structure should make little difference, and the procedures outlined in [the NRCII report] can be expected to give fair estimates of the range of uncertainty in population and subpopulation frequency estimates for discrete allele systems.” (NRCII, supra, at p. 188.)
NRCII notes that “comparisons between [short tandem repeats of] geographical and racial groups show similarities and differences comparable to those of VNTRs.” (NRCII, supra, at p. 34.)
For instance, with larger samples the agreement with HW proportions is expected to be better. (NRCII, supra, at p. 93.)
Editorial (1992) 52 Forensic Sci. Internat. 125-130.
Sensabaugh, Biochemical Markers of Individuality in Forensic Science Handbook (1982) at page 391.
NRCI, supra, at pages 74-96.
NRCII, supra, at pages 20-26.
Lander and Banbury, DNA Technology and Forensic Science (1989) at pages 143-156.
Devlin et al., Statistical Evaluation of DNA Fingerprinting: A Critique of the NRC’s Report (Feb. 5, 1993) 259 Sci. 748, 749, 837.
Devlin et al., Comments on the Statistical Aspects of the NRC’s Report on DNA Typing (1994) 39 J. Forensic Sci. 28-40.
Lander and Budowle, DNA Fingerprinting Dispute Laid to Rest (1994) 371 Nature 735-738.
Weir, Population Genetics in the Forensic DNA Debate (1992) 89 Proc. Nat. Acad. Sci. USA 11654-11659.
Pacek et al., Determination of Allele Frequencies at Loci with Length Polymorphism by Quantitative Analysis of DNA Amplified from Pooled Samples (1993) 2 PCR Meth. Appl. 313-317.
Harding and Swanson suggest a database does indeed reach a size at which its allele frequencies no longer change (i.e., their plotted frequencies level out). At that point, the database is likely adequate to offer reliable allele frequency estimates. The required database size will vary with the loci tested in the database. For example, one set of seven loci tested by Harding and Swanson achieved level plots, and presumably adequate database size, at 100 persons; another set of eight required 170 to 180 people. (DNA Database Size, supra, 41 J. Forensic Sci. at pp. 248-249.)
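The “leveling out” Harding and Swanson describe can be illustrated with a simple Python simulation; the true allele frequency of 0.27 and the sample sizes below are hypothetical, chosen only to show how a running estimate fluctuates less as the database grows.

import random

random.seed(1)
TRUE_FREQ = 0.27   # hypothetical population frequency of one allele
hits = 0
for person in range(1, 401):
    # Each sampled person contributes two alleles at the locus.
    hits += sum(1 for _ in range(2) if random.random() < TRUE_FREQ)
    if person % 50 == 0:
        estimate = hits / (2 * person)
        print(person, round(estimate, 3))  # the estimate settles near 0.27 as the sample grows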
Some sources suggest calculating a confidence interval (similar to a margin of error) is helpful to qualify a profile’s probability. (E.g., NRCII, supra, at pp. 34, 112, 125 [“If the database is small, the values derived from it can be uncertain even if it is compiled from a scientifically drawn sample; this can be addressed by providing confidence intervals on the estimates.”], 146, 156; Reference Guide on DNA Evidence, supra, at p. 557 [“[J]ust as pollsters present their results within a certain margin of error, the expert should be able to explain the extent of the statistical error that arises from using samples of the size of the forensic database.” (Fn. omitted.)].) NRCII stated: “It is probably safe to assume that within a race, the uncertainty of a value calculated from adequate databases ... by the product rule is within a factor of about 10 above and below the true value. If the calculated profile probability is very small, the uncertainty can be larger, but even a large relative error will not change the conclusion.” (NRCII, supra, at p. 156.) A smaller database produces a wider confidence interval; the width of the interval on the log scale is inversely proportional to the square root of the size of the database. (Id. at p. 147.) Other scientists have argued that confidence limits are not necessary “because the procedure for allele estimation is sufficiently ‘conservative’ to overestimate the frequency of matching alleles even without such a correction.” (Thompson, Evaluating the Admissibility of New Genetic Identification Tests: Lessons from the “DNA War” (1993) 84 J. Crim. L. & Criminology 22, 67, fn. omitted.)
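By way of illustration only, and using an ordinary normal-approximation binomial interval rather than the log-scale factor NRCII describes, a margin of error for an allele frequency estimated from a 100-person database might be computed as follows; the estimated frequency of 0.27 is hypothetical.

import math

n_people = 100
n_alleles = 2 * n_people   # each person contributes two alleles at the locus
p_hat = 0.27               # hypothetical allele frequency estimated from the database

# Approximate 95% confidence interval for the allele frequency.
se = math.sqrt(p_hat * (1 - p_hat) / n_alleles)
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(round(lower, 3), round(upper, 3))   # roughly 0.21 to 0.33 under these assumptions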
Also sometimes called DQ alpha.
Weir also commented that, in cases of deviations from independence, “an alternative to NRC Recommendation 4.1 is to use observed genotypic proportions at single loci. This approach could be adopted for LDLR [low density lipoprotein receptor] in the African American database and GC [group-specific component] in the Caucasian database.” Weir did not apply this approach, but followed the NRC guidelines.
The LDLR and GC loci are associated with functional genes and the variability between races is greater. (NRCII, supra, at pp. 118-119.)
Weir’s report indicates 103 Caucasians were used.
The “coancestry coefficient” to which Fedor refers is the correction factor of Recommendation 4.1.
Frye v. United States (D.C. Cir. 1923) 293 Fed. 1013 [34 A.L.R. 145].
See also Modern Scientific Evidence, supra, at page 646 (“the dispute about the ‘product rule’ centers on the degree of population structure and the effect that it could have”); NRCII, supra, at page 102 (“We can deal with a structured population by using a theory that is very similar to that of inbreeding. . . .”).
There was no evidence suggesting that use of confidence intervals was necessary.
“The trial court was obviously satisfied as to the trustworthiness of the laboratory reports prepared by Brenda Smith, a criminalist employed by the Kern County Regional Laboratory, based on the testimony of Jeanne Spencer identifying the reports and detailing the tests and procedures used by all criminalists employed by the laboratory, that said procedures were standard in the industry, and that Ms. Smith’s notes indicated she had followed the normal procedures. The court’s implied finding of trustworthiness is amply supported by sufficient evidence independent of the reports themselves. The trial court did not abuse its discretion by admitting the reports [under the business record exception to the hearsay rule].” (People v. Parker, supra, 8 Cal.App.4th at p. 117.)
See footnote, ante, page 623.