¶ 46. dissenting. I cannot agree with the majority’s decision to affirm the trial court’s conclusion that summary judgment was appropriate after it improperly excluded claimant’s expert opinions. In addition, where the trial court appeared to be confused about the meta-analysis, it should have held a Daubert hearing or at least engaged in further review of the submitted materials. I would reverse and remand this case.
¶ 47. It is telling that the concurring opinion summarizes the majority as deciding this appeal “more as an adequacy-of-proof case than an admissibility-of-evidence case.” Ante, ¶ 42. That statement is unfortunately true: both the trial court and the majority have exceeded their proper roles in this case and evaluated the evidence put forward by claimant to determine whether claimant should ultimately prevail on the merits. As the concurrence states, the trial court and the majority have concluded that “the evidence was inadequate.” Id. The problem is that this is a merits determination that should have been put to the jury. If the concurring opinion is correct that this is about the adequacy — not the admissibility — of the evidence, then on summary judgment we must view all of that evidence “in a light most favorable to” claimant. In re Carroll, 2007 VT 73, ¶ 8, 182 Vt. 571, 933 A.2d 193 (mem.) (quotation omitted). Viewing the evidence this way, we would have no choice other than to accept as true the expert opinions of the two highly qualified medical doctors who have explicitly stated that it is more likely than not that claimant’s non-Hodgkin’s lymphoma was caused by his firefighting. Thus, insurer cannot possibly prevail on summary judgment if this case is analyzed in terms of whether a reasonable jury could find the evidence adequate.
¶ 48. The only way that insurer could prevail on summary judgment is if the expert opinions of both of claimant’s medical doctors are held to be inadmissible. Perhaps it is the foundation for the medical doctors’ opinions that the majority and the concurrence find “inadequate.” Ante, ¶ 42. Regarding that question — a question of admissibility — the only way to dismiss the medical doctors’ opinions here would be if there were “too great an analytical gap between the data and the opinion[s] proffered.” Gen. Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997). This is the crux of the issue. The trial court held that the gap was too great here because claimant did not have studies meeting the 2.0 relative risk standard. I agree that the 2.0 standard corresponds with the ultimate issue that must be decided on the merits: whether it is more likely than not that claimant’s non-Hodgkin’s lymphoma was caused by firefighting. The problem is that it is not the standard for admissibility.
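To make concrete why the 2.0 figure tracks the merits standard — what follows is an illustration of the standard epidemiological arithmetic, not a calculation drawn from the record — suppose that an exposure multiplies the background rate of a disease by a relative risk RR. The share of disease among exposed persons that is attributable to the exposure is then commonly estimated as

\[
\frac{RR - 1}{RR}, \qquad \text{which exceeds } \frac{1}{2} \text{ exactly when } RR > 2.
\]

That equivalence is the reason a doubling of the risk is said to correspond to “more likely than not” at the merits stage; it says nothing about whether a given study is reliable enough to be admitted in the first place.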
¶ 49. The standard for admissibility is whether there is too great a gap between the studies offered and the medical doctors’ opinions based in part on those studies. Id. But here, there is no gap at all: two of the studies relied upon by the doctors — the Figgs study and the Sama study — show statistically significant results that meet even the trial court’s strict 2.0 admissibility standard.14 Those studies directly support the doctors’ conclusions that it is more likely than not that claimant’s non-Hodgkin’s lymphoma was caused by firefighting. The doctors’ opinions are therefore admissible. That should be the end of the admissibility analysis. But even where there is a gap between the studies and the doctors’ opinions, as there is for those studies which show a relative risk of less than 2.0, that gap is more than filled here by specific knowledge about claimant that makes it more likely that claimant’s non-Hodgkin’s lymphoma was caused by firefighting. Finally, to the extent that the trial court believed that there still existed an analytical gap, the gap was “of the . . . court’s making” for failing to hold a Daubert hearing or at least engage in further review of the submitted materials to clear up the court’s confusion. Kennedy v. Collagen Corp., 161 F.3d 1226, 1230 (9th Cir. 1998). The materials put forward by claimant, even if not all admissible, included numerous admissible studies and admissible factual details about claimant that provided a more-than-adequate foundation for the medical doctors’ opinions regarding specific causation. Those opinions therefore should have gone to the jury, and the trial court should not have dismissed this case on summary judgment.
¶ 50. The trial court’s summary judgment decision is premised on its erroneous exclusion of claimant’s expert testimony linking claimant’s firefighting service to non-Hodgkin’s lymphoma. It is undisputed that claimant’s three experts here — one epidemiologist and two medical doctors — are well qualified. Indeed, they are arguably some of the most qualified experts in their respective fields.15 The conclusions of claimant’s experts — relating claimant’s death from non-Hodgkin’s lymphoma to his forty years fighting fires — are not a type of “junk science.” 985 Assocs., Ltd. v. Daewoo Elecs. Am., Inc., 2008 VT 14, ¶ 10, 183 Vt. 208, 945 A.2d 381. To the contrary, these opinions rely on sound methodologies and are in line with what has become generally accepted in the scientific community.16
¶ 51. The opinions offered by claimant’s experts were based on numerous statistically significant scientific studies with confidence intervals for relative risk entirely above 1.0. Those studies, published in peer-reviewed scientific journals, are routinely used by experts to determine the issue litigated here. The studies — and the expert opinions based upon them — were therefore both reliable and relevant. The trial court should have permitted this statistically significant evidence to be presented to a jury and then let the jury decide whether it was sufficient to carry claimant’s burden. In excluding the opinion evidence based upon these studies, and in adopting the 2.0 relative risk standard as a test for admissibility, the trial court overstepped its gatekeeper role and decided questions that should have been left to the jury. In addition, in applying the 2.0 relative risk standard, the trial court, without any explanation, ignored at least two statistically significant studies that the experts relied upon that exceeded that same 2.0 standard.
¶ 52. There are several defects in the trial court’s decision, but the main problem is that it ignores Vermont’s limitation on the gatekeeping role of trial courts in evaluating expert testimony. By failing to limit itself to adopting a legal standard for statistical significance, and instead adopting the requirement that each study meet the 2.0 standard — meaning a doubling of the risk, which is the same standard for showing at the merits stage that causation is more likely than not — the trial court improperly thrust itself into a merits determination. This Court has squarely stated that trial courts should not engage in “a preliminary inquiry into the merits of the case.” Daewoo, 2008 VT 14, ¶ 10. Although the trial court claimed to use the 2.0 relative risk standard merely as a “benchmark,” it applied a hard-line 2.0 standard for the admission of any epidemiological study.17 Thus, there was no “benchmark” involved here; to the contrary, the trial court established a misplaced bright-line rule and an improper legal standard for admissibility.
¶ 53. Specifically, the trial court abused its discretion in the following ways: (1) by conducting a preliminary inquiry into the merits of the case and adopting a standard requiring that each piece of evidence be sufficient to make claimant’s entire case; (2) by ignoring the fact that two doctors looked at a number of health-related factors that were peculiar to claimant and based their opinions on these factors; (3) by failing to explain why the standard the court adopted was not met here, at least as to two studies that were both statistically significant and exceeded the court’s 2.0 relative risk standard; and (4) by failing to hold a Daubert hearing or further engage in the submitted materials, especially given the trial court’s apparent confusion about numerous aspects of the proffered testimony regarding the meta-analysis.
I.
¶ 54. The majority is correct that we review trial court decisions excluding evidence for abuse of discretion. USGen New Eng., Inc. v. Town of Rockingham, 2004 VT 90, ¶¶ 21-23, 177 Vt. 193, 862 A.2d 269.18 That said, this Court has specifically noted that “we cannot allow our deferential standard of review to blind us to fundamental misapplications of the Daubert analysis.” Daewoo, 2008 VT 14, ¶ 9. Thus, we have held that reviewing for abuse of discretion does not prevent us from “engag[ing] in a substantial and thorough analysis of the trial court’s decision and order to ensure that the trial judge’s decision was in accordance with Daubert and our applicable precedents.” USGen, 2004 VT 90, ¶ 24 (quotation omitted); cf., e.g., United States v. Pansier, 576 F.3d 726, 737-38 (7th Cir. 2009) (noting that appellate courts do not apply any deference in determining “whether the [trial] court applied the legal framework required under Rule 702 and Daubert”). Further, if there is an “arguable lack of clarity in our case law” — as there is here, where the trial court recognized that this was an issue of first impression — and if the trial court resolved legal questions incorrectly, we must reverse and remand for an application of the correct legal standard. DeYoung v. Ruggiero, 2009 VT 9, ¶ 31, 185 Vt. 267, 971 A.2d 627; see also, e.g., United States v. Snyder, 136 F.3d 65, 67 (1st Cir. 1998) (holding that a per se abuse of discretion occurs when a trial court commits an error of law). In my view, the trial court applied the wrong legal standard and violated several of our applicable precedents when it adopted the 2.0 relative risk standard for determining whether to admit epidemiological studies.
¶ 55. We have previously stated that “the trial court’s inquiry into expert testimony should primarily focus on excluding ‘junk science’ — because of its potential to confuse or mislead the trier of fact — rather than serving as a preliminary inquiry into the merits of the case.” Daewoo, 2008 VT 14, ¶ 10; accord In re JAM Golf, LLC, 2008 VT 110, ¶ 11, 185 Vt. 201, 969 A.2d 47 (“In finding evidence to be reliable, the trial court is not expected to make a substantive decision on the merits of the proponent’s argument but is instead required to make an inquiry into the factual basis and methodology used by the expert witness.” (quotation omitted)); see also, e.g., Kennedy, 161 F.3d at 1228 (finding an abuse of discretion where “the trial court failed to distinguish between the threshold question of admissibility of expert testimony and the persuasive weight to be accorded such testimony by a jury”). Thus, where a trial court “cannot conclude that [an expert’s] testimony is the kind of ‘junk science’ that Daubert meant to exclude,” the evidence should be admitted. JAM Golf, 2008 VT 110, ¶ 11.
¶ 56. The trial court’s role here is limited because evaluating an expert’s “credibility and [the] weight of the evidence [is] the ageless role of the jury.” McCullock v. H.B. Fuller Co., 61 F.3d 1038, 1045 (2d Cir. 1995). Thus, this Court has “emphasize[d] . . . that Daubert presents an admissibility standard only.” USGen, 2004 VT 90, ¶ 19. The Daubert Court itself noted that “[V]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.” Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 596 (1993); see also, e.g., Ruiz-Troche v. Pepsi Cola of P.R. Bottling Co., 161 F.3d 77, 85 (1st Cir. 1998) (“Daubert does not require that a party who proffers expert testimony carry the burden of proving to the judge that the expert’s assessment of the situation is correct. As long as an expert’s scientific testimony rests upon ‘good grounds, based on what is known,’ it should be tested by the adversary process — competing expert testimony and active cross-examination — rather than excluded from jurors’ scrutiny for fear that they will not grasp its complexities or satisfactorily weigh its inadequacies.” (quoting Daubert, 509 U.S. at 590)); McCullock, 61 F.3d at 1044 (“Disputes as to the strength of [an expert witness’s] credentials, faults in his use of differential etiology as a methodology, or lack of textual authority for his opinion, go to the weight, not the admissibility, of his testimony.”). This Court has similarly stated that “to tease out deficiencies of expert testimony, opponents should attack testimony of this nature through the adversarial process,” rather than through excluding the evidence altogether. JAM Golf, 2008 VT 110, ¶ 9.
¶ 57. The trial court’s adoption of the 2.0 relative risk standard as the threshold for admitting evidence of epidemiological studies, with no consideration of a study’s statistical significance, goes far enough in passing judgment on the evidence to amount to an evaluation of the merits of the case, rather than a proper inquiry into the methodology and reliability of the studies used by the experts. Whether it is more likely than not that claimant’s firefighting caused his non-Hodgkin’s lymphoma is the exact fact question that must be resolved on the merits.
¶ 58. The 2.0 standard for admissibility is also problematic because it sets a threshold that requires each study to prove that claimant should win on the merits. By definition, the 2.0 standard only admits each study if that study independently meets the more-likely-than-not standard for proving causation. But we have said: “The admitted evidence does not alone have to meet the proponent’s burden of proof on a particular issue.” USGen, 2004 VT 90, ¶ 19; see also, e.g., In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 744 (3d Cir. 1994) (“The evidentiary requirement of reliability is lower than the merits standard of correctness.”). In Daewoo, we distinguished “[t]he central issue [of] the admissibility of the proffered expert testimony” from “the sufficiency of the evidence in proving plaintiffs’ case.” 2008 VT 14, ¶ 13. By requiring each study to show — on its own — that it is more likely than not that claimant’s cancer was caused by firefighting, the trial court failed to recognize that claimant is free to combine various pieces of evidence to make his case. While for purposes of summary judgment under Vermont Rule of Civil Procedure 56 the trial court’s focus is properly on the sufficiency of the uncontroverted evidence to meet the nonmoving party’s burden of proof, in this instance the trial court’s exclusion of the expert opinions conflated the court’s roles as gatekeeper and as Rule 56 decision-maker, resulting in an improper focus on the sufficiency of the opinion evidence, rather than on its admissibility. Id.
¶ 59. The trial court’s analysis appears to stem in part from a mistaken belief that an epidemiological study that fails to meet the 2.0 relative risk standard is not statistically significant. That is simply not true. Statistical significance and relative risk are two different concepts, and a doubling of the risk is not required for a study to be statistically significant. The Second Circuit Court of Appeals has flatly rejected the idea that even a 1.5 relative risk is required for a study to be statistically significant. In re Joint E. & S. Dist. Asbestos Litig., 52 F.3d 1124, 1134 (2d Cir. 1995). In Joint Eastern, when the district court held that epidemiological studies with a relative risk of less than 1.5 were statistically insignificant, the Second Circuit rejected this “bold assertion” and held that “it would be far preferable for the district court to instruct the jury on statistical significance and then let the jury decide whether . . . studies over the 1.0 mark have any significance in combination.” Id.; see also, e.g., Pick v. Am. Med. Sys., Inc., 958 F. Supp. 1151, 1160 (E.D. La. 1997) (holding that a study with a “relative risk above 1.0 . . . even if not sufficient, by itself” can be used to “establish causation by a preponderance of the evidence”).
¶ 60. In summarizing its holding, the Joint Eastern court noted that the trial court “erred ... in rendering independent assessments of the epidemiological evidence far beyond the role authorized by Daubert; in rejecting all epidemiological studies that yielded [a relative risk] below the unexplained floor of 1.50; . . . and in generally encroaching upon the factfinding role of the jury.” 52 F.3d at 1139. The same can be said for what the trial court did here. Indeed, here the trial court’s error was more grave because it adopted the higher floor of 2.0 and in doing so went “beyond the role authorized by Daubert.” Id.
¶ 61. Scientists usually determine statistical significance by looking at a study’s confidence interval, rather than the exact relative risk arrived at in a particular study. The confidence interval is generally set at a 95% confidence level and is rendered as a range with endpoints on both sides. The lower endpoint represents the lowest possible relative risk (RR) or odds ratio (OR) that appeared (or would be expected to appear) in repeated trials:
[T]he RR or OR has a “confidence interval” around it that expresses how “stable” the estimate is in repeated trials. A 95% confidence interval is the range of numbers that would include the “real” risk 95 times out of 100 if the same study were done over and over again, allowing for random fluctuations of the data inherent in the selection of subjects. Thus, a RR of 1.8 with a confidence interval [between] 1.3 [and] 2.9 could very likely represent a true RR of greater than 2.0, and as high as 2.9 in 95 out of 100 repeated trials.
R. Clapp & D. Ozonoff, Environment and Health: Vital Intersection or Contested Territory?, 30 Am. J.L. & Med. 189, 210 (2004).
¶ 62. The confidence interval is important because it speaks to whether a study is statistically significant. Rather than focusing on the exact relative risk that a study produces, courts engaged in a gatekeeper analysis need to look at a study’s confidence interval. Here, all of claimant’s experts — and the epidemiological studies they relied upon — discussed confidence intervals in great detail in numerous documents submitted to the trial court. This was good science. The Fifth Circuit Court of Appeals has noted that “a study with a relative risk of greater than 1.0 must always be considered in light of its confidence interval before one can draw conclusions from it.” Brock v. Merrell Dow Pharms., Inc., 874 F.2d 307, 312 (5th Cir.) (emphasis added), modified on reh’g, 884 F.2d 166 (5th Cir. 1989) (per curiam). Yet the trial court’s decision here never even mentions confidence intervals.
¶ 63. It is important to look at confidence intervals because doing so is the best way to determine whether a study’s results are statistically significant. “If the confidence interval is so [wide] that it includes the number 1.0, then the study will be said to show no statistically significant association between the factor and the disease.” Id. The Brock court specifically defined a “statistically significant” epidemiological study as “one whose confidence interval [is entirely above and does] not include 1.0.” Id.19
¶ 64. In Daubert, the United States Supreme Court cited Brock with approval as a case where the evidence on causation was insufficient. See Daubert, 509 U.S. at 596. In Brock, all of the studies offered by the plaintiffs had results with a confidence interval that included 1.0 and was therefore too low for statistical significance. The court therefore noted that the “plaintiffs did not offer one statistically significant” study showing an increased risk. Brock, 874 F.2d at 312. The court also found that “[n]o published epidemiological study has found a statistically significant increased risk.” Id. The evidence in the Brock case was therefore not sufficient to permit a trier of fact to make a reasonable inference of causation. Id. at 315. Applying the Brock standard to this case, on the other hand, leads to a different result. Here, claimant has offered numerous statistically significant studies with confidence intervals entirely above 1.0.
¶ 65. The Brock standard is “generally accepted” as the proper way to evaluate whether a study is statistically significant. In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071, 1078-79 (D. Minn. 2008); see also In re Ephedra Prods. Liab. Litig., 393 F. Supp. 2d 181, 199 (S.D.N.Y. 2005) (referring to the “generally accepted standard of a 95% confidence interval above 1.0”). Indeed, the Reference Manual on Scientific Evidence states the same test: “Where the confidence interval contains a relative risk of 1.0, the results of the study are not statistically significant.” M. Green et al., Reference Guide on Epidemiology, in Reference Manual on Scientific Evidence 333, 389 (Federal Judicial Center ed., 2d ed. 2000), available at http://www.fjc.gov. The Reference Manual also states that where “the confidence boundaries ... do not include a relative risk of 1.0, the study does have a positive finding that is statistically significant.” Id. at 361. The reason is that if the 95% confidence interval is entirely above 1.0, then we are at least 95% certain that the agent studied is associated with the disease. Such a study “is ‘statistically significant.’ ” Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1101 (D. Colo. 2006) (citing Green, supra, at 361); accord, e.g., Turpin v. Merrell Dow Pharms., Inc., 959 F.2d 1349, 1353 n.1 (6th Cir. 1992) (“If . . . the confidence interval spans a range entirely above 1.0 . . . then this interval would be statistically significant.”).20
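A pair of illustrative results shows how this test operates and why it is distinct from the trial court’s 2.0 benchmark. The first set of figures is the example given by Clapp and Ozonoff, supra, at 210; the second is hypothetical and is not drawn from any study in this record:

\[
RR = 1.8,\ 95\%\ \text{CI } [1.3,\ 2.9] \;\Rightarrow\; \text{statistically significant, because the entire interval lies above } 1.0;
\]
\[
RR = 2.4,\ 95\%\ \text{CI } [0.9,\ 6.3] \;\Rightarrow\; \text{not statistically significant, because the interval includes } 1.0.
\]

The first result falls below the 2.0 benchmark yet is statistically significant; the second exceeds 2.0 yet is not. Relative risk measures the strength of an association, while the confidence interval measures its statistical stability; the two are not interchangeable.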
¶ 66. The experts here — all indisputably well qualified in their respective fields — used this standard for determining statistical significance. Claimant’s expert Dr. Guidotti explicitly stated that when the lowest number in the confidence interval “does not include 1.0, [it] means that [the study] is statistically significant.” Similarly, claimant’s expert Dr. Lockey stated that “[f]our of the seven [studies] were statistically significant (Burnett, Ma, Figgs and Sama) as the confidence intervals around the risk estimates do not include 1.0.” Claimant’s epidemiologist Dr. LeMasters applied the same standard and also stated that the four studies with confidence intervals above 1.0 were statistically significant.
¶ 67. If the trial court wanted to impose a minimum threshold for admissibility, that threshold should have been to require studies to show a statistically significant relationship. See Brock, 874 F.2d at 312.21 But the trial court failed to address statistical significance as a screening device.
¶ 68. If a study has a confidence interval in a range that is entirely above 1.0, it is statistically significant, and any questions about the strength of the relationship shown by the study go to the study’s weight, not its admissibility. If the trial court applied this standard here, the experts would be allowed to rely on the Burnett, Ma, Figgs, and Sama studies — all of which had a confidence interval entirely above 1.0. The trial court abused its discretion in excluding those studies.
¶ 69. The Supreme Court of Nebraska recently wrestled with the issue of whether to adopt the 2.0 relative risk standard. See King v. Burlington N. Santa Fe Ry., 762 N.W.2d 24 (Neb. 2009). After a thorough review of existing caselaw and the Reference Manual on Scientific Evidence, the King court “decline[d] to set a minimum threshold for relative risk, or any other statistical measurement, above the minimum requirement that the study show a relative risk greater than 1.0.” Id. at 46. The court correctly concluded that “[i]n short, the significance of epidemiological studies with weak positive associations is a question of weight, not admissibility.” Id. at 46-47.
¶ 70. While “acknowledg[ing] that courts disagree on the appropriate relative risk threshold that a study must satisfy to support a general causation theory,” id. at 45, the King court held that those courts that have adopted the 2.0 standard often “failed to distinguish between general causation and its brother, specific causation,” id. at 46. The trial court here made this exact mistake when it excluded expert testimony based upon peer-reviewed studies simply because — in the trial court’s opinion — those studies did not show a sufficiently strong association between firefighting and non-Hodgkin’s lymphoma.
¶ 71. The King court noted that “general causation addresses whether a substance is capable of causing a particular injury or condition in a population, while specific causation addresses whether a substance caused a particular individual’s injury.” Id. at 34 (emphasis added). Because “a plaintiff must show both general and specific causation,” id., evidence that survives the Daubert test is admissible if it speaks to either general or specific causation. See V.R.E. 401 (“ ‘Relevant evidence’ means evidence having any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.”); cf., e.g., In re Neurontin Mktg., Sales Practices, & Prods. Liab. Litig., 612 F. Supp. 2d 116, 158 (D. Mass. 2009) (holding that although “the parties’ experts [might] debate the strength and specificity of the association,” the mere fact of establishing a positive association “alone significantly strengthens the Plaintiffs’ case for admission under Daubert”). Thus, it is common for courts to hold that an expert “is qualified to render an opinion ... as to general causation, but not as to specific causation.” Burke v. TransAm Trucking, Inc., 617 F. Supp. 2d 327, 334 (M.D. Pa. 2009) (quotation omitted). That would have been an appropriate course for the trial court to take here, allowing the epidemiologist Dr. LeMasters to testify that firefighting is one cause of non-Hodgkin’s lymphoma, as this helps make claimant’s case on general causation. See, e.g., id. That is all that Dr. LeMasters proposed to offer in her testimony. The trial court abused its discretion by excluding such testimony altogether merely because it does not speak to specific causation. Cf., e.g., In re Hanford Nuclear Reservation Litig., 292 F.3d 1124, 1133-37 (9th Cir. 2002) (holding that it was reversible error for the trial court to exclude studies showing less than a doubling of the risk when the plaintiff was only trying to prove general causation).
¶ 72. Further, although “epidemiology focuses on general causation rather than specific causation,” King, 762 N.W.2d at 34-35, epidemiological studies can be combined with specific information about an individual to show specific causation, as both of the medical doctors did here. When the King court surveyed the cases that have adopted the 2.0 standard, it found that “epidemiological evidence appears to have been the only evidence supporting specific causation” in those cases. Id. at 46 (emphasis added). The 2.0 standard makes much more sense when a plaintiff is using epidemiological studies alone to prove specific causation.22 But here, as discussed in detail below, claimant’s experts relied on more than just the epidemiological studies.
¶ 73. The trial court abused its discretion by adopting a standard for admissibility that requires each study to make claimant’s entire case. See, e.g., USGen, 2004 VT 90, ¶ 19 (“The admitted evidence does not alone have to meet the proponent’s burden of proof on a particular issue.”). Whenever a trial court “relie[s] on a standard we have determined to be erroneous,” it is an abuse of discretion, and reversal is warranted. Hanford Nuclear, 292 F.3d at 1138.
II.
¶ 74. The trial court’s error in adopting the 2.0 standard stems in part from a misunderstanding of the proffered testimony in this case. The court apparently accepted insurer’s erroneous position that all of claimant’s experts looked only at the epidemiological studies and did nothing to relate those studies to anything particular about claimant. While it is true that the epidemiologist Dr. LeMasters appropriately limited her proposed testimony to the epidemiological studies, each of the medical doctors (Dr. Lockey and Dr. Guidotti) looked at several factors particular to claimant before concluding that it is more likely than not that claimant’s disease was caused by firefighting. As Dr. Guidotti stated, the studies on general causation “inform[] our interpretation of the case, and then we try to bring it down to the particulars of that case, with as much knowledge as we have available.”
¶ 75. Although the trial court is correct that some courts have adopted the 2.0 standard when determining whether to admit epidemiological studies, see, e.g., Merrell Dow Pharms., Inc. v. Havner, 953 S.W.2d 706, 716 (Tex. 1997), those courts that have adopted the 2.0 standard have for the most part done so on the basis of the following passage from the Ninth Circuit’s decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311 (9th Cir. 1995) (Daubert II):
For an epidemiological study to show causation under a preponderance standard, the relative risk of limb reduction defects arising from the epidemiological data will, at a minimum, have to exceed ‘2’. That is, the study must show that children whose mothers took Bendectin are more than twice as likely to develop limb reduction birth defects as children whose mothers did not. While plaintiffs’ epidemiologists make vague assertions that there is a statistically significant relationship between Bendectin and birth defects, none states that the relative risk is greater than two. These studies thus would not be helpful, and indeed would only serve to confuse the jury, if offered to prove rather than refute causation. A relative risk of less than two may [be] suggest[ive] . . . , but it actually tends to disprove legal causation, as it shows that Bendectin does not double the likelihood of birth defects.
Id. at 1321 (quotation omitted). Indeed, the trial court here cited part of this very passage as a rationale for adopting the 2.0 standard. What the trial court failed to appreciate is that the plaintiffs in Daubert II based their claims solely on statistical studies: “plaintiffs’ experts did not seek to differentiate these [particular] plaintiffs from the subjects of the statistical studies.” Id. at 1321 n.16. That is not this case. Here, claimant’s experts differentiate claimant in numerous ways. As the Daubert II court went on to recognize, “[a] statistical study showing a relative risk of less than two could be combined with other evidence to show it is more likely than not that the accused cause is responsible for a particular plaintiff’s injury.” Id. (emphasis added). That is what occurred here. Although insurer argues that claimant’s experts rely solely on statistics, and although the trial court stated that “Dr. Guidotti’s testimony . . . relies solely upon these [epidemiological] studies,” the record flatly contradicts this claim. In developing their expert opinions on causation, both Dr. Lockey and Dr. Guidotti considered a number of facts specific to claimant.
¶ 76. First, both doctors considered claimant’s extraordinarily long forty years of service as a firefighter. Dr. Lockey specifically looked at the fact that claimant “worked as a fireman for forty years.” Similarly, Dr. Guidotti noted that claimant’s forty years of exposure “places him in a high-risk category,” specifically for non-Hodgkin’s lymphoma “among other things.” This deposition testimony in itself is sufficient to allow claimant to argue that the epidemiological studies underestimate the real risk that claimant faced through his firefighting and that even studies showing a relative risk of less than 2.0 can therefore support his claim that firefighting more than doubled his risk of getting non-Hodgkin’s lymphoma. The trial court completely failed to address the fact that the experts in this case rendered opinions that this particular claimant was a firefighter for a much longer period of time than the average firefighter discussed in the studies.
¶ 77. Second, Dr. Lockey and Dr. Guidotti looked at the fact that claimant was likely exposed to more toxins than the average firefighter, since claimant’s firefighting career covered a time when protective equipment was often not used. Dr. Lockey noted that claimant was a firefighter “during a timeframe back in the ’60s and ’70s when control measures more likely than not were not as good as they are currently.” Dr. Guidotti similarly noted that it was not until the 1970s that a self-contained protective breathing apparatus was widely introduced and that even then “relatively few” firefighters actually used such an apparatus. According to Dr. Guidotti, “there was a gap in the 1970s and the early ’80s when firefighters very often were not using their personal protection when fighting fires.”
¶ 78. Third, Dr. Guidotti also looked at the particular type of non-Hodgkin’s lymphoma that claimant contracted. Non-Hodgkin’s lymphoma does not refer to just one disease; rather, it is a large category that includes at least thirty recognized types of lymphoma. Dr. Guidotti noted that only some of those types “are known to be associated with environmental exposures and occupations.” Claimant had small cell lymphoma, which Dr. Guidotti noted is associated with environmental exposures. In particular, it is associated with exposure to solvents, including some of the same chemicals that are “released during firefighting.” Thus, Dr. Guidotti concluded that “the chemicals that are known to be associated with small cell lymphocytic lymphoma seem to be more than likely the kinds of things that one would encounter on the job.” This information was a significant factor leading Dr. Guidotti to state that, although “[s]cientific certainty in the matter is unattainable,” it was his opinion that the evidence favored the conclusion that claimant’s “lymphoma arose from work as a firefighter.”
¶ 79. Finally, both doctors also ruled out other possible causes of claimant’s disease before reaching their ultimate conclusions. Dr. Lockey examined claimant’s medical records and looked at whether there were “any other potential factors as it applies to [claimant] that would be known to be associated with a risk for the occurrence of non-Hodgkin’s lymphoma.” Dr. Lockey concluded that he “could not identify any other known risk factors based on the information that was available to me.”23 Dr. Lockey specifically noted that to his knowledge claimant “apparently did not have an immune deficiency disorder, which is the primary risk. As far as I was aware, he was not HIV positive, which would put him at risk for non-Hodgkin’s lymphoma.” This was a major factor leading Dr. Lockey to conclude “with a reasonable medical probability that [claimant’s] work as a firefighter was the cause of his non-Hodgkin’s lymphoma.” Dr. Guidotti also examined claimant’s medical records and similarly noted that this main risk factor could be ruled out for claimant, since a “severe immune problem . . . would have expressed itself by inability to work.” Because alternative explanations for contracting non-Hodgkin’s lymphoma were ruled out, the trial court should not have excluded the opinions concluding that it was more likely than not that claimant’s disease was caused by firefighting. See Daubert II, 43 F.3d at 1321 n.16; see also, e.g., Clapp & Ozonoff, supra, at 210 (“If it turns out that a particular individual plaintiff with a disease has few or none of these [alternative] risk factors, then a [relative risk] of 1.9 is a serious underestimate of the effects of his or her exposure.” (emphasis added)). Rather, the trial court should have recognized that this case presented the precise type of scenario that the Daubert II court noted could meet the admissibility threshold by combining studies with relative risks of less than 2.0 with other evidence:
[A] statistical study may show that a particular type of birth defect is associated with some unknown causes, as well as two known potential causes — e.g., smoking and drinking. If a study shows that the relative risk of injury for those who smoke is 1.5 as compared to the general population, while it is 1.8 for those who drink, a plaintiff who does not drink might be able to reanalyze the data to show that the study of smoking did not account for the effect of drinking on the incidence of birth defects in the general population. By making the appropriate comparison — between non-drinkers who smoke and nondrinkers who do not smoke — the teetotaller plaintiff might be able to show that the relative risk of smoking for her is greater than two.
43 F.3d at 1321 n.16.
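To see how such a reanalysis can operate, consider a set of purely hypothetical figures of my own construction, which track the structure of the Daubert II example rather than any study in this record. Suppose that, in the comparison population, half of the subjects drink and half do not, with background disease rates of 3 and 1 per 1,000 respectively (a pooled rate of 2 per 1,000), and that smoking multiplies the rate by 2.2 within each group. If only one-fifth of the smokers studied drink, the smokers’ pooled rate is

\[
0.8 \times 2.2 + 0.2 \times 6.6 = 3.08 \text{ per } 1{,}000,
\]

which yields a crude relative risk of roughly 1.5 against the pooled comparison rate — even though, for a nondrinking plaintiff, the apt comparison (2.2 versus 1.0 per 1,000) shows a relative risk of 2.2. A study-wide figure below 2.0 can thus conceal a risk above 2.0 once factors known to be absent in the particular plaintiff are taken into account.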
¶ 80. The trial court failed to recognize that both medical experts relied on numerous factors specific to claimant. The trial court stated that “Dr. Guidotti’s testimony in particular relies solely upon these [epidemiological] studies.” This was clear error. It was an abuse of discretion for the trial court to ignore all of the other factors relied upon by the experts in rendering their opinions. See, e.g., Foster v. Mydas Assocs., 943 F.2d 139, 143 (1st Cir. 1991) (“Abuse of discretion occurs, of course, when a material factor deserving significant weight is ignored.” (quotation omitted)).
¶ 81. Courts have previously held that it is an “accepted methodology” to engage in an “analysis of medical literature and case study comparison with the individual characteristics of the patient’s case to determine” the cause of a disease. Cella v. United States, 998 F.2d 418, 426 (7th Cir. 1993). That is what Dr. Guidotti and Dr. Lockey did here. They looked at numerous epidemiological studies and then applied those studies to the particular facts they knew about claimant. In doing so, the doctors are free to rely on studies that fall below a relative risk of 2.0: “The physician or other such qualified expert may view the epidemiological studies and factor out other known risk factors . . . which might enhance the remaining recognized risks, even though the risk in the study fell short of the 2.0 correlation.” Grassis v. Johns-Manville Corp., 591 A.2d 671, 675 (N.J. Super. Ct. App. Div. 1991). The Grassis court chose not to adopt a 2.0 admissibility standard because “[t]he total basis for the expert’s opinion must be scrutinized.” Id. at 676.
¶ 82. The majority notes that the Baris study found that the level of excess risk of non-Hodgkin’s lymphoma “was not associated with an increased number of lifetime runs, and that, in fact, the standardized mortality ratio was highest in those individuals who made the lowest number of firefighting runs.” Ante, ¶ 33. There are three problems with the majority’s approach here: (1) it does not address any of the other factors particular to this claimant that the doctors relied upon in making their conclusions, such as claimant’s lack of protective safety equipment, lack of other known risk factors, and contraction of a type of non-Hodgkin’s lymphoma that is linked to solvents released during fires; (2) the majority’s questioning of the experts’ opinions is precisely the type of issue that goes to the weight of those opinions, not to their admissibility; and (3) the majority’s foray into interpretation of the Baris study is misleading and contrary to how the experts interpret that study.
¶ 83. The Baris study itself noted that “[s]mall numbers of observed deaths in the subcategories of the . . . cumulative runs analyses resulted in imprecise risk estimates.” Dr. Guidotti noted that “Baris is very clear . . . that they don’t consider that the runs analysis was particularly useful.” Dr. Lockey, working with Dr. LeMasters, takes the same position and lists numerous possible alternative explanations, including “gross misclassification,” “a chance finding,” and “a healthy survivor effect.”24 At a minimum, these observations raise issues of disputed fact.
¶ 84. Dr. Guidotti and Dr. Lockey unsurprisingly found these alternative explanations more convincing than insurer’s counter-intuitive claim (adopted by the majority today) that increased exposure to fires can decrease the likelihood of getting cancer. It is undisputed that firefighting exposes firefighters to known carcinogens: most of the studies presented to the trial court below state that as a given. For instance, the first sentence of the Baris study notes that “[f]irefighters are exposed under uncontrolled conditions to a wide variety of toxic chemicals including known and suspected carcinogens, such as benzene and formaldehyde in wood smoke, polycyclic aromatic hydrocarbons (PAHs) in soot and tars, arsenic in wood preservatives, asbestos in building insulation, diesel engine exhaust, and dioxins.” A carcinogen is defined as “a substance or agent producing or inciting cancer.” Webster’s New Collegiate Dictionary 165 (1981). Thus, it is difficult to understand why the majority puts any stock in the claim that increased exposure to carcinogens decreases one’s chance of cancer — a claim that is inherently self-contradictory, is called into question by the Baris study itself, and is resoundingly rejected by all three of claimant’s experts below.
¶ 85. The majority has to mention this strange finding from the Baris study because there is no other way to affirm the trial court’s decision. Dr. LeMasters, Dr. Lockey, and Dr. Guidotti all put much more stock in the Baris study’s finding that firefighters who are employed for more than twenty years are at a greater risk than other firefighters for contracting non-Hodgkin’s lymphoma. If those three experts are correct — or, rather, if a jury could conclude that they are correct — that increased exposure to fires leads to increased risk of non-Hodgkin’s lymphoma, claimant can argue that the generalized studies showing an association among firefighters underestimate the risk that he personally experienced. Then, even studies showing a relative risk of less than 2.0 help claimant make out a prima facie case that his forty years as a firefighter made it more likely than not that firefighting caused his disease.
¶ 86. The trial court recognized that “[t]he court in Daubert II even acknowledged that a study showing a ‘relative risk of less than two’ could be admissible, if ‘combined with other evidence to show it is more likely than not that the accused cause is responsible for a particular plaintiff’s injury.’ ” (quoting Daubert II, 43 F.3d at 1321 n.16). Here, the studies are combined with (among other things) the fact that it is more likely than not that claimant’s forty years as a firefighter put him at greater risk for contracting cancer than other firefighters. At the very least, claimant’s experts are entitled to an opportunity to make that argument to the jury, and it was an abuse of discretion for the trial court to rule summarily against claimant.
¶ 87. These facts could easily lead a reasonable jury to conclude that because claimant fought fires for forty years, he was exposed to more carcinogens — and was at greater risk for contracting non-Hodgkin’s lymphoma — than the average firefighter discussed in the epidemiological studies. Again, this comes back to this Court’s well-reasoned statement that “[t]he admitted evidence does not alone have to meet the proponent’s burden of proof on a particular issue.” USGen, 2004 VT 90, ¶ 19. Here, the epidemiological studies can be combined with what was known about this particular claimant to make out a prima facie case that firefighting caused claimant’s cancer. Thus, the trial court abused its discretion when it excluded the proposed expert testimony.
III.
¶ 88. Although it is my view that adopting the 2.0 standard was a clear error of law here, even if that standard were acceptable the trial court abused its discretion by failing to provide any explanation as to why it excluded evidence based upon the Aronson, Figgs, and Sama studies — all three of which exceeded the 2.0 standard. Granted, the Aronson study could properly be excluded because its 2.04 relative risk finding was not statistically significant, as it had a confidence interval that included the number 1.0. But the trial court never explains that as a reason for excluding the Aronson study. More importantly, the Figgs and Sama studies could not be excluded as statistically insignificant, because both of these studies had confidence intervals entirely above 1.0. The Figgs study found a relative risk of 5.6, and the Sama study found a relative risk of 3.27. Both of these studies were statistically significant and met the trial court’s strict 2.0 standard. Therefore, the court’s failure to explain why these studies were not themselves sufficient support for the opinion evidence constitutes an abuse of discretion that requires reversal. Joint Eastern, 52 F.3d at 1134 (reversing a trial court because it “did not specify its basis for disregarding” studies that met the court’s standard).
¶ 89. In this situation, the majority is wrong to conclude that there is “too great an analytical gap between the data and the opinion proffered.” Joiner, 522 U.S. at 146. Here there is no gap at all. The Figgs and Sama studies meet even the trial court’s strict 2.0 standard. Indeed, Joiner can be distinguished when the plaintiff has “several statistically significant epidemiological studies that . . . demonstrate[d] an association” between the injury and its alleged cause. Giles v. Wyeth, Inc., 500 F. Supp. 2d 1048, 1061 (S.D. Ill. 2007). Here, claimant has presented two studies that are not only statistically significant, but that even meet the stringent 2.0 standard.
¶ 90. The trial court itself recognized that “two [of the studies] show a relative risk greater than 2.0 — Figgs and Sama.” But the trial court appears to require some unspecified percentage (the majority? all?) of surveyed epidemiological studies to meet the 2.0 standard before the jury can even hear about any of the studies. Even Daubert II’s hard-line adoption of the 2.0 standard noted that in that case “[n]one of plaintiffs’ epidemiological experts claims that ingestion of Bendectin during pregnancy more than doubles the risk of birth defects.” 43 F.3d at 1320-21. By contrast, here there are two statistically significant studies that show a doubling of the risk. Yet the trial court never explains why that is not enough to send this issue to the jury.
¶ 91. Although the trial court found that the epidemiological studies “reflect widely varying degrees of relative risk,” that is not a reason to exclude all of the studies. Just because the studies had different results does not mean that they are all wrong, and claimant should be allowed to argue to the jury why the Figgs and Sama studies are the studies that arrived at the correct relative risk for claimant. That is particularly true here, where claimant’s experts found that although the relative risks were different, they for the most part all pointed in the same direction. As Dr. Lockey stated, there was “consistency across the medical literature based on epidemiology studies of, in fact, a cause-effect relationship between this profession and the occurrence of non-Hodgkin’s lymphoma.”
¶ 92. The trial court’s unexplained dismissal of the Figgs and Sama studies is also problematic because it implies that the trial court improperly weighed these studies against other studies. Rule 702 “is not intended to authorize a trial court to exclude an expert’s testimony on the ground that the court believes one version of the facts and not the other.” Advisory Committee Notes, 2000 Amendments, F.R.E. 702. Numerous courts have come to the same conclusion. See, e.g., Heller v. Shaw Indus., Inc., 167 F.3d 146, 160 (3d Cir. 1999) (holding that expert testimony cannot be excluded simply because the expert uses one test rather than another, when both tests are accepted in the field and reach reliable results); Ruiz-Troche, 161 F.3d at 85 (“Daubert neither requires nor empowers trial courts to determine which of several competing scientific theories has the best provenance.”); Neurontin, 612 F. Supp. 2d at 150 (“That two key experts . . . vigorously disagree on the interpretation of the existing literature makes clear that Plaintiffs’ theory falls squarely within the range where experts might reasonably differ and is thus proper fodder for a jury.” (quotation omitted)).
IV.
¶ 93. In the final part of the trial court’s opinion, it addressed the methodology of claimant’s experts in producing a meta-analysis, and the court concluded that this methodology was not sound. Because the trial court has yet to engage in a proper analysis of this issue, we should remand this issue to the trial court to determine whether the meta-analysis meets the requirements of Daubert.
¶ 94. In the final two pages of the opinion below, the trial court openly disclosed the following instances of confusion regarding the proposed expert testimony: “it is unclear from Plaintiff’s brief how [Dr. Guidotti] reached his opinion”; “[w]e do not know what scientific method he used”; “[t]his [study] apparently includes other types of cancers”; “Lockey and Masters apparently used a ‘weight-of-the-evidence’ approach”; “[it] is unclear whether Lockey and LeMasters’ opinions in this case are based upon the meta-analysis.” (Emphases added.) Finally, the trial court concluded that although the meta-analysis conducted by Dr. Lockey and Dr. LeMasters “may be” a reliable scientific method, “we do not know which studies of which cancers were included in the meta-analysis,” and “we have no way of knowing whether it is based upon sufficient facts or data.” (Emphases added.)
¶ 95. But the trial court did have a way of knowing whether these expert studies were relevant and reliable: holding a Daubert hearing. The trial court never held a Daubert hearing.25
¶ 96. At the end of claimant’s response to insurer’s motion for summary judgment, claimant specifically “request[ed] a hearing pursuant to [Vermont Rule of Evidence] 104” if the trial court found one to be necessary. Although the trial court has a great deal of discretion in determining whether a hearing is necessary, here the court’s own opinion recognizes its confusion on numerous critical issues, and it was therefore an abuse of discretion for the trial court to exclude the experts’ testimony without holding a Daubert hearing or at least engaging in further analysis of the submitted materials.
¶ 97. Courts have noted that a trial court cannot exclude expert testimony without considering all of the data that the experts put forward in support of their conclusions:
Although the district court properly may exclude expert testimony if the court concludes too great an analytical gap exists between the existing data and the expert’s conclusion, here the gap was of the district court’s making. The court did not consider all of the data relied upon by Dr. Spindler, namely, studies by the defendant and others finding that Zyderm can induce autoimmune reactions. Consequently, the court abused its discretion in concluding that Dr. Spindler’s testimony failed to meet Daubert’s scientific knowledge requirement.
Kennedy, 161 F.3d at 1230; accord Jahn v. Equine Servs., PSC, 233 F.3d 382, 393 (6th Cir. 2000) (noting that although a trial court is not obligated to always hold a Daubert hearing, a “court should not make a Daubert ruling prematurely, but should only do so when the record is complete enough to measure the proffered testimony against the proper standards of reliability and relevance”); Paoli, 35 F.3d at 739 (“Given the ‘liberal thrust’ of the federal rules, it is particularly important that the side trying to defend the admissibility of evidence be given an adequate chance to do so.” (citing Daubert, 509 U.S. at 588)). Thus, in Padillas v. Stork-Gamco, Inc., 186 F.3d 412 (3d Cir. 1999), the court noted that although the decision to hold a Daubert hearing “rests in the sound discretion of the [trial] court,” an abuse of discretion occurs when the trial court finds that the expert’s opinion is “insufficiently explained,” yet fails to hold a Daubert hearing to “giv[e] plaintiff an opportunity to respond to the court’s concerns.” Id. at 418.
¶ 98. Similarly, in USGen, we upheld the exclusion of evidence in part because the trial judge “heard from all three experts” before excluding evidence, 2004 VT 90, ¶ 1, and “rigorously reviewed all three experts’ testimony, made detailed and extensive findings based on that review, and explained why he credited particular testimony above other testimony,” id. ¶ 44. By contrast, here the trial court never heard any direct testimony from claimant’s experts, and the court made numerous clear errors (such as stating that Dr. Guidotti “relie[d] solely upon epidemiology”) that suggest that the court failed to engage in even a cursory review — let alone a rigorous one — of the materials that claimant submitted to the trial court. In this type of situation, as in Kennedy, it is fair to say that “[t]he court did not consider all of the data relied upon by” the experts, and any perceived analytical gap is “of the . . . court’s making.” 161 F.3d at 1230. This issue should therefore be remanded to the trial court for further analysis under Daubert.26
¶ 99. In summary, the trial court abused its discretion in numerous ways by summarily excluding all of claimant’s evidence and granting insurer’s motion for summary judgment. Although some of claimant’s studies can be excluded because they are statistically insignificant, and although a remand is needed to determine whether the meta-analysis is inadmissible, this does not justify excluding all of the proposed expert testimony.27 The trial court should have admitted the two medical doctors’ expert testimony, which found specific causation based on four statistically significant studies (Burnett, Ma, Figgs, and Sama) and specific information about claimant. Two of those studies (Figgs and Sama) meet even the strict 2.0 relative risk standard and therefore directly support the experts’ conclusions that it is more likely than not that claimant’s injuries were caused by firefighting. The other two studies (Burnett and Ma) can be combined with specific information about claimant to bridge any “analytical gap between the data and the opinion proffered.” Joiner, 522 U.S. at 146. The trial court should have also admitted testimony from claimant’s epidemiologist to help explain the underlying studies and how firefighting can cause non-Hodgkin’s lymphoma.
¶ 100. In the end, this case is indistinguishable from Daewoo, where this Court found an abuse of discretion when a trial court “exclud[ed] expert testimony that met the standards articulated in Daubert and adopted by this Court.” Daewoo, 2008 VT 14, ¶ 16. Regardless of whether the conclusions of claimant’s experts are ultimately persuasive — an issue that is not before us today — “[t]he trial court should have allowed the adversarial process to draw out any deficiencies in the expert testimony, rather than usurping the jury’s function.” Id.
¶ 101. For these reasons, I would reverse and remand to the trial court to apply the proper legal standard for the admission of evidence. I therefore dissent.
¶ 102. I am authorized to state that Justice Johnson joins this dissent.
As noted by the majority, ante, ¶ 17 n.5, these studies and others are cited and collected within G. LeMasters et al., Cancer Risk Among Firefighters: A Review and Meta-analysis of 32 Studies, 48 J. Occup. & Envtl. Med. 1189 (2006), available at http://www.iaff.org/HS/PDF/Cancer%20Risk%20Among%20Firefighters%20%20UC%20Study.pdf
Dr. Lockey, for instance, is an occupational pulmonary physician and has “obtained thousands of occupational histories” in his practice and research. He is especially well qualified to combine his special knowledge of epidemiological patterns with a review of claimant’s particular occupational and medical history to form an opinion on whether firefighting caused claimant’s disease.
The Vermont Legislature recently stated that when a firefighter dies from certain cancers — including lymphoma, which is the type of cancer that killed claimant — “the firefighter shall be presumed to have suffered the cancer as a result of exposure to conditions in the line of duty.” 2007, No. 42, § 2 (emphasis added). In adopting this presumption, the Legislature noted that around “28 states and the provinces of Canada have adopted legislation creating a presumption that certain cancers suffered by eligible firefighters are caused by exposure during their employment as firefighters.” Id. § 1(4). Although here claimant died before the passage of this legislative presumption, it is notable that many others agree with the views of claimant’s experts. Cf. Ambrosini v. Labarraque, 101 F.3d 129, 139 (D.C. Cir. 1996) (recognizing “additional indicia of reliability [that] support the admissibility of [the expert’s] testimony”).
Indeed, the court even excluded studies that exceeded the 2.0 standard.
Thus, the majority’s decision today does not preclude future trial courts from admitting evidence based on a more lenient standard than the 2.0 standard used by the trial court below.
"When the Brock court adopted its standard (requiring studies to have a confidence interval that does not include 1.0), the court noted that it viewed its decision as “encouraging district judges faced with medical and epidemiologic proof in subsequent toxic tort cases to be especially vigilant in scrutinizing the basis, reasoning, and statistical significance of studies presented by both sides.” Id. at 315, as modified on reh’g, 884 F.2d at 167. In the case before the Court today, the majority affirms the trial court’s adoption of a standard that is much more stringent than what was adopted by the Brock court. Given that adopting the Brock standard is a way of being “especially vigilant in scrutinizing” expert evidence, id., it is clear that here the trial court’s much more stringent standard goes too far, particularly in light of this Court’s holdings that we adopted Daubert to allow a more liberal standard for admitting evidence. See State v. Tester, 2009 VT 3, ¶ 18, 185 Vt. 241, 968 A.2d 895 (“Daubert intended a more liberal approach to the admission of expert evidence.”); Daewoo, 2008 VT 14, ¶ 9 (“We adopted the Daubert decision precisely because it comported with the ‘liberal thrust’ of the *264rules of evidence and broadened the types of expert opinion evidence that could be considered by the jury at trial.”).
Turpin, like Brock, was also cited with approval by the United States Supreme Court in Daubert. See Daubert, 509 U.S. at 596.
Indeed, several courts and commentators have stated that even the Brock standard requiring statistical significance is too stringent a test for admissibility. See, e.g., Joint Eastern, 52 F.3d at 1134; Berry v. CSX Transp., Inc., 709 So. 2d 552, 570 (Fla. Dist. Ct. App. 1998) (“The use of ‘statistical significance’ to reject an epidemiological study has been roundly criticized by the experts in the field.”); King v. Burlington N. Santa Fe Ry., 762 N.W.2d 24, 47 (Neb. 2009); Clapp & Ozonoff, supra, at 205 (“[P]rominent epidemiologists eschew ‘statistical significance,’ believing that it is not a sine qua non of good science and maintaining that it is neither necessary nor appropriate as a requirement for drawing inferences from epidemiologic data.”). That said, the more common position on this issue is that the Brock court was correct in stating that it is within a trial court’s discretion to exclude studies that do not show a statistically significant result. See Green, supra, at 359 n.73 (“A number of courts have followed the Brock decision or have indicated strong support for significance testing as a screening device.”).
Several courts have held that epidemiological studies that meet the 2.0 standard are in themselves “sufficient to support an inference that an agent caused the particular plaintiff’s disease.” King, 762 N.W.2d at 46. Indeed, even defendant’s own brief cites a court’s holding that “[w]ith proper scientific interpretation, these correlations [found in epidemiological studies] provide an inference of causation.” Smith v. Ortho Pharm. Corp., 770 F. Supp. 1561, 1573 (N.D. Ga. 1991).
Although neither of the doctors ever had a chance to personally examine claimant, this Court has previously stated that an expert need not have “firsthand” knowledge of something to make “conclusions [that] were not speculative but instead were ‘based on what is known.’ ” JAM Golf, 2008 VT 110, ¶ 10 (quoting Daubert, 509 U.S. at 590); cf. also Daewoo, 2008 VT 14, ¶ 14 (“[T]he fact that [the expert] did not himself visit the fire scene in conducting his investigation did not render insufficient the factual underpinnings of his opinion.”). Because experts need only focus on what is known, it is also no defense that claimant failed to produce details regarding specific exposures to specific known carcinogens while firefighting. As claimant noted during oral argument, claimant cannot be required to show a list of all of the carcinogens released during each fire he attended — information that even defendant concedes is simply not available. Further, to the extent that certain information is available, but experts fail to make use of it — for instance, if the doctors failed to engage more fully in the details of claimant’s medical history — this presents “an issue subject to cross-examination, but does not render [their] opinion[s] inadmissible under Rule 702.” Daewoo, 2008 VT 14, ¶ 12.
The Baris study itself recognized that a “healthy survivor effect” can underestimate the true risk of exposure: “if there is a survivor effect in which the healthiest workers continued to be employed for a long term, using duration [of employment] as a proxy for exposure may mask a true relationship over the range of duration of employment.” For the same reason, a healthy survivor effect could underestimate the true risk of exposure when only the healthiest workers (those who are naturally less susceptible to contracting non-Hodgkin’s lymphoma) are able to make large numbers of lifetime runs.
Although the majority claims that “[n]o party raises this issue on appeal,” ante, ¶ 25 n.9, the ultimate question on appeal is whether the trial court abused its discretion in dismissing evidence. This Court has previously noted that how “rigorously” a trial court reviewed proposed expert testimony is relevant to determining whether the trial court abused its discretion in dismissing that testimony. USGen, 2004 VT 90, ¶ 44.
This does not necessarily mean that the trial court would need to hold a Daubert hearing on remand. Further analysis of the documents that have already been submitted could make it clear that the meta-analysis is either admissible or inadmissible. For instance, although the trial court noted that the meta-analysis is suspicious because it looks at studies involving related cancers in addition to non-Hodgkin’s lymphoma, the court ignored the experts’ explanation that such studies are in fact more reliable than narrower studies — an explanation that could tip the balance in favor of admissibility. Nevertheless, this should be left for the trial court to decide in the first instance after a more thorough analysis.
For instance, as claimant told the trial court at oral argument, the meta-analysis “is not the only piece of the puzzle,” but is rather “just one study” among many.