Katz, Abosch, etc., P.A. v. Parkway Neuroscience

Court: Court of Appeals of Maryland
Date filed: 2023-08-30
Katz, Abosch, Windesheim, Gershman & Freedman, P.A., et al. v. Parkway Neuroscience
and Spine Institute, LLC, No. 30, September Term, 2022. Opinion by Biran, J.

EXPERT WITNESSES – ADMISSIBILITY OF EXPERT TESTIMONY –
MARYLAND RULE 5-702 – LIMITED REMAND – Respondent filed a lawsuit against
Petitioners alleging accountant malpractice and related claims. In the course of discovery,
Respondent designated an expert to provide an opinion concerning Respondent’s lost
profits resulting from Petitioners’ alleged torts. Petitioners moved to exclude the testimony
of the proffered expert under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579
(1993), and Rochkind v. Stevenson, 471 Md. 1 (2020). After conducting a Daubert-
Rochkind hearing, the circuit court granted Petitioners’ motion to exclude the proffered
expert testimony. The Supreme Court of Maryland held that much of the trial court’s
consideration of the Daubert-Rochkind factors was appropriate, including the trial court’s
assessment of how the expert’s choice of data, assumptions, and other inputs affected the
reliability of her methodology. However, the trial court erred when it treated the
expert’s “normalizing adjustments,” which recategorized certain expenses from one year to
another, as reflecting on the reliability of the expert’s methodology. The Court ordered a
limited remand to the circuit court under Maryland Rule 8-604(d)(1) so that the trial court
may decide to admit or exclude the expert’s testimony without consideration of the
normalizing adjustments as reflecting on the reliability of the expert’s methodology.
Circuit Court for Howard County
Case No.: C-13-CV-18-000181
Argued: May 4, 2023
                                                                        IN THE SUPREME COURT

                                                                             OF MARYLAND*

                                                                                     No. 30

                                                                           September Term, 2022


                                                                   KATZ, ABOSCH, WINDESHEIM,
                                                                GERSHMAN & FREEDMAN, P.A., ET AL.

                                                                                       v.

                                                                       PARKWAY NEUROSCIENCE
                                                                       AND SPINE INSTITUTE, LLC


                                                                       Fader, C.J.
                                                                       Watts
                                                                       Hotten
                                                                       Booth
                                                                       Biran
                                                                       Gould
                                                                       Eaves,

                                                                                     JJ.


                                                                             Opinion by Biran, J.
                                                                              Booth, J., concurs.
                                                                Gould, J., concurs in part and dissents in part.
                                                                               Watts, J., dissents.

Pursuant to the Maryland Uniform Electronic Legal Materials Act (§§ 10-1601
et seq. of the State Government Article) this document is authentic.

2023-08-30 12:44-04:00
Gregory Hilton, Clerk

                                                                           Filed: August 30, 2023



* At the November 8, 2022 general election, the voters of Maryland ratified a constitutional
amendment changing the name of the Court of Appeals of Maryland to the Supreme Court
of Maryland. The name change took effect on December 14, 2022.
       When this Court adopted the Daubert1 expert testimony admissibility standard in

Rochkind v. Stevenson, 471 Md. 1 (2020), we embraced a regime that prizes the reliability

of an expert’s methodology over its general acceptance. We empowered trial judges to

protect juries from junk science while also broadening the range of possibly admissible

opinions beyond just those dominant among practitioners. We asked judges to engage with

the science without playing amateur scientist, and we promised the deference appropriate

to courts administering a flexible approach to analyzing the admissibility of expert

testimony. This case requires us to reflect on that flexibility and deference.

       Parkway Neuroscience and Spine Institute, LLC (“PNSI”, the Respondent here) is

a medical and surgical practice that began to expand in 2011 and needed accounting help.

In 2013, PNSI retained accounting firm Katz, Abosch, Windesheim, Gershman &

Freedman, P.A. and, specifically, Mark Rapson, who specialized in medical practice

accounting (we shall refer to the firm and Mr. Rapson, the Petitioners here, collectively as

“KatzAbosch”). Within a few years after retaining KatzAbosch, PNSI began to

disintegrate; members of the practice began leaving in 2015, and by the middle of 2016,

only two members remained of the nine who had been in place at the end of 2014. PNSI

terminated KatzAbosch’s services in 2015.

       PNSI alleges that malpractice by KatzAbosch caused the mass exodus of its

members. In 2018, PNSI sued KatzAbosch in the Circuit Court for Howard County to

recover damages for lost profits. To establish those damages, PNSI designated certified



       1
           See Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
public accountant Meghan Cardell as an expert witness. She used the widely accepted

“before-and-after” method to calculate PNSI’s lost profits, choosing 2015 as a “baseline”

period against which she would compare the actual profits in subsequent years through

2019, and adding up the differences to arrive at an estimate of what profits PNSI missed

out on due to KatzAbosch’s alleged harmful conduct. A few weeks before the June 2021

Daubert-Rochkind hearing, Ms. Cardell issued updated calculations reflecting some

“normalizing adjustments” she had made; although PNSI’s accounting records had not

changed since her initial analysis, Ms. Cardell reviewed PNSI’s financial information again

and noticed some payments that had been categorized in the wrong years. She reallocated

those payments to the years she believed to be correct and updated her calculations.

       Those two issues – Ms. Cardell’s choice of 2015 as the “before” in her “before-and-

after” analysis and her June 2021 normalizing updates – became the trial court’s central

concerns in the Daubert-Rochkind hearing. The trial court noted speculative and insufficiently

substantiated judgment calls that Ms. Cardell had made in arriving at the 2015 benchmark.

Among other things, the trial court wondered why Ms. Cardell had chosen 2015 (a

profitable year) rather than, say, an average that included the several (unprofitable) years

prior to the alleged harm event. The trial court also was concerned about Ms. Cardell’s

inability to articulate industry standards relating to the concept of “economic impact” and

to the proper treatment of owner draws.

       In addition to these points, the trial court commented several times on Ms.

Cardell’s June 2021 normalizing adjustments, which negatively affected its view of Ms.

Cardell’s reliability. Essentially, the court did not understand why it had taken Ms. Cardell


so long to notice the errors. The court discussed these adjustments when considering the

Daubert factors relating to a methodology’s error rate and to whether the field of expertise

claimed by the expert is known to reach reliable results for the type of opinion the expert

would give.

       Based on its application of the Daubert-Rochkind factors, the trial court excluded

Ms. Cardell’s testimony, leading to summary judgment in favor of KatzAbosch because

PNSI could not prove damages.

       PNSI appealed, and the Appellate Court of Maryland2 held that the circuit court

abused its discretion in finding Ms. Cardell’s methodology unreliable. As to the 2015

baseline choice, the Appellate Court agreed with PNSI that the choice was a question of

data (and thus a factual question for the jury) rather than of methodology. With respect to

the normalizing adjustments, the Appellate Court said that Daubert’s “error rate” factor

must be understood as the rate of unknown errors in the methodology employed, not as an

“error correction rate,” or else courts would create incentives against experts disclosing

and explaining errors they made. The intermediate appellate court reversed the trial court’s

exclusion of Ms. Cardell’s expert testimony and remanded for further proceedings
consistent with its opinion.

KatzAbosch petitioned this Court for further review.

       As we explain more fully below, the choice or calculation of the inputs to a

methodology can be a part of the methodology itself, and we reject an unduly rigid dividing



       2
         At the November 8, 2022 general election, the voters of Maryland ratified a
constitutional amendment changing the name of the Court of Special Appeals of Maryland
to the Appellate Court of Maryland. The name change took effect on December 14, 2022.

line between “data” and “methodology” that binds courts to admit methodologically

questionable analyses cloaked as data. To the extent the trial court considered how Ms.

Cardell’s choice of data, assumptions, and other inputs affected the reliability of her

methodology, the trial court’s Daubert-Rochkind analysis was proper. However, the trial

court erred in its consideration of the normalizing adjustments as reflecting on the

reliability of Ms. Cardell’s methodology, as opposed to the credibility (or reliability) of

Ms. Cardell herself. After a careful review of the record, we determine that the fair and

prudent course of action at this point is to remand the case to the circuit court to decide

whether to admit or exclude Ms. Cardell’s testimony without consideration of the June

2021 normalizing adjustments as reflecting on the reliability of Ms. Cardell’s methodology.

                                             I

                                       Background

   A. Facts

       During the period relevant to this case, PNSI was a Western Maryland and

Pennsylvania mixed medical practice that diagnosed, managed, and treated disorders of the

brain, spine, and peripheral nervous system. It employed neurosurgeons, interventional and

non-interventional pain physicians, neurologists, physicians’ assistants, and support staff.

The practice had operated since 1998. Beginning in 2011, PNSI expanded, hiring more

physicians and support staff. These efforts caused PNSI to spend more on salaries and

build-out expenses without offsetting revenue. At the end of 2014, the practice had nine

member-owners, all physicians.




       1. The Engagement of KatzAbosch

       None of these members, however, were accounting or finance experts. In early

2013, PNSI’s long-time accountant advised PNSI that the practice had outgrown his firm’s

services and recommended that the practice retain a new accounting firm to help guide

PNSI through its growth and expansion process. So PNSI began searching for a firm that

specialized in medical practice accounting and finance. In October 2013, PNSI retained

KatzAbosch to provide tax, accounting, and financial advice and services, as well as to

provide “expert business and financial guidance and direction to help PNSI continue to

grow its practice.” The engagement included analyzing PNSI’s general ledger and financial

statements, making recommendations concerning PNSI’s financial affairs, and designing

and administering a new member compensation model. Mr. Rapson (chair of KatzAbosch’s

Medical Services Group) was responsible for the account.

       PNSI’s 2012 Operating Agreement provided the compensation terms for its

member-physicians. First, each member was to maintain a capital account on the books of

the practice, with a minimum balance that PNSI’s Board of Managers (the “Board”) would

determine annually. Second, members were to be paid a monthly “draw” (determined by

the Board) at the start of each year – in essence a form of salary. Third, revenue received

by the practice for hospital and trauma calls would be distributed to the member who had

taken the call, with PNSI functioning as a pass-through entity. These pass-through

payments, along with the monthly draws, comprised “guaranteed payments” to the

members. Finally, each quarter, the Board would distribute any excess cash flow

(“distributions”) in proportion to each member’s ownership stake.


       In early 2014, KatzAbosch designed and proposed a new compensation model, and

PNSI adopted it. According to PNSI, KatzAbosch’s model did not reserve funds for known

build-out-related expenses concerning one of PNSI’s locations that would be coming due

later that year as well as other significant expenses. PNSI alleges that, despite these

expenses coming due, KatzAbosch directed PNSI to make almost $1 million in quarterly

distributions to members between July and October 2014.

       2. Termination of KatzAbosch and the Departure of Most of PNSI’s Members

       PNSI alleges that, as a result of KatzAbosch’s erroneous advice, PNSI almost ran

out of money by the end of 2014. According to PNSI, the practice had to use over $660,000

from its line of credit in the fourth quarter of 2014 alone. In January 2015, KatzAbosch

disclosed to PNSI its precarious financial situation. While PNSI had had almost $1 million

in cash on hand at the start of 2014, it had less than $40,000 by the start of 2015. PNSI

terminated KatzAbosch’s services in the spring of 2015.

       The practice then began to come apart. PNSI alleges that members – who had

personally guaranteed loans the practice had taken – were increasingly distressed about the

financial condition of the practice and their personal liability. Starting in mid-2015,

members began to withdraw from the practice, taking with them patients and the associated

revenue streams. By mid-2016, only two members – Dr. Brian Holmes and Dr. Neil

O’Malley – remained of the nine who had been in place at the end of 2014. KatzAbosch

claims that Dr. Holmes and Dr. O’Malley received more compensation from the practice

after they were the only two members remaining (thereby depressing estimates of PNSI’s




profits post-exodus), although PNSI explains this as the result of those two doctors taking

more trauma calls and thus receiving more in guaranteed payments.

   B. The Lawsuit

       In 2018, PNSI sued KatzAbosch in the Circuit Court for Howard County, alleging

claims for accountant malpractice, negligent misrepresentation, breach of contract, and

unjust enrichment. PNSI initially sought damages for lost profits, settlement amounts paid

to a departing member, written-off amounts owed by two departing members, fees paid to

KatzAbosch, and prejudgment interest, all totaling $9,456,035. The bulk of the claimed

damages was the alleged lost profits.

       1. Meghan Cardell, PNSI’s Damages Expert

       In July 2019, PNSI designated Meghan Cardell as an expert witness expected to

testify that “PNSI suffered and will continue to suffer significant financial losses resulting

from the mass exodus of members and staff, loss of patients, replacement of long-time

established physicians with newer-practicing physicians, and litigation with withdrawing

members, as identified in the Calculation of Damages[.]” Ms. Cardell is a certified public

accountant and a certified fraud examiner. At the time of her designation by PNSI as an

expert in this case, Ms. Cardell was Director for Disputes and Investigations at the

Washington, D.C. accounting firm Alvarez & Marsal, where she had worked since 2014.

Before that, she was a senior associate in forensics at Veris Consulting from 2011 to 2014.

       Ms. Cardell used the “before-and-after” methodology of calculating PNSI’s lost

profits. Under this approach, the expert compares profits before and after a damaging event,




the former being the “benchmark” or “base” period and the latter being the “loss” period.

Ms. Cardell initially calculated PNSI’s damages in May 2019, as follows:

                        Damages (less prejudgment interest)
                               (Calculated in May 2019)
 Total lost profits                          $8,520,744
 Dr. DeMarco Settlement                      $89,421
 Dr. Sullivan Amount Owed                    $84,836
 KatzAbosch Fees                             $182,010
 Total:                                      $8,877,012[3]
                                Prejudgment interest
   Lost profits                              $503,999
   DeMarco & Sullivan                        $20,185
   KatzAbosch Fees                           $54,839
   Total prejudgment interest:               $579,023
 Total Damages                               $9,456,035
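The dollar-level discrepancy flagged in footnote 3 is simple to verify; the following sketch uses the component figures from the table above:

```python
# Check footnote 3's observation: the four component amounts in the May 2019
# table sum to one dollar less than the stated total of $8,877,012.
components = [8_520_744, 89_421, 84_836, 182_010]  # lost profits, DeMarco,
                                                   # Sullivan, KatzAbosch fees
stated_total = 8_877_012

difference = stated_total - sum(components)
print(difference)  # 1 dollar, as footnote 3 notes
```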

          The lost profits total included figures for 2016 through 2025. Ms. Cardell selected

2015 as the base year. She then used actual figures to calculate the lost profits for 2016,

2017, and 2018; she used the trend line from those three years to project lost profits through

2025, discounting the figures by 20% in 2019-22 and 50% in 2023-25 to reflect a trend of

declining annual lost profits as the practice would gain doctors, patients, and income in the

future.
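The before-and-after arithmetic described above can be sketched briefly. The $321,751 benchmark is the 2015 profit reported in footnote 4; the loss-period profits and the $100,000 projected annual loss below are hypothetical placeholders, not Ms. Cardell's figures:

```python
# Minimal sketch of the "before-and-after" lost-profits method.
BENCHMARK = 321_751  # 2015 base-year profit (per footnote 4)

actual_profits = {2016: 100_000, 2017: 150_000, 2018: 200_000}  # hypothetical

# Lost profit for each actual year = benchmark profit minus actual profit.
lost = {year: BENCHMARK - profit for year, profit in actual_profits.items()}

def discount(projected_loss: int, year: int) -> float:
    """Apply the declining-loss discounts the opinion describes:
    20% for 2019-22 and 50% for 2023-25."""
    if 2019 <= year <= 2022:
        return projected_loss * 0.80
    if 2023 <= year <= 2025:
        return projected_loss * 0.50
    return float(projected_loss)

# Project a hypothetical $100,000 annual loss through 2025, then total.
projected = sum(discount(100_000, y) for y in range(2019, 2026))
total_lost_profits = sum(lost.values()) + projected
```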

          On July 26, 2019, KatzAbosch moved to strike PNSI’s claim and to exclude Ms.

Cardell’s testimony. At a hearing in June 2020, the trial court (the Honorable Richard S.

Bernhardt, specially assigned) denied KatzAbosch’s motion, but expressed concern about

admitting Ms. Cardell’s calculations at trial if she continued to project lost profits for 2020



          3
         This total amount is one dollar greater than the sum of the four amounts that
precede it. It is not clear whether this discrepancy was due to a rounding error or some
other issue.

through 2025 related to the 2015-16 departure of physicians, despite the changed landscape

created by the COVID-19 pandemic.

      On May 5, 2021, KatzAbosch renewed its motion to strike PNSI’s lost profit claims

and to exclude Ms. Cardell’s testimony under this Court’s recently adopted Daubert-

Rochkind standard for expert testimony, disputing the factual basis of Ms. Cardell’s

Calculation of Damages report.

      Ms. Cardell updated her Calculation of Damages on May 17, 2021. PNSI no longer

sought damages for 2020 or beyond, and Ms. Cardell’s new calculations omitted those

years. The updated calculations featured a revised damages calculation of $7,335,447:

              Damages (less prejudgment interest) – Updated May 17, 2021
   (dropping 2020 and onward; concerning only 2016-19 pre-pandemic; using finalized
                                  actual 2019 figures)
                                             May 2019      May 2021 (updated)
 Total lost profits                          $8,520,744    $5,789,521
 Dr. DeMarco Settlement                      $89,421       $89,421
 Dr. Sullivan Amount Owed                    $84,836       $84,836
 KatzAbosch Fees                             $182,010      $182,010
 Total:                                      $8,877,012    $6,145,789
                                 Prejudgment interest
   Lost profits                              $503,999      $1,078,968
   DeMarco & Sullivan                        $20,185       $37,630
   KatzAbosch Fees                           $54,839       $73,060
   Total prejudgment interest:               $579,023      $1,189,658
 Total Damages                               $9,456,035    $7,335,447




       On May 27, 2021, the trial court scheduled a full-day, in-person evidentiary hearing

on KatzAbosch’s renewed motion for June 30, 2021. On June 11, 2021, Ms. Cardell

updated her calculations again:

              Damages (less prejudgment interest) – Updated June 11, 2021
                                            May 2019      May 2021      June 2021
 Total lost profits                         $8,520,744    $5,789,521    $4,956,080
 Dr. DeMarco Settlement                     $89,421       $89,421       $89,421
 Dr. Sullivan Amount Owed                   $84,836       $84,836       $84,836
 KatzAbosch Fees                            $182,010      $182,010      $182,010
 Total:                                     $8,877,012    $6,145,789    $5,312,348
                                 Prejudgment interest
   Lost profits                             $503,999      $1,078,968    $890,837
   DeMarco & Sullivan                       $20,185       $37,630       $38,346
   KatzAbosch Fees                          $54,839       $73,060       $73,808
   Total prejudgment interest:              $579,023      $1,189,658    $1,002,991
 Total Damages                              $9,456,035    $7,335,447    $6,315,339

Notably, the June 2021 updates eliminated the loss from 2016 (meaning Ms. Cardell had

now found that year to be profitable) and showed lost profits only from 2017-19. These

changes reflected “normalizing adjustments” discussed in detail below.

       2. The Daubert-Rochkind Hearing

       The trial court conducted a Daubert-Rochkind hearing on June 30, 2021. The crux

of PNSI’s argument was that KatzAbosch’s concerns with Ms. Cardell dealt less with her

methodology and more with the assumptions she had made, and those assumptions went

to weight rather than admissibility. The court declined either to accept or to reject Ms.

Cardell’s qualifications as an expert, although it eventually said her experience (or lack

thereof) with “niche” medical practices bore somewhat on the reliability of her testimony,

independent of the question of her qualification as an expert witness.




              a. Issues Raised in the Hearing

                i.   2015 as the Base Year

       After describing why other methodologies to determine contractual damages and

lost business value (and, relatedly, lost future profits) were not applicable here, Ms. Cardell

described the before-and-after methodology that she chose to apply. She testified that this

method “in [her] experience is the most commonly used and most commonly accepted

methodology of measuring lost profits,” that she had used it in her career “many, many

times,” and that she had seen it used by other experts in and out of litigation settings just

as frequently. She oriented her analysis around the harm event of seven of nine doctors

withdrawing from the practice within a short time; the first doctor left in June 2015, a few

more at the end of 2015, and a few more by June 2016. So calendar year 2015 was her

benchmark “before” period, and the “after” period (or “loss” period) began in calendar year

2016 and ran through 2019. She described this as a “conservative proxy” for the practice’s

future earnings, had so many members not left.

       The business had been profitable in 2015, and PNSI had been investing to grow;

Ms. Cardell said “[t]he practice had sort of hit its stride in 2015.” She said medical

businesses are not subject to swings of consumer preference that might have destabilized

the results of those investments, and the medical specialty industry was projected to grow

because of aging Baby Boomers and the prevalence of chronic illness.

       The trial court articulated some concerns with Ms. Cardell’s methodology. First, the

court asked why she had chosen just one year (2015) rather than a benchmark of multiple

years before the harm event; Ms. Cardell said the benchmark could be an average or could


be one year, particularly if there were past years that would not be representative of what

future earnings would look like. Second, the court noted that 2015 had been the most

profitable year since 2010,4 and the court did not understand why 2015, rather than a slower

year, should become the “new normal,” as there might have been post-harm years that also

would have been a “squeeze.” Ms. Cardell believed 2014 was not entirely unusual but

would have been unrepresentative because 2015 was the year when PNSI had emerged

from the phase of significant expenditures (and reduced profits) in order to grow the

practice. Third, the court questioned how Ms. Cardell knew that PNSI had truly hit its stride

in 2015 and that there would not have been significant future problems caused by

overexpansion – personality conflicts between the new doctors, sub-par new facilities, etc.

She answered that the practice had been profitable in 2016 even as the exodus had begun,

showing that it could still be profitable despite some amount of dysfunction and confirming

that it would have continued to generate profits in the future without the full exodus.

               ii.   Other Data and Assumptions (Reimbursement Rates)

       Ms. Cardell assumed that, but for the harm event, PNSI’s profits in 2017, 2018, and

2019 would have at least equaled the 2015 profits. Ms. Cardell had not analyzed changes

in medical service reimbursement rates (which had been decreasing) during the relevant

years, although the market research she examined showed that medical industry revenues

were expected to grow because of expanded access to Medicare, Medicaid, and private

health insurance under the Affordable Care Act. The core of the trial court’s concern was


       4
       As measured by Ms. Cardell, PNSI had had a $22,000 profit in 2010, losses in
2011-14, and a $321,751 profit in 2015.

whether Ms. Cardell had looked “at all of the various revenue threads that come up with

the fabric of [PNSI’s] yearly income” or whether she had just been “looking at numbers

without understanding from further records or where those numbers came from[.]” Ms.

Cardell said that her analysis was based on revenues.

              iii.   Correction of Calculations

       Ms. Cardell described normalizing adjustments she had made to her calculations

earlier in that month relating to trauma and on-call payments. In 2016, PNSI had made

payments to its physicians based on trauma and on-call revenues the practice had received

in 2015. Ms. Cardell initially included those payments as expenses of the practice for 2016.

After Ms. Cardell discovered that the practice had received the associated revenues in 2015,

she removed that expense from 2016 and added it to 2015. By increasing adjusted expenses,

this reallocation cut PNSI’s adjusted income in base year 2015 in half. Ms. Cardell’s

$395,010 downward adjustment in expenses for 2016 increased adjusted income for that

year, making 2016 profitable (compared to her earlier May 2021 estimates for 2016, which

had shown a loss) and contributing to the elimination of any loss in profits in 2016.

       There were also trauma and on-call adjustments for 2017, 2018, and 2019. In those

years, Dr. Holmes and Dr. O’Malley earned income based on their trauma and on-call

services, and that money flowed to the practice for normal pass-through purposes, but they

chose not to pay themselves those funds; rather, they effectively loaned those amounts to

the practice in order to increase its cash on hand. In the past, those guaranteed payments

would have been recorded as expenses, so Ms. Cardell included them as such via the June

2021 adjustments. Her first updated calculations (issued May 2021) had included upward


trauma and on-call expense adjustments for years 2017 ($129,573), 2018 ($767,573), and

2019 ($767,573). Each of these upward adjustments had the effect of increasing adjusted

expenses and therefore reducing adjusted income, making the lost profits figures (and thus

PNSI’s alleged damages) larger for each year. Ms. Cardell’s second updated calculations

(issued June 2021) decreased the upward expense adjustments for 2017 (to $77,000) but

increased the upward expense adjustments for 2018 (to $842,400) and 2019 (to $1,233,617)

in comparison to her first updated calculations. For all three years, the post-update upward

adjustments still increased adjusted expenses, decreased adjusted income, and increased

lost profits (and damages).
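The mechanics of such a reallocation can be sketched briefly. The $395,010 figure is the 2016 adjustment described above; the pre-adjustment income amounts are hypothetical placeholders:

```python
# Sketch of how a normalizing adjustment moves an expense between years.
REALLOCATED_EXPENSE = 395_010  # trauma/on-call payments moved from 2016 to 2015

income = {2015: 790_020, 2016: -80_000}  # hypothetical adjusted income, pre-move

adjusted = {
    # Adding the expense to 2015 lowers base-year income...
    2015: income[2015] - REALLOCATED_EXPENSE,
    # ...while removing it from 2016 raises that year's income.
    2016: income[2016] + REALLOCATED_EXPENSE,
}

# Lost profit for 2016 = benchmark (2015) income minus 2016 income, floored at 0.
lost_before = max(income[2015] - income[2016], 0)
lost_after = max(adjusted[2015] - adjusted[2016], 0)
# One reallocation shrinks the 2016 lost-profits figure by twice the moved
# amount: once by lowering the benchmark, once by raising 2016's income.
```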

       Although the information that led to these adjustments had long been available to

her, Ms. Cardell explained that she looked again at the numbers ahead of the Daubert-

Rochkind hearing and identified a payment that looked like it belonged to a different year.

A PNSI accounting manager confirmed this intuition, and so Ms. Cardell made the

judgment that the changes relating to trauma and on-call payments/expenses were needed.

The trial court expressed concern that Ms. Cardell’s calculations might be liable to

additional such changes; Ms. Cardell testified that she did not believe any other

adjustments were necessary.

       KatzAbosch’s counsel attempted to characterize Ms. Cardell’s updates to her

calculations (from May 2021 to June 2021) as a “fifty percent error rate” because she had

made revisions between the first and second updates.




                iv.   Treatment of Member Draws

       The trial court noted two different ways that a business could treat member draws

when calculating profits. To illustrate the point, the court referred to a hypothetical limited

liability company that, after paying all its expenses other than potential draws to its

owner, has $100 in cash. On the one hand, the owner could withdraw the $100 as salary

and the business would show no profit that year. On the other hand, the business could be

considered to have $100 in profits regardless of whether the owner withdraws it from the

business that year. Ms. Cardell said that she believed the latter was the correct way to look

at it, but there was no industry standard on this issue one way or the other. The trial court

expressed skepticism that there was no industry standard, given that the classification of

draws (owner salary) as profits or expenses “seems like a pretty basic issue that [is] capable

of rearing its head in every case in which … an owner’s draw is possible.”
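The court's $100 hypothetical reduces to a one-line difference in how the draw is classified; a minimal sketch:

```python
# The court's hypothetical LLC: $100 in cash after all expenses except the
# owner's potential draw. The two treatments the court contrasted:
cash_before_draw = 100
owner_draw = 100  # the owner withdraws everything as salary

# Treatment 1: the draw is an expense, so the business shows no profit.
profit_draw_as_expense = cash_before_draw - owner_draw

# Treatment 2 (Ms. Cardell's view): the business has $100 in profit
# regardless of whether the owner withdraws it.
profit_draw_as_distribution = cash_before_draw

print(profit_draw_as_expense, profit_draw_as_distribution)  # 0 100
```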

                v.    Lack of Member-Specific Lost Profits Calculation

       Ms. Cardell was “unable to parse out” the financial effects of any particular

member’s departure from the practice. She could not recall whether she had looked at

specific collections for Dr. Holmes and Dr. O’Malley, her analysis having been based on

overall revenues for the practice. Defense counsel presented her with PNSI financial

records showing that those doctors’ post-exodus revenue collections declined from 2016

through 2019.

              b. The Trial Court’s Ruling

       PNSI acknowledged that “reasonable minds can differ” as to whether its profits in

2015 were the appropriate benchmark against which to measure the practice’s future profits.


However, PNSI argued that none of the Daubert-Rochkind factors militated toward

excluding Ms. Cardell. The court ruled from the bench, granting KatzAbosch’s motion to

exclude Ms. Cardell’s testimony, based on the Daubert-Rochkind standard.

                i.   The Court’s Overall Sense

       The trial court began its ruling by acknowledging the wide acceptance of the before-

and-after methodology for calculating lost profits. The court then highlighted its prime

concerns with Ms. Cardell’s testimony: speculation, ipse dixit “judgment calls,”

helpfulness to the jury, and information Ms. Cardell had failed to consider.

       First, the court found that Ms. Cardell’s selection of profitable 2015 as a benchmark

(rather than unprofitable 2011-14, marginally profitable 2010, or an average) was

speculation, especially given that she had no specialization in a niche practice like PNSI.

So, too, was Ms. Cardell’s analysis that, after years of unprofitability, 2015 marked the

critical turning point and that, in the court’s words, “the rocket had left the launch pad and

was going straight up.” The speculation finding was bolstered by the fact that Ms. Cardell

relied heavily on direct oral communications with PNSI and its employees rather than

information more concretely assessable by the jury.

       Second, the court pointed to unreliable and ipse dixit “judgment calls.” The court

discerned possible bias and some degree of unreliability in the judgment calls that Ms.

Cardell made without some clear authority as to (1) the June 2021 normalizing adjustments

(which the court said Ms. Cardell could have made as part of her original calculations);

(2) “whether owner draws reduce profits or not”; and (3) Ms. Cardell’s subjective




understanding of the term “economic impact.”5 These judgment calls took on special

importance because, the court said, either Ms. Cardell lacked awareness of industry

standards or no industry standards existed, heightening her reliance on unverified sources

of information and her own say-so.

       Third, the court questioned how helpful Ms. Cardell’s testimony would be to the

jury. Ms. Cardell chose not to disaggregate revenues according to each individual departing

physician, and instead she considered lost profits in an “all or nothing” manner, rendering

her opinion “only helpful if the jury accepts that each and every doctor of the seven … left

solely because … of the acts of [KatzAbosch].” Relatedly, the trial court stated that “[n]ot

every doctor, necessarily is going to make or contribute the same amount of money each

and every year. There was nothing presented in the Report to, that considered whether or

not the income, the revenue generating ability of the remaining doctors or the leaving

doctors was considered [sic]. It was just, let’s take 2015 and go from there. Let’s just use

the numbers as we get them without examining whether or not, you know, the doctors who

have income would have had that income and things of that nature. And again, there’s

nothing about her training to me that would qualify her to make those assumptions that all

these numbers would not be affected by passage of time and the doctor’s passage of time.”




       5
         Ms. Cardell testified that she considers “economic impact” in her work. When the
court asked her what “economic impact” means, Ms. Cardell began her response by saying,
“the way I think about it is…” This raised a red flag for the trial court: “[Y]ou start off
with, ‘the way I think of it’. What does your industry consider it to be?” Ms. Cardell could
not provide a specific industry definition for the term.

       Fourth, the court noted that Ms. Cardell had failed to consider certain important

pieces of information, including: (1) whether Dr. Holmes and Dr. O’Malley had other

income streams; and (2) how changing insurance reimbursement rates were influencing the

practice’s profitability.

                ii.         The Daubert Factors

       The trial court then considered the Daubert-Rochkind factors:

                      (1)      Testing – The court acknowledged that a “before-and-after”

                               analysis was appropriate and testable generally, but questioned the

                               testability of Ms. Cardell’s judgment calls, including why she had

                               chosen 2015 as the base year and the meaning of “economic

                               impact.”

                      (2)      Peer review – The court did not find this factor relevant.

                      (3)      Rate of error – The court did not find this factor relevant as it is

                               normally applied, where there is some known rate of false positive

                               or false negative results. But the court did note concern with Ms.

                               Cardell’s June 2021 pre-hearing updates, as those changes were

                               not caused by a change in the facts.

                      (4)      Standards and controls – The court said “there was very little

                               evidence of any standards of controls that exist,” in particular on

                               economic impact and the treatment of owner draws in profit

                               computations.




(5)   General acceptance – The court acknowledged that the before-

      and-after analysis to measure lost profits was generally accepted.

(6)   Purpose: prior research or litigation – Although the court

      expressed that there is “no inherent negativity” to experts who

      develop their opinions for litigation purposes compared to those

      who develop their expertise for independent research, the court did

      express concerns about Ms. Cardell’s heavy reliance on oral

      communications with PNSI personnel.

(7)   Unjustifiable extrapolation from an accepted premise – The

      court said: “This applies if the … premise that we’re talking about

      from which the unfounded conclusions roles [sic] would be the

      acceptance of 2015 as the benchmark[.]”

(8)   Accounting for obvious alternative explanations – The court

      found that Ms. Cardell “clearly” had not accounted for alternative

      explanations – in particular, she had failed to consider doctor-

      specific revenue generation and changing reimbursement rates’

      effects on revenue.

(9)   Care here as in professional non-litigation work – The court

      observed as to this factor: “I don’t find that to be applicable, I don’t

      know what to say about that and I have no reason to think that she

      blew this off as an inconsequential project, I mean she took this

      very seriously.”


                  (10)   Field known to reach reliable results for this type of opinion –

                         The court “incorporate[d] everything [it had] said,” noting in

                         particular the June 2021 updates (driven by subjective reasons

                         rather than newly revealed facts) as “mak[ing] the whole reliability

                         even that much more suspect.”

       In sum, the court said, PNSI had failed to meet its burdens primarily as to Ms.

Cardell’s reliability and secondarily as to Ms. Cardell’s usefulness to the jury, although the

court described the “usefulness” finding as a “very, very slight factor.”

       On the following day, the trial court issued a written supplement to its oral ruling in

which it discussed the persuasive value of CDW LLC, et al. v. NETech Corp., 906 F. Supp.

2d 815 (S.D. Ind. 2012). In CDW, the expert determined lost profits by using the “yardstick

method” – i.e., he compared the subject business branch’s profits to the plaintiff-business’s

other branches. The CDW Court noted that “[a]n expert’s choice in data sampling is at the

heart of his methodology. A yardstick approach is an acceptably reliable method under

Daubert for calculating lost profits only if the benchmarks (or yardsticks) are sufficiently

comparable that they may be used as accurate predictors of what the target would have

done.” CDW, 906 F. Supp. 2d at 824 (internal citation omitted). “Absent the requisite

showing of comparability, a damage model that predicts either the presence or absence of

future profits is impermissibly speculative and conjectural.” Id. (internal quotation marks

and citation omitted). The trial court found the before-and-after and yardstick methods

sufficiently similar and wrote that “the expert’s choice of a benchmark in CDW is

analogous to the expert’s choice of 2015 as the benchmark in the instant case. … Thus


[this] Court cites CDW as having persuasive value in finding that [PNSI] has not satisfied

Maryland Rule 5-702 and this Court’s decision to exclude this expert on Daubert grounds.”

       The parties subsequently filed a stipulation of dismissal, agreeing (among other

things) that PNSI could not prove a prima facie case with respect to its claims for

accountant malpractice, negligent misrepresentation, and breach of contract, in light of the

trial court’s exclusion of Ms. Cardell’s expert testimony. The trial court entered summary

judgment in favor of KatzAbosch and dismissed the case in its entirety.

       3. Appeal

       PNSI appealed to the Appellate Court of Maryland. In a reported opinion, the

Appellate Court held that the circuit court had abused its discretion in finding Ms. Cardell’s

methodology unreliable under Daubert-Rochkind. Parkway Neuroscience and Spine

Institute, LLC v. Katz, Abosch, Windesheim, Gershman & Freedman, P.A., et al., 255 Md.

App. 596, 623-37 (2022). The Appellate Court discerned error in the trial court’s criticisms

of Ms. Cardell’s testimony based on: (1) the selection of 2015 as the base year;

(2) insurance reimbursement rates; (3) standards for member draws; (4) the June 2021

updates to her calculations; and (5) the lack of individual per-doctor lost profit figures. Id.

at 623-37. The Appellate Court also interpreted the trial court’s comments regarding Ms.

Cardell’s lack of experience with respect to specialty medical practices as a finding that

she lacked the requisite qualifications to be accepted as an expert. See id. at 622-23.

       Specifically, the Appellate Court concluded that “[a]nalyzing reimbursement rates,

selecting the base year, and relying on data from and conversations with PNSI all are issues

with the soundness of the data” rather than with the reliability of Ms. Cardell’s


methodology. Id. at 627 n.9. The court further opined that “[w]hether Ms. Cardell should

have deducted the member draws from the projected profits is something [KatzAbosch]

can attack during cross-examination before a jury – but not at the Daubert-Rochkind

hearing.” Id. at 633. The Appellate Court considered the proper treatment of member draws

to be “a fact-laden issue – involving credibility, not reliability.” Id.

       As to the June 2021 updates, the Appellate Court determined that the trial court

misapplied the “known or potential rate of error” Daubert factor, because “[t]he error rate

that Daubert speaks of is the rate of unknown errors in the methodology employed, not an

error correction rate.” Id. at 634 (internal quotation marks and citation omitted) (emphasis

in original). To find methodological unreliability in an expert’s decision to correct errors

in her earlier analysis “would be a disincentive to disclose and explain errors.” Id. at 635.

       Finally, regarding Ms. Cardell’s failure to calculate member-specific lost profits,

the Appellate Court concluded that the circuit court looked improperly to causation when

the purpose of the Daubert hearing was to assess the reliability of Ms. Cardell’s

methodology – not whether PNSI had met its burden of proof on causation. Id. at 636.

       The Appellate Court reversed the circuit court’s decisions excluding Ms. Cardell,

striking PNSI’s lost profits claim, and granting summary judgment. The Appellate Court

remanded the case to the circuit court for proceedings consistent with its opinion. Id. at

639.

       KatzAbosch then petitioned this Court for a writ of certiorari, which we granted on

January 20, 2023. Katz, Abosch, Windesheim, Gershman & Freedman, P.A. v. Parkway

Neuroscience and Spine Institute, LLC, 482 Md. 534 (2023). KatzAbosch presents the


following question for our review: “Did the [Appellate Court] err in finding that the trial

court abused its discretion in excluding expert testimony on lost profits?”

                                             II

                                   Standard of Review

       Appellate courts review a trial court’s decision concerning the admissibility of

expert testimony under Maryland Rule 5-702 for abuse of discretion. See Rochkind, 471

Md. at 10-11; State v. Matthews, 479 Md. 278, 305-06 (2022). As we said in Matthews:

       Under this standard, an appellate court does “not reverse simply because the
       ... court would not have made the same ruling.” Devincentz v. State, 460 Md.
       518, 550 (2018) (internal quotation marks and citation omitted). “Rather, the
       trial court’s decision must be well removed from any center mark imagined
       by the reviewing court and beyond the fringe of what that court deems
       minimally acceptable.” Id. (internal quotation marks and citation omitted);
       see also Williams v. State, 457 Md. 551, 563 (2018) (“An abuse of discretion
       occurs where no reasonable person would take the view adopted by the
       circuit court.”); Jenkins v. State, 375 Md. 284, 295-96 (2003) (“Abuse occurs
       when a trial judge exercises discretion in an arbitrary or capricious manner
       or when he or she acts beyond the letter or reason of the law.”).

Matthews, 479 Md. at 305-06; see also Abruquah v. State, 483 Md. 637, 652 n.5 (2023)

(an abuse-of-discretion analysis requires a reviewing court to determine the “outer bounds

of what is acceptable expert evidence”). As the Supreme Court has explained, “the law

grants a [trial] court the same broad latitude when it decides how to determine reliability

as it enjoys in respect to its ultimate reliability determination.” Kumho Tire Co. v.

Carmichael, 526 U.S. 137, 142 (1999) (emphasis in original).




                                            III

                                        Discussion

   A. From Frye-Reed to Daubert-Rochkind

       1. Frye-Reed

       Starting with Maryland’s 1978 adoption of the D.C. Circuit’s 1923 Frye general

acceptance test, Maryland courts deciding the admissibility of expert testimony predicated

on a novel scientific principle or discovery would determine whether the scientific

principle or discovery was generally accepted in the relevant scientific community. Reed

v. State, 283 Md. 374 (1978); see Rochkind, 471 Md. at 13 (collecting cases under the Frye-

Reed regime).

       In 1993, in Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court held

that Federal Rule of Evidence 702 superseded Frye. The Court provided a non-exclusive

list of factors that may be pertinent when determining whether the scientific testimony at

issue is not only relevant but reliable. 509 U.S. 579, 589, 593-94 (1993). “The Daubert

analysis, according to the Supreme Court, was more flexible than the ‘uncompromising

[Frye] general acceptance test’ and gave trial courts greater discretion to admit scientific

expert testimony that is relevant and founded on sound principles, even though novel or

controversial.” Rochkind, 471 Md. at 14 (quoting Daubert, 509 U.S. at 596) (additional

internal quotation omitted). Daubert allowed trial courts to admit a broader range of

scientific testimony than would have been possible under Frye – including, for example,

minority opinions within a field. General Electric Co. v. Joiner, 522 U.S. 136, 142 (1997).




But it did not uncritically throw the doors open to expert testimony, and instead it required

the trial court to act as a “gatekeeper.” Id.

       Under the Frye-Reed regime, the proponent of the evidence typically showed the

general acceptance of the scientific expert’s methodology “by surveying scientific

publications, judicial decisions, or practical applications, or by presenting testimony from

scientists as to the attitudes of their fellow scientists.” 1 MCCORMICK ON

EVID. § 203.1 (8th ed. 2020) (Standards for admitting scientific evidence–The general-

acceptance requirement). “Daubert, by contrast, refocus[ed] the attention away from

acceptance of a given methodology … and centers on the reliability of the methodology

used to reach a particular result.” Rochkind, 471 Md. at 31.

       While Daubert originally focused solely on the expert’s methodology (as opposed

to their conclusions), the Joiner Court recognized that conclusions and methodology “are

not entirely distinct from one another. Trained experts commonly extrapolate from existing

data. But nothing in either Daubert or the Federal Rules of Evidence requires a district

court to admit opinion evidence that is connected to existing data only by the ipse dixit of

the expert. A court may conclude that there is simply too great an analytical gap between

the data and the opinion proffered.” Joiner, 522 U.S. at 146.




       2. Maryland Rule 5-702

       This Court adopted Maryland Rule 5-702 in 1994, shortly after the Supreme Court

issued Daubert but well before this Court fully embraced Daubert’s approach in Rochkind.6

Under Rule 5-702,

       [e]xpert testimony may be admitted, in the form of an opinion or otherwise,
       if the court determines that the testimony will assist the trier of fact to
       understand the evidence or to determine a fact in issue. In making that
       determination, the court shall determine

           (1) whether the witness is qualified as an expert by knowledge, skill,
               experience, training, or education,

           (2) the appropriateness of the expert testimony on the particular subject,
               and

           (3) whether a sufficient factual basis exists to support the expert
               testimony.

The third “sufficient factual basis” prong includes two sub-factors. First, the expert must

have available an adequate supply of data. Second, the expert must use a reliable

methodology in analyzing that data. Matthews, 479 Md. at 309; Roy v. Dackman, 445 Md.

23, 42-43 (2015). Absent either of these factors, an expert opinion is “mere speculation or

conjecture.” Matthews, 479 Md. at 309 (quoting Rochkind, 471 Md. at 22).

       3. Rochkind

       After the consensus among the states had shifted to the Daubert regime, Maryland

courts followed suit in 2020. Rochkind pointed to the problem with Frye that Daubert was



       6
        In the 40 years after Reed, Maryland courts experienced a “jurisprudential drift:
the Frye-Reed standard announced in 1978 slowly morphed into a ‘Frye-Reed Plus’
standard, implicitly and explicitly relying on and adopting several Daubert principles.”
Rochkind, 471 Md. at 5.

meant to solve: the court sought not just pure liberalization but instead to correct errors in

both directions. “[U]sing acceptance as the only measure of reliability presents a

conundrum: a generally accepted methodology may produce ‘bad science’ and be admitted,

while a methodology not yet accepted may be excluded, even if it produces ‘good

science.’” Rochkind, 471 Md. at 30. Maryland’s adoption of Daubert’s focus on reliability

would “streamline the evaluation of scientific expert testimony under Rule 5-702.” Id. at

35.

       Under Rochkind, trial courts “should consider a number of factors in determining

whether the proffered expert testimony is sufficiently reliable to be provided to the trier of

fact.” Matthews, 479 Md. at 310. They are:

          (1) whether a theory or technique can be (and has been) tested;

          (2) whether a theory or technique has been subjected to peer review and
              publication;

          (3) whether a particular scientific technique has a known or potential
              rate of error;

          (4) the existence and maintenance of standards and controls; …

          (5) whether a theory or technique is generally accepted[;]

          […]

          (6) whether experts are proposing to testify about matters growing
              naturally and directly out of research they have conducted
              independent of the litigation, or whether they have developed their
              opinions expressly for purposes of testifying;

          (7) whether the expert has unjustifiably extrapolated from an accepted
              premise to an unfounded conclusion;

          (8) whether the expert has adequately accounted for obvious alternative
              explanations;


            (9) whether the expert is being as careful as he [or she] would be in his
                [or her] regular professional work outside his [or her] paid litigation
                consulting; and

            (10) whether the field of expertise claimed by the expert is known to
                 reach reliable results for the type of opinion the expert would give.

Matthews, 479 Md. at 310-11 (summarizing Rochkind) (internal citations omitted). The

Rochkind Court added several overarching items of guidance in adopting the Daubert

standard:

            1. The reliability inquiry is flexible.

            2. Trial courts must focus “solely on principles and methodology, not on the

               conclusions that they generate,” although those are not entirely distinct and

               thus a trial court must consider the relationship between the two.

            3. A trial court need not “admit opinion evidence that is connected to existing

               data only by the ipse dixit of the expert”; rather, a court may conclude that

               there is simply too great an analytical gap between the data and the opinion

               proffered.

            4. All of the Daubert factors are relevant in the reliability inquiry, but none is

               dispositive, and a trial court may apply some, all, or none depending on the

               particular expert testimony at issue.

            5. Rochkind did “not upend [the] trial court’s gatekeeping function. ‘Vigorous

               cross-examination, presentation of contrary evidence, and careful instruction




              on the burden of proof are the traditional and appropriate means of attacking

              shaky but admissible evidence.’”

See Matthews, 479 Md. at 311-12 (summarizing the Rochkind Court’s observations).

       4. Matthews

       In Matthews, this Court for the first time post-Rochkind addressed under the

Daubert standard whether a trial court erred in deciding the admissibility of expert

testimony. Matthews, 479 Md. at 284. In attempting to solve a murder, the State enlisted

the help of an FBI scientist who used a technique known as “reverse projection

photogrammetry” to estimate the height of a suspect who had been caught on video but

whose face was indiscernible. Id. at 288-89. The FBI scientist determined that the suspect

on the video was approximately 5’8” tall, plus or minus two-thirds of an inch; defendant

Matthews was approximately 5’9”. Id. But the FBI scientist’s expert report noted that, due

to several variables, “the degree of uncertainty in this measurement could be significantly

greater,” and at a pretrial hearing, the expert testified that she could not scientifically

quantify several variables that might lead to uncertainty greater than two-thirds of an inch.

Id. Nevertheless, the trial court admitted the expert testimony, and the jury convicted

Matthews of the murder. Id. at 297, 304.

       The intermediate appellate court reversed Matthews’s conviction, reasoning that the

inability to quantify the effect of the variables noted by the expert made her height

measurement unreliable. The court perceived an “analytical gap” between the underlying

data and the expert’s conclusion, and therefore held that the trial court abused its discretion

in admitting the expert opinion testimony. Id. at 304-05.


       This Court reinstated Matthews’s conviction, concluding that the expert’s

methodology was reliable and finding no analytical gap in the expert’s proffered testimony.

Matthews had argued that only one Daubert-Rochkind factor warranted exclusion of the

testimony: the error rate, driven by the expert’s inability to provide an overall margin of

error for her height estimate. This Court explained that “it is not sufficient to point to an

unknown degree of uncertainty/error rate that applies to an expert opinion and claim that a

trial court is necessarily stripped of discretion to admit that opinion.” Matthews, 479 Md.

at 314. Instead, we distinguished between “uncertainty inherent in an expert’s

methodology” and “uncertainty that applies to an expert’s conclusions following the

application of a reliable methodology.” Id. at 315. Under the former circumstance of

inherent uncertainty, a trial court more likely would exclude expert testimony due to “the

unacceptably high risk of an inaccurate conclusion being reached in every case where the

technique is used.” Id. at 316. However, the latter scenario, involving uncertainty in an

expert’s conclusion following the application of a reliable methodology, “is generally less

problematic than where an expert has applied a technique that is unreliable in every

instance in which it is used.” Id.

       In Matthews, it was undisputed that the expert’s methodology was reliable, and that

the uncertainty related to the expert’s conclusion following the application of that

methodology. Thus, we recognized, the trial court was not required to exclude the

testimony due to an inherently unreliable methodology. Id. at 317. However, we

“emphasize[d] that just because the trial court was not required to exclude [an expert’s]

testimony when [the expert] acknowledged an unknown degree of uncertainty, it does not


follow that the trial court was required to admit it.” Id. We further explained that, if the

uncertainty applies to the expert’s conclusions, the “trial court should determine whether

the uncertainty in the expert’s conclusions is the product of an analytical gap in the expert’s

analysis and/or whether the uncertainty ultimately renders the opinion unhelpful to the trier

of fact.” Id. at 314-15. If either of those circumstances exists, the trial court acts within its

discretion in excluding the proffered testimony. Id.

       In Matthews’s case, we perceived no analytical gap in the expert’s testimony, where

there was no disconnect between the results of the expert’s analysis and the expert’s

opinion. Id. at 318. There was nothing illogical about the expert’s explanation that her

analysis showed the subject’s height was 5’8” plus or minus two-thirds of an inch, although

the margin of error might be greater based on other variables she could not quantify. Id. at

318.

       We also concluded that the trial court had acted within its discretion in finding that

the expert’s testimony would “assist the trier of fact to understand the evidence or to

determine a fact in issue” – in other words, that it was helpful, as required by the text of

Rule 5-702. See id. at 319-23. First, the expert had explained her analysis in detail.

Matthews, 479 Md. at 319. Second, the expert had explained why, even with the unknown

degree of uncertainty attributable to certain variables, she remained comfortable with her

height estimate, including the two-thirds inch margin of error. Id. at 319-20. Finally, the

expert herself had stood in the same spot and position as the subject in the image, and was

able to opine that the subject of the image was just slightly shorter than the expert herself,

who was between 5’9” and 5’10”. Id. at 320. Together, these factors allowed the trial court


to reasonably conclude that the expert’s opinion would help the jury, despite the expert’s

acknowledged uncertainty. Id. at 320-21.

   B. Applying Daubert-Rochkind Here

       The case centers on the primacy – and boundaries – of methodological reliability in

the Daubert-Rochkind analysis. As we explain below, the trial court acted within its

discretion in considering most of its points of concern at the Daubert-Rochkind hearing.

However, the court made a significant error when it relied on Ms. Cardell’s June 2021

normalizing adjustments regarding trauma and on-call pass-through payments as a basis

for excluding her testimony. We shall order a limited remand under Maryland Rule

8-604(d)(1) for the trial court to revisit its ruling without consideration of the June 2021

normalizing adjustments as reflecting on the reliability of Ms. Cardell’s methodology.

       1. The Relationship Between Data and Methodology

       An expert witness generally arrives at an opinion by choosing a methodology,

selecting data to which to apply the chosen methodology, and drawing conclusions based

on the results of the application of the methodology. The reliability of the expert’s

methodology is a core focus of a trial court under Rule 5-702. Without reliable methods

(in addition to an adequate supply of data), an expert’s opinion lacks sufficient factual basis

to support it and is instead “mere speculation or conjecture.” Matthews, 479 Md. at 309.

The disagreement here largely is over the precise boundary between data and methodology

when assessing the factual basis of an expert’s testimony. There are two competing visions.

       KatzAbosch and the trial court offer one vision, in which input-related choices an

expert makes can be so central to a straightforward methodology that those choices


implicate the reliability of the methodology itself. As the trial court stated in its

supplemental written memorandum, “[a]n expert’s choice in data sampling is at the heart

of his methodology.” (Quoting CDW, 906 F. Supp. 2d at 824.) The before-and-after method

for measuring a company’s lost profits is as widely accepted as it is simple: identify a

benchmark period before an alleged harm event and compare the company’s profits during

that benchmark period to the company’s profits in subsequent periods. The play in the

joints arises out of the choices the expert makes in identifying the benchmark period and

thereby deciding what will be the “before” period and what will be the “after” period, and

in the choices the expert makes about how she will measure “profits.” Should the expert

use pre- or post-“guaranteed payment” figures? Should she apply the before-and-after

method to a medical practice’s financial performance without regard to changes in

reimbursement rates between years? In KatzAbosch’s and the trial court’s view, these

considerations concerning data differ from consideration of the soundness of the data that

an expert uses in applying their methodology, or the soundness of the data from which an

expert extrapolates after completing the application of the methodology.7



       7
         Consider the use of inputs where their selection does not depend on judgment calls
in the way that the trial court viewed the selection of the benchmark year in this case. For
example, imagine hypothetically that KatzAbosch did not dispute that 2015 was the proper
benchmark year and agreed that Ms. Cardell used all the correct sources of data in her
calculation of lost profits, but KatzAbosch called the veracity of some of the data into
question (perhaps suggesting that bookkeepers at PNSI had not accurately classified
various expenses in the company’s books and records). In that instance, KatzAbosch would
have been complaining about the soundness of the data to which Ms. Cardell applied a
concededly reliable methodology. The veracity of the data in this hypothetical is a factual
question that the expert has no real impact on; instead, it is up to the jury to decide whether
the expert’s reliable methodology has been applied to accurate data or whether the
proponent has offered “garbage in, garbage out” expert testimony to be discredited even
after being admitted.

       PNSI and the Appellate Court offer a more rigid vision, arguing that anything

dealing with the inputs is a “data” question to be siloed off from the methodological

reliability analysis and instead left to the jury.8 They acknowledge that the line between

data and methodology (and, for that matter, conclusions) is blurry, but they draw the line

sharply in this case, arguing that only the core before-and-after subtraction counts as

methodology, and the rest is data (e.g., why 2015?) or conclusion. In this regard, PNSI

relies primarily on the Seventh Circuit’s decision in Manpower, Inc. v. Ins. Co. of Pa., 732

F.3d 796 (7th Cir. 2013), noting that we cited Manpower in Matthews.9

       8
        For example: “Whether Ms. Cardell failed to consider reimbursement rates is not
an issue with the methodology – the before-and-after method. Rather, it is an issue with
the soundness of the data she used to reach her conclusion.” Parkway Neuroscience, 255
Md. App. at 627.
       9
        See Matthews, 479 Md. at 316 (quoting Manpower, 732 F.3d at 806: “The district
court usurps the role of the jury, and therefore abuses its discretion, if it unduly scrutinizes
the quality of the expert’s data and conclusions rather than the reliability of the
methodology the expert employed.”).

       In Manpower, the federal district court excluded insured-plaintiff Manpower’s

accounting expert, whom Manpower needed to establish business interruption damages in

its suit against the insurer-defendant. The insurer-defendant challenged the reliability of

the expert’s methodology, and the district court found that the expert had followed the

master insurance policy’s “straightforward” methodology for his calculations. Manpower,

732 F.3d at 801-02. But the calculations’ reliability nevertheless “turn[ed] on whether [the

expert] used reliable methods when selecting the numbers used in his calculations.” Id. at

801. The district court found that the answer to that question was no. The expert used a

shorter base period (from which he extrapolated a relatively high growth rate) because of

recent corporate acquisitions, new policies, and new managers at Manpower, and the expert

did not consider other indicators that may have affected the growth rate. Id. The district

court also criticized the expert’s reliance on conversations with the company managers. Id.

But the district court primarily found that the expert’s “‘analysis [broke] down’ at his

choice of growth rate.” Id. In the district court’s view, “[o]nly a more thorough analysis of

the reasons for the growth would have supported [the expert’s] choice of a projected growth

rate.” Id.
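The district court’s concern about the base period can be illustrated with a hypothetical calculation: the shorter the base period an expert selects, the more a recent uptick dominates the extrapolated growth rate. The revenue figures below are invented solely to show the effect of that input choice; they do not come from the Manpower record.

```python
# Hypothetical monthly revenues (illustrative only); the last months show a jump.
revenues = [100, 101, 102, 103, 110, 118]

def growth_rate(series):
    """Average period-over-period growth across the series."""
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

long_base = growth_rate(revenues)        # growth rate over the full base period
short_base = growth_rate(revenues[-3:])  # growth rate over only the recent months

# The shorter base period yields the higher projected growth rate. Whether that
# selection was justified is the kind of input question the district court probed.
```

The same subtraction-and-division "methodology" runs in both cases; only the chosen window differs, yet the projected damages diverge substantially.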

       The Seventh Circuit reversed the district court’s exclusion of the expert, agreeing

with Manpower that “the district court exercised its gatekeeping role under Daubert with

too much vigor.” Id. at 805. The federal appellate court opined that reliability is primarily

a question of the expert’s methodology, not the data that is the input or the conclusions that

are the output. Id. at 806. The Manpower Court acknowledged the blurriness of the line

between undue scrutiny of an expert’s data and proper scrutiny of an expert’s methodology,

writing that “[t]he critical inquiry is whether there is a connection between the data

employed and the opinion offered.” Id. Opinions are properly excluded when they are

connected to existing data “only by the ipse dixit of the expert.” Id.

       More specifically, the Manpower Court said that the district court should have

stopped its inquiry when it found the methodology reliable, but instead it “drilled down to

a third level in order to assess the quality of the data inputs [the expert] selected” and then

took issue with that data selection. Id. at 807. The expert’s data selection was “substantially


more nuanced and principled than the district court’s characterization reflects,” and so the

ultimate opinion was not ipse dixit but rather “reasoned and founded on data.” Id. at 809.

Each of the district court’s criticisms of the expert’s reliability “was a comment on the

soundness of the factual underpinnings of his calculation” and the district court’s rulings

could even have been something of a “roadmap for [the defendant-insurer’s] cross-

examination of [the expert].” Id. “But the district court supplanted that adversarial process

with its admissibility determination” and “set the bar too high and therefore abused its

discretion.” Id.

       The Manpower Court’s understanding of the line between data and methodology,

of course, does not bind this Court, and its rigidity is not in keeping with the approach

taken by other federal appellate courts. Instead, many federal courts have explained the

Daubert standard in ways that reject this sharp line and acknowledge that problems with

data and data selection (which itself can involve its own methodology) can bear on

admissibility before the judge and not just weight before the jury. See, e.g., EEOC v.

Freeman, 778 F.3d 463, 467, 472 (4th Cir. 2015);10 Elcock v. Kmart Corp., 233 F.3d 734,




       10
          In Freeman, plaintiff EEOC’s industrial psychologist expert witness delivered a
report based on a database riddled with errors, introducing even more errors in an updated
analysis; the “sheer number of mistakes and omissions” rendered the expert’s analysis
“outside the range where experts might reasonably differ” under Kumho Tire, and so the
district court did not abuse its discretion in excluding the proffered expert’s testimony as
unreliable. The EEOC unsuccessfully argued that “the issue of the reliability of an expert’s
data is always a question of fact for the jury, except perhaps in some theoretical, rare case.”
Freeman, 778 F.3d at 472 (Agee, J., concurring).

755-56 (3d Cir. 2000);11 In re Mirena IUS Levonorgestrel-Related Products Liab. Litig.

(No. II), 982 F.3d 113, 123 (2d Cir. 2020).12

       Consider as one example Rink v. Cheminova, an Eleventh Circuit case in which the

appellate court found no abuse of discretion in the district court’s exclusion of a putative

class’s expert. 400 F.3d 1286, 1293-94 (11th Cir. 2005). The plaintiffs maintained that

Cheminova’s pesticide malathion defectively contained elevated levels of isomalathion,

which makes malathion particularly toxic to humans and which, they claimed, was created

by exposure of the malathion supply to temperatures above 77 degrees Fahrenheit during

storage at sites in Texas, Georgia, and Florida. Id. at 1289. The plaintiffs’ expert estimated

the isomalathion content of pesticides used in Tampa; he began with National Weather

Service temperature readings near the malathion storage sites, using the recorded high and

low temperatures as upper and lower limits of the probable exposure temperatures. Id.

Because there was evidence that the inside of the Texas storage facility was actually

18 degrees warmer than the outside ambient air temperature, he added 18 degrees to the



       11
         In Elcock, the Third Circuit held that the district court abused its discretion in
admitting the expert’s economic damages model, which relied on empirical assumptions
not supported by the record, fearing that a jury would be “likely to adopt the gross figure
advanced by a witness who has been presented as an expert.” 233 F.3d at 755-56.
       12
          In Mirena, the Second Circuit rejected the plaintiffs’ contention that a trial court
erred by taking a “hard look” at their expert’s methodology: “[A]n expert’s methodology
must be reliable at every step of the way, and in deciding whether a step in an expert’s
analysis is unreliable, the district court should undertake a rigorous examination of the
facts on which the expert relies, the method by which the expert draws an opinion from
those facts, and how the expert applies the facts and methods to the case at hand.” 982 F.3d
at 123 (internal quotation marks and citation omitted) (emphasis added by the Mirena
Court).

upper plausibility limits. Id. at 1290. He then averaged the upper and lower limits to find

the most probable exposure temperature. Id. He took that temperature and plugged it into

an equation to calculate the level of toxic isomalathion as a function of time and

temperature. Id. Importantly, the plaintiffs’ theory hinged on the isomalathion content of

the pesticides used in Tampa, Florida, which included not just the malathion that had

initially been stored in Texas, but also malathion that had been initially stored at sites in

Florida and Georgia. See id. at 1289-90.
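The two-step structure the Eleventh Circuit would later identify can be sketched as follows. The weather readings below are hypothetical, the offset step is simplified from the opinion’s description, and the isomalathion equation itself is not reproduced in the opinion, so only a placeholder stands in for it.

```python
# Hypothetical National Weather Service readings near a storage site (degrees F).
nws_highs = [88, 91, 85, 90]
nws_lows = [70, 72, 68, 71]

INDOOR_OFFSET = 18  # evidence: the Texas facility ran 18 degrees above ambient

# Step one (the extrapolation/transposition methodology the district court
# rejected): raise the upper plausibility limit by the indoor offset, then
# average upper and lower limits to estimate the most probable exposure temp.
upper_limit = max(nws_highs) + INDOOR_OFFSET   # 91 + 18 = 109
lower_limit = min(nws_lows)                    # 68
probable_temp = (upper_limit + lower_limit) / 2  # 88.5

# Step two: insert that estimate into an equation giving isomalathion content
# as a function of time and temperature. The actual equation is not in the
# opinion, so this is a named placeholder only.
def isomalathion_level(temp_f: float, days: float) -> float:
    raise NotImplementedError("the expert's equation is not in the record")
```

On this view, step one is itself a methodology that produces the data consumed by step two, which is why the district court could scrutinize it without merely second-guessing "hard data."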

       The district court excluded the expert’s testimony for several reasons, including the

expert’s “method of extrapolating data from one site [Texas] to another [Florida and

Georgia] without making particularized findings which accounted for the differences in

conditions and length of storage at each site.” Id. at 1290. On appeal, the plaintiffs argued

that the district court had improperly taken issue with the expert’s temperature data and not

his methodology. The Eleventh Circuit disagreed:

       While [the plaintiffs] suggest that the only methodology at issue was [the
       expert’s] use of an equation to determine isomalathion levels, this argument
       belies the fact that [the expert] employed two methodologies: first, he
       employed certain methods of extrapolation and transposition to arrive at
       temperature data; and second, he inserted the temperature data into an
       equation to arrive at the level of isomalathion in the [pesticide]. As we have
       explained, the district court’s exclusion of [the expert] was based on its
       rejection of his methodology to derive temperature data, not the data itself.
       Thus, the district court’s exclusion of [the expert] can be distinguished from
       a situation in which an exclusion is based on a district court’s refusal to credit
       hard data arrived at by unassailable methods.… Here, the data [the expert]
       produced was driven by the methodology he used, and thus the district
       court’s inquiry into how he arrived at the data is not inappropriate




       considering that the district court is charged with evaluating an expert’s
       methodology.

Id. at 1293.13 The Rink Court took an appropriately holistic view of the methodology being

applied, examining the production of the equation’s inputs as part of the process itself. A

narrower understanding of the methodology – something along the lines of “use this

formula to calculate the isomalathion content of this pesticide” – might have called for

allowing the expert to testify at trial, where opposing counsel could have impeached him

by showing the weaknesses in his Tampa storage temperature input selection. We agree

with the Eleventh Circuit that, under a proper application of Daubert, the trial court in Rink

acted as an appropriate gatekeeper.

       Another useful case to consider is In re Wholesale Grocery Products Antitrust

Litigation, an Eighth Circuit antitrust case in which the district court had excluded an expert

witness who sought to establish the plaintiff’s injury by selecting a competitive benchmark.

946 F.3d 995 (8th Cir. 2019). The district court excluded as unreliable the benchmark the

expert selected, because the expert’s choice of a non-independent chain grocery store (Stop

& Shop) was premised on an unfounded assumption that independent retailers’ charges


       13
          The Rink Court contrasted the case before it with Quiet Tech. DC-8, Inc. v. Hurel-
Dubois UK Ltd., 326 F.3d 1333 (11th Cir. 2003), where the district court had admitted
testimony of Hurel-Dubois’s expert on computational fluid dynamics. Rink, 400 F.3d at
1293. After losing at trial, Quiet Tech argued that the district court abused its discretion by
admitting the other side’s expert testimony. The Eleventh Circuit disagreed, observing that
Quiet Tech “does not argue that it is improper to conduct a [computational fluid dynamics]
study using the sorts of aerodynamic data that [the expert] employed, but rather that the
specific numbers that [the expert] used were wrong. Thus, the alleged flaws in [the
expert’s] analysis are of a character that impugn the accuracy of his results, not the general
scientific validity of his methods.” Quiet Tech, 326 F.3d at 1343-44. This case resembles
Rink, whereas the hypothetical we pose above in footnote 7 resembles Quiet Tech.

followed the same pattern as Stop & Shop’s. Id. at 999. The plaintiff argued on appeal that

benchmarking is a recognized tool for establishing antitrust injury, and once the expert

chose a benchmark, there was nothing left for the district court to analyze under Daubert;

what remained were questions of fact for the jury. Id. at 1001. The Eighth Circuit rejected

this argument “because it is the foundation of the assumption underlying the application of

the method employed by [the expert] on these facts … that led the district court to its

conclusion that [the expert’s] testimony should be excluded under Rule 702.” Id. “The

district court did not go ‘miles beyond’ its appropriate role in this case, as [the plaintiff]

argues, but rather held that the reasoning underlying [the expert’s] testimony was not on

solid footing because the assumption upon which the report relied was insufficient to

validate his opinion.” Id. at 1002. “[T]he district court held that ultimately these analyses

were a house of cards of sorts and our own analysis reveals no abuse of discretion in this

conclusion. At its base level, the core assumption of the analysis was by the ipse dixit of

[the expert].” Id. Here too, we see the same arguments as in Manpower and in the case

before us: undue scrutiny and a tunnel vision focus on methodology. But the Eighth Circuit

declined to accept the benchmark selection without question and instead approved of the

district court’s inquiry into the assumptions underlying the expert’s choice of benchmark.

This is a long way from Manpower’s approach.

       In short, whether an expert’s methodology is sufficiently reliable to admit the

expert’s testimony at trial will sometimes require a trial court to consider data and

assumptions that the expert has employed in deciding threshold points relating to the

methodology. Manpower’s rigid separation of “data” and “methodology” misses this grey


area and creates a categorical rule when the Daubert-Rochkind regime calls for flexibility

and deference.

       When this Court cited Manpower in Matthews, it did so to demonstrate the

proposition that where a sufficient factual basis exists under Rule 5-702(3) and Daubert-

Rochkind – that is, where an expert has applied a reliable methodology to an adequate

supply of data – courts should not exclude an expert merely because the expert’s particular

conclusions may be inaccurate, but rather should only exclude expert testimony that is

“mere speculation or conjecture.” Matthews, 479 Md. at 316.

       The Appellate Court in this case relied on the part of Manpower to which we do not

subscribe, categorizing the 2015 base year choice, the failure to consider reimbursement

rates, etc., as arguable defects in the soundness of Ms. Cardell’s data, rather than defects

of her methodology. See Parkway Neuroscience, 255 Md. App. at 627-28 & n.9. The

Appellate Court was right to center methodology in its analysis of the expert opinion’s

reliability, a critical aspect (along with the data’s adequacy) of the opinion’s factual basis

without which the opinion would be “mere speculation or conjecture.” Matthews, 479 Md.

at 309 (citing Rochkind, 471 Md. at 22); see Parkway Neuroscience, 255 Md. App. at 629

& n.11. But the intermediate appellate court, relying on Manpower, took an overly rigid

approach in analyzing the relationship between data and methodology, holding that the trial

court’s analysis of Ms. Cardell’s opinion went to her data rather than to her methodology.

Just as the U.S. Supreme Court has noted the blurred line between methodology and

conclusions, Joiner, 522 U.S. at 146, we note the sometimes blurred line between data and

methodology. Trial courts must not transmute all questions of data’s provenance or veracity


into questions of methodology, just as they must not – under Joiner – transmute all

disagreements with conclusions into disagreements with methodology. But by the same

token, trial courts should not wear “methodology blinders” and deny the existence of some

limited overlap between data and methodology (and between methodology and

conclusions). Determining whether a dispute concerning expert testimony implicates the

soundness of data or soundness of methodology is precisely the type of matter that calls

for the exercise of a trial court’s discretion.

       Here, the trial court explicitly noted speculative, insufficiently substantiated

judgment calls that were central to Ms. Cardell’s application of the before-and-after

method. In the trial court’s estimation, Ms. Cardell exercised subjective judgment in

settling on the 2015 base year – that is, it was impossible to test the validity of that decision.

Relatedly, the trial court was troubled by the effect on the base year determination of Ms.

Cardell’s treatment of member draws as expenses without understanding whether that

decision was consistent with any industry standard. In addition, the trial court faulted Ms.

Cardell for failing to appropriately factor in confounding variables (in particular, declining

insurance reimbursements) into her methodology. The court determined these judgment

calls veered toward speculation and conjecture and ate away at the factual basis for Ms.

Cardell’s opinions, because even though many of the problems had to do with the “inputs”

to the before-and-after method, the selection of those inputs is central to the reliability of

the method itself.

       We rely on trial courts that conduct Daubert-Rochkind hearings to determine where

the line between data and methodology is in the specific cases before them, and whether


the proffered expert’s choices relating to data, assumptions, and other inputs implicate the

reliability of the expert’s methodology. In this part of the trial court’s analysis, the court

did just that. That is, we discern no error in the trial court’s analysis of Daubert-Rochkind

factors one and two and four through nine.14,15




       14
         The trial court discussed Ms. Cardell’s decision not to account for the doctors’
individual revenue generation capabilities as part of the court’s discussion of Daubert
factor eight (accounting for obvious alternative explanations). As discussed below, we
think the court’s concerns on that front go more to the reliability of the before-and-after-
method when used to calculate lost profits of a limited liability company with a small
number of revenue generators, rather than a failure to account for obvious alternative
explanations.
       15
          The trial court said that, at the Daubert hearing stage, it was not the court’s
responsibility to determine whether Ms. Cardell qualified as an expert or not. And the court
explicitly stated that it was not deciding one way or the other whether Ms. Cardell was, in
fact, qualified to render an expert opinion at trial. However, the court did note Ms. Cardell’s
lack of experience with specialty medical practices and her reliance on oral
communications with PNSI staff (rather than universally available, concrete sources of
information), and her reliance on her own experiences and judgment rather than on industry
standards. In the court’s mind, these points undermined Ms. Cardell’s overall reliability.
We do not discern any abuse of discretion in the court’s reliance on these points.

        However, it is important to acknowledge that there is nothing inherently problematic
with an expert’s use of information provided orally or in writing by the proponent of her
testimony. Experts routinely rely on such information in arriving at their opinions. Nor is
it uncommon for experts to apply their subjective judgment, based on their training and
experience, in formulating their opinions. The trial court did not indicate that it believed it
was per se improper for an expert to rely on oral communications with the proponent of
her testimony or to make subjective judgment calls. Rather, the trial court’s point was that,
in the absence of other sources of information that tended to support the reliability of Ms.
Cardell’s methodology – such as industry standards – and given her inability to adequately
explain why she made the judgment calls she did, Ms. Cardell’s reliance on oral
communications with PNSI and on her subjective judgment detracted from the reliability
of her methodology.

       2. The Federal Analog to Maryland Rule 5-702

       The direction of analogous Federal Rule 702 confirms our understanding of

meaningful gatekeeping as to an expert opinion’s factual basis. In a May 2022 report, the

Advisory Committee on Evidence Rules wrote that

       the Committee resolved to respond to the fact that many courts have declared
       that the reliability requirements set forth in Rule 702(b) and (d) – that the
       expert has relied on sufficient facts or data and has reliably applied a reliable
       methodology – are questions of weight and not admissibility, and more
       broadly that the expert testimony is presumed to be admissible. These
       statements misstate Rule 702, because its admissibility requirements must be
       established to a court by a preponderance of the evidence. The Committee
       concluded that in a fair number of cases, the courts have found expert
       testimony admissible even though the proponent has not satisfied the Rule
       702(b) and (d) requirements by a preponderance of the evidence – essentially
       treating these questions as ones of weight rather than admissibility….

COMM. ON RULES OF PRAC. AND PROC. OF THE JUD. CONF. OF THE U.S., REP. OF THE

ADVISORY COMM. ON EVIDENCE RULES 6 (May 15, 2022), available at

https://perma.cc/PK3B-Q8G5.

       Absent any contrary Congressional action, Federal Rule 702 will officially reflect

this reality come December 1, 2023, when a set of amendments will take effect to provide

that the proponent of expert testimony must meet Rule 702’s standards – including related

to the testimony’s factual basis – by a preponderance of evidence, or else the testimony is

inadmissible and may not go to the jury:

       A witness who is qualified as an expert by knowledge, skill, experience,
       training, or education may testify in the form of an opinion or otherwise if
       the proponent demonstrates to the court that it is more likely than not that:

          (a) the expert’s scientific, technical, or other specialized knowledge will
              help the trier of fact to understand the evidence or to determine a fact
              in issue;


          (b) the testimony is based on sufficient facts or data;

          (c) the testimony is the product of reliable principles and methods; and

          (d) the expert’s opinion reflects a reliable application of the principles
              and methods to the facts of the case.

U.S. SUPREME COURT, ORDER 4 (April 24, 2023), available at https://perma.cc/RU2S-

KEYM (most relevant new language emphasized). The change emphasizing the

preponderance standard “specifically was made necessary by the courts that have failed to

apply correctly the reliability requirements of [Federal Rule 702].” FED. R. EVID. 702

advisory committee’s note to 2023 amendment. “[M]any courts have held that the critical

questions of the sufficiency of an expert’s basis, and the application of the expert’s

methodology, are questions of weight and not admissibility. These rulings are an incorrect

application of Rules 702 and 104(a).” Id.; see Sardis v. Overhead Door Corp., 10 F.4th

268, 283-84 (4th Cir. 2021) (observing that the then-proposed amendments to Federal Rule

702 would make explicit the preponderance of evidence standard of admissibility to the

rule’s sufficiency of basis and reliability analyses; confirming that these rule revisions and

clarifications “clearly echo[] the existing law on the issue” from Daubert, Kumho Tire, and

Rule 702 itself).

       The new amendments comprehend that some challenges to expert testimony will,

in fact, go to weight rather than admissibility.

       For example, if the court finds it more likely than not that an expert has a
       sufficient basis to support an opinion, the fact that the expert has not read
       every single study that exists will raise a question of weight and not
       admissibility. But this does not mean, as certain courts have held, that
       arguments about the sufficiency of an expert’s basis always go to weight and
       not admissibility. Rather it means that once the court has found it more likely


       than not that the admissibility requirement has been met, any attack by the
       opponent will go only to the weight of the evidence.

FED. R. EVID. 702 advisory comm. note to 2023 amendment (emphasis added). Indeed,

they do not require federal courts to “nitpick an expert’s opinion in order to reach a perfect

expression of what the basis and methodology can support,” but instead seek to block

“claims that are unsupported by the expert’s basis and methodology.” Id.

       3. The Trial Court’s Error

       Although we discern no error in most of the trial court’s application of the Daubert-

Rochkind factors, we are constrained to conclude that the court erred in one respect. The

trial court viewed Ms. Cardell’s June 2021 updates, which the court discussed in the

context of Daubert-Rochkind factors three (known or potential rate of error) and 10

(whether the field of expertise is known to reach reliable results for the projected type of

expert opinion), as implicating the reliability of her methodology. To be sure, the court

observed that the error rate factor did not apply “as it was ordinarily considered,” e.g.,

concrete and numerical probabilities of correctness in DNA testing. Still, the court was

troubled by the timing of Ms. Cardell’s updates – without new information and “for fully

subjective reasons” – which, the court believed, reflected negatively on her methodology.

       This analysis missed the mark. In fact, there was new information of a sort: Ms.

Cardell noticed something she had not noticed on first examination. She then sought

clarification and revised her opinion, just as a doctor might order a biopsy and diagnose a

patient with skin cancer if the doctor had missed a mole upon first examination of the

patient. Catching something peculiar the second time around neither undermines the



adequacy of the data (the patient’s skin) nor the court’s understanding of the expert’s

methodology (examining the patient’s skin for disease indicators).

       Notably, when discussing the tenth Daubert factor – whether the field of expertise

is known to reach reliable results for the type of opinion the expert would give – the trial

court specifically referenced the June 2021 adjustments, stating that “the mere fact the

findings changed in June of this year for fully subjective reasons, it had nothing to do with

any new information. It kind of makes the whole reliability even that much more suspect.”

This comment, along with the court’s other references to the June 2021 adjustments, leaves

us with the abiding concern that, in this instance, the trial court strayed from its gatekeeping

role and that this error was significant in the court’s overall analysis.16 Ms. Cardell’s

decision to make the adjustments relating to trauma/on-call payments in 2016 did not

implicate the reliability of her methodology. At most, it went to the care with which she



       16
          The trial court also erred when it stated that Ms. Cardell’s lost-profit calculations
would not be helpful to the jury. The court reached this conclusion based on Ms. Cardell’s
failure to consider each departing physician’s projected revenue generation in her
calculation of lost profits. The court’s point was that, if the jury concluded that one (or
more) of the departing physicians left for reasons other than KatzAbosch’s negligence, a
lost-profits analysis that included that physician’s projected revenue would overstate the
practice’s lost profits and therefore not help the jury to accurately calculate damages. The
court stated, however, that its finding with respect to helpfulness was a “very, very slight
factor” in its decision to exclude Ms. Cardell’s testimony.

       We confirmed in Matthews that a trial court has discretion under Maryland Rule
5-702 to exclude an expert’s opinion based on a reliable methodology if the court
nevertheless concludes that the expert’s testimony would not be helpful to the jury. See
Matthews, 479 Md. at 320-21. Here, however, there was no way to know before the trial
whether the jury would conclude that any of the physicians left for reasons other than
KatzAbosch’s negligence. The trial court’s conclusion that Ms. Cardell’s testimony would
not be helpful, therefore, was speculative and not a proper ground for exclusion.

applied her methodology, which is a matter to be explored on cross-examination before the

jury (if Ms. Cardell’s testimony is otherwise found to be sufficiently reliable).17

       In Matthews, we “emphasize[d] that just because the trial court was not required to
exclude [the expert’s] testimony when [the expert] acknowledged an unknown degree of
uncertainty, it does not follow that the trial court was required to admit it.” 479 Md. at 317.
Similarly, we do not believe that, based on the record before the trial court in this case, the
court was required to exclude Ms. Cardell’s testimony. Rather, based on the factors that
the trial court appropriately considered in this case, it was within the trial court’s discretion
to admit or exclude Ms. Cardell’s testimony. Having carefully reviewed the record, we
conclude that the fair and prudent course of action at this point is to order a limited remand
to the circuit court under Maryland Rule 8-604(d)(1) so that the trial court may decide to
admit or exclude Ms. Cardell’s testimony without consideration of her June 2021
normalizing adjustments as reflecting on the reliability of Ms. Cardell’s methodology.18
The trial court may make that decision based on the existing record or, in its discretion,
may allow further examination of Ms. Cardell and/or other witnesses before issuing a new



       17
          The trial court elsewhere in its analysis of the Daubert factors said that it had “no
reason to think that [Ms. Cardell] blew this off as an inconsequential project, I mean she
took this very seriously.”
       18
          We do not mean to suggest that, in every case where an appellate court concludes
that part of a trial court’s Daubert ruling was based on proper factors and another part was
not, a limited remand to the trial court is necessary. We expect that, in many cases, it will
be clear from the record whether the trial court would have admitted or excluded the expert
testimony without consideration of a factor that is later determined to have been improper
on appeal. We encourage trial courts to make such matters explicit on the record when
possible.

ruling.19 In the trial court’s discretion, it also may permit the parties to submit additional
written and oral arguments prior to issuing its ruling. The trial court should provide a
written explanation of its decision.

       We shall retain jurisdiction over this case. After the trial court issues its decision on
remand, we shall issue an appropriate Order.



       19
          The trial court’s criticism of Ms. Cardell’s failure to consider the remaining and
departing doctors’ inherent ability to generate revenue when discussing Daubert factor
eight (accounting for obvious alternative explanations) is more a criticism of the reliability
of the before-and-after methodology itself as applied to a limited liability company with a
small number of revenue-generating members, than it is a criticism of Ms. Cardell’s failure
to account for obvious alternative explanations in applying the before-and-after
methodology. In any business with a small number of revenue-generating members, the
firm’s overall performance in a particular year may well be the result of some members
having unusually strong or unusually weak years, compared to their individual
performances over time. As Ms. Cardell described the before-and-after method of
calculating lost profits, there is no consideration of how individual revenue generators
perform over time. Rather, Ms. Cardell testified, “under the before and after methodology
what happens is that the damages expert looks at the [company’s] performance in two
different periods. The first being what we called the benchmark period or the before period.
And that’s the period that is unaffected by whatever the alleged harm event, breach is. And
then that is [compared] to the after period or the loss period, which is the period that is
affected by whatever the alleged harm, breach, or event is. This methodology essentially
calculates what the [company’s] profits would have been but for again that alleged breach,
harm event.”

        The trial court did not question the general acceptance of the before-and-after
analysis as a reliable methodology for calculating lost profits. Nor did the trial court
explicitly state that the before-and-after methodology, as described by Ms. Cardell, cannot
reliably be applied to a limited liability company with a small number of revenue-
generating members. However, that seems to be the import of the concerns the trial court
articulated when discussing Ms. Cardell’s failure to consider the individual revenue-
generating abilities of the physicians who departed, as well as the two physicians who
remained at PNSI. On remand, in its discretion, the trial court may give additional
consideration to whether the before-and-after methodology, as a general matter, is a
reliable methodology for calculating lost profits of an entity such as PNSI. We express no
view concerning the answer to that question.

                                           IV

                                      Conclusion

       The trial court, in its gatekeeping role under Daubert-Rochkind, acted within its
discretion in analyzing the data and other inputs and assumptions that implicated the
reliability of Ms. Cardell’s methodology. However, the court improperly considered Ms.
Cardell’s June 2021 normalizing adjustments relating to trauma/on-call payments as
reflecting on the reliability of Ms. Cardell’s methodology. We order a limited remand to
the circuit court to allow that court to decide whether to admit or exclude Ms. Cardell’s
expert testimony without such consideration of the June 2021 normalizing adjustments.

                                  JUDGMENT OF THE APPELLATE COURT OF
                                  MARYLAND VACATED; CASE REMANDED TO
                                  THAT COURT WITH THE DIRECTION TO
                                  REMAND THE CASE TO THE CIRCUIT COURT
                                  FOR HOWARD COUNTY, WITHOUT
                                  AFFIRMING OR REVERSING THE JUDGMENT
                                  OF THE CIRCUIT COURT, FOR FURTHER
                                  PROCEEDINGS CONSISTENT WITH THIS
                                  OPINION. COSTS TO ABIDE.




Circuit Court for Howard County
Case No. C-13-CV-18-000181
Argued: May 4, 2023

                                           IN THE SUPREME COURT
                                               OF MARYLAND*

                                                       No. 30

                                              September Term, 2022


                                      KATZ, ABOSCH, WINDESHEIM,
                                   GERSHMAN & FREEDMAN, P.A., ET AL.

                                                         v.

                                         PARKWAY NEUROSCIENCE
                                         AND SPINE INSTITUTE, LLC


                                        Fader, C.J.,
                                        Watts,
                                        Hotten,
                                        Booth,
                                        Biran,
                                        Gould,
                                        Eaves,

                                                        JJ.


                                         Concurring Opinion by Booth, J.


                                        Filed: August 30, 2023

                                  * During the November 8, 2022 general election,
                                  the voters of Maryland ratified a constitutional
                                  amendment changing the name of the Court of
                                  Appeals of Maryland to the Supreme Court of
                                  Maryland. The name change took effect on
                                  December 14, 2022.
       I agree with and join the Majority’s well-written opinion in this case. I write
separately to respond to the Majority’s invitation to “reflect on [the] flexibility and
deference” due to courts analyzing the admissibility of expert testimony. Maj. Slip Op.
at 1. I have observed that our traditional formulation of the abuse of discretion standard is
not the best or most accurate way of describing our abuse of discretion review in the context
of reviewing expert testimony admissibility determinations, and suggest that this Court
reformulate the definition of our abuse of discretion standard in the context of appellate
review of expert witness testimony admissibility determinations.

       Since this Court’s adoption of Daubert in 2020, we have reviewed for abuse of
discretion four cases involving trial courts’ decisions to admit or preclude expert testimony:
State v. Matthews, 479 Md. 278 (2022); Abruquah v. State, 483 Md. 637 (2023); Oglesby
v. Baltimore School Associates, No. 26 Sept. Term, 2022, 2023 WL 4755689 (July 26,
2023); and this case. The Court’s decisions concerning the admissibility of expert
testimony in Matthews and Abruquah were not unanimous ones.1 Not only have there been


       1
         In State v. Matthews, 479 Md. 278, 325–40 (2022) (Watts, J., dissenting), Justice
Watts filed a dissenting opinion stating that she would have upheld the Appellate Court’s
conclusion that the trial court abused its discretion in admitting the State’s expert witness
concerning her opinions involving reverse projection photogrammetry.

       In Abruquah v. State, 483 Md. 637, 699–711 (2023), Justice Hotten filed a
dissenting opinion, joined by Justice Gould and Justice Eaves, in which those justices
would have determined that the trial court did not abuse its discretion in permitting the
unqualified opinions expressed at trial by the firearms expert. In a separate dissenting
opinion, Justice Gould criticized the Majority’s application of the abuse of discretion
standard of review, asserting that the Majority “sidestep[ped] the deferential standard of
review by recasting its decision as establishing the outer bounds of what is acceptable
expert evidence in [the] area [of firearms identification].” Abruquah, 483 Md. at 712
(quotations omitted) (Gould, J., dissenting).
different views concerning the circuit courts’ discretion in making expert witness
admissibility determinations among jurists on this Court, it is also notable that, in three of
the four above-referenced cases, this Court reached opposite conclusions from the
Appellate Court of Maryland undertaking the same review.2 In Matthews—our maiden
voyage in appellate review of a trial court’s decision to admit expert testimony under the
application of the Daubert-Rochkind factors—we “reaffirmed” the sentiments expressed
in Rochkind, that “it is still the rare case in which a Maryland trial court’s exercise of
discretion to admit or deny expert testimony will be overturned.” 479 Md. at 306. Given
our post-Rochkind batting average, I am not sure that sentiment holds true.

       I was one of the members of this Court who voted in Rochkind to adopt the Daubert
standard. I joined the majority opinions written by my colleagues in Matthews, Abruquah,
Oglesby, and the instant case, and I agree with our analysis in each of them. That said,
with some time to reflect on the Court’s application of the abuse of discretion standard in
the context of appellate review of a trial court’s decision to admit or deny expert testimony

       2
          See Matthews v. State, 249 Md. App. 509 (2021) (reversing a criminal defendant’s
second-degree murder conviction after determining that the circuit court’s decision to
admit the photogrammetry expert’s opinion about the height of a suspect captured in a
surveillance video was erroneous, and the error was not harmless); Oglesby v. Baltimore
Sch. Assocs., No. 130 Sept. Term, 2021, 2022 WL 3211044 (App. Ct. Md. Aug. 9, 2022)
(holding that the circuit court did not abuse its discretion in excluding a plaintiff’s causation
expert in a lead paint case, or err in granting summary judgment in light of the exclusion
of the causation testimony); Parkway Neuroscience and Spine Inst., LLC v. Katz, Abosch,
Windesheim, Gershman & Freedman, P.A., et al., 255 Md. App. 596 (2022) (holding that
the trial court abused its discretion in excluding Ms. Cardell’s opinions concerning lost
profits, and therefore, erred in granting summary judgment in favor of the defendants). We
do not know whether the Appellate Court would have considered the application of the
Daubert-Rochkind factors in the same manner as this Court in Abruquah because this Court
granted certiorari while the case was pending in that court. See Abruquah, 483 Md. at 652.
in its consideration and application of the Daubert-Rochkind factors, I have some unease
about our recitation of our traditional abuse of discretion formulation, which we developed
and apply in other contexts. I observe that this formulation appears to be inconsistent with
the abuse of discretion standard employed by the federal courts in the Daubert context, as
well as the careful and searching examination that this Court is conducting in reviewing
these cases. For the reasons expressed below, I believe that when this Court applies the
abuse of discretion standard in reviewing expert witness admissibility determinations, we
should articulate an abuse of discretion standard that is in line with the federal courts’
formulation and that reflects this Court’s current practice. When we adopted the Daubert
standard, we adopted it in full. That necessarily includes the federal courts’ application of
the abuse of discretion standard. I explain my reasoning more fully below.

       A. Rochkind—This Court’s Decision to Adopt Daubert

       In 2020, Maryland joined the supermajority of states that adopted Daubert3 as the
standard for admission of expert testimony. See Rochkind v. Stevenson, 471 Md. 1 (2020).
The change to the Daubert standard shifted the focus of the analysis from the general
acceptance of a methodology under Frye-Reed4 to the reliability of a methodology.
Rochkind, 471 Md. at 5, 31; Daubert, 509 U.S. at 594–95. In doing so, this Court adopted
the Supreme Court’s Daubert trilogy for analyzing admissibility of expert opinions. See
Daubert, 509 U.S. 579, 588–89 (replacing the “general acceptance” test under Frye with


       3
           Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
       4
        Before the adoption of Daubert in Maryland, Frye-Reed was the prevailing
standard for admissibility of expert evidence. See Reed v. State, 283 Md. 374 (1978).
the more flexible standard in determining whether scientific evidence is reliable and
admissible); General Electric Co. v. Joiner, 522 U.S. 136, 139 (1997) (clarifying that the
proper scope of appellate review of a trial court’s rulings on expert admissibility is “abuse
of discretion”); Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137, 141–42 (1999)
(clarifying that the trial judge’s Rule 702 gatekeeping duties apply to all expert testimony,
whether such testimony is based upon scientific, technical, or other specialized
knowledge).

       In adopting the Daubert standard, we expressed doubt that adopting the standard
would “upend Maryland evidence law[,]” and we observed that, by adopting Daubert,
“Maryland courts will be able to ‘draw from and contribute to the broad base of case law
grappling with scientific testimony.’” Rochkind, 471 Md. at 34–35 (quoting Savage v.
State, 455 Md. 138, 185 (2017) (Adkins, J., concurring)).

       Prior to Rochkind, Maryland appellate courts reviewed a trial court’s decision
concerning the admissibility of expert testimony under two different standards of review—
conducting a de novo review of a trial court’s determinations under Frye-Reed and
reviewing the trial court’s determinations under Rule 5-702 for abuse of discretion. See
Rochkind, 471 Md. at 37. With the change to Daubert, we recognized that appellate review
of a trial court’s decision to admit or exclude expert opinion testimony would be the abuse
of discretion standard. Id. In abrogating the Frye-Reed standard in favor of Daubert, “we
reiterated that a trial court’s ruling to admit or to exclude expert witness testimony ‘will
seldom constitute a ground for reversal.’” State v. Matthews, 479 Md. 278, 306 (quoting
Rochkind, 471 Md. at 10 (quoting Roy v. Dackman, 445 Md. 23, 38–39 (2015))). We did

not, however, grapple with the fact that the formulation of the abuse of discretion standard
that we have traditionally applied to evidentiary rulings and similar decisions is different
from the formulation employed by federal courts applying the Daubert standard. As
discussed below, since the transition to the Daubert standard, we have conducted searching
and exacting reviews of the record.

       B.     This Court’s Post-Rochkind Appellate Review of Circuit Courts’ Expert
              Testimony Rulings

              1. State v. Matthews

       Having set the stage in Rochkind for transition from a de novo standard of review
under Frye-Reed to an abuse of discretion standard under Daubert, we undertook our first
appellate review of a trial court’s Daubert-Rochkind analysis in Matthews. 479 Md. at 278.
Acknowledging that our review was for an abuse of discretion, the Court relied on cases
outside the expert testimony context and incorporated some of our traditional definitions
of that standard, stating that

       an appellate court does ‘not reverse simply because the . . . court would not
       have made the same ruling.’ Devincentz v. State, 460 Md. 518, 550 (2018).
       ‘Rather, the trial court’s decision must be well removed from any center mark
       imagined by the reviewing court and beyond the fringe of what that court
       deems minimally acceptable.’ Id.; see also Williams v. State, 457 Md. 551,
       563 (2018) (“An abuse of discretion occurs where no reasonable person
       would take the view adopted by the circuit court.”); Jenkins v. State, 375 Md.
       284, 295–96 (2003) (“Abuse occurs when a trial judge exercises discretion
       in an arbitrary or capricious manner or when he or she acts beyond the letter
       or reason of the law.”).

Matthews, 479 Md. at 305–06 (cleaned up). In addition, we “reaffirmed” the sentiments
expressed in Rochkind, stating that “it is still the rare case in which a Maryland trial court’s
exercise of discretion to admit or deny expert testimony will be overturned.” Id. at 306.

       In Matthews, we held that the trial court did not abuse its discretion by failing to
exclude a photogrammetry expert’s testimony due to the expert’s inability to provide a
margin of error that accounted for several potential variables relating to height estimates
of a criminal defendant in connection with a reverse projection photogrammetry analysis.
Id. at 325. We determined that “[t]here [was] no dispute that [the expert’s] methodology
was reliable.” Id. at 313. “Nor was there any analytical gap in [the expert’s] proffered
testimony.” Id. Instead, we concluded that “[t]he unknown degree of uncertainty
concerning the accuracy of [the expert’s] height estimate went to the weight the jury should
give to the expert testimony, not to its admissibility.” Id. (footnote omitted). The Majority
opinion in this case describes our careful and searching examination of the record in
Matthews, see Maj. Slip Op. at 29–32, which I need not repeat here.

                2. Abruquah v. State

       Our second case involving the application of Daubert-Rochkind was Abruquah v.
State. In that case, we were asked to determine whether, in a murder case, the trial court
erred in permitting a firearms expert to testify, over objection, that “each of the four bullets
and the bullet fragment . . . ‘at some point’ ‘had been fired’ from or through ‘the Taurus
revolver[]’”—the gun that had been located at the criminal defendant’s home and was the
alleged murder weapon. 483 Md. at 680. The expert “testified neither that his opinion was
offered to any particular level of certainty nor that it was subject to qualifications or
caveats.” Id.

       In connection with our review, we “discuss[ed] general background on the firearms
identification methodology employed by the State’s expert witness, criticisms of the
methodology, studies of the methodology, the testimony presented to the circuit court, and
caselaw from other jurisdictions.” Id. at 653, 656–79. We then considered the trial court’s

application of the Daubert-Rochkind factors. See id. at 680–98. After undertaking this
review, based on the evidence presented at the hearings, we held that the circuit court did
not abuse its discretion in ruling that the firearms expert “could testify about firearms
identification generally,” as well as “his examination of the bullets and bullet fragments
found at the crime scene, his comparison of that evidence to bullets known to have been
fired from the [criminal defendant’s] Taurus revolver, and whether the patterns and
markings on the crime scene bullets are consistent or inconsistent with the patterns and
markings on the known bullets.” Id. at 698. With regard to the circuit court’s decision to
“permit[] the State’s expert witness to opine without qualification that the crime scene
bullets were fired from [the defendant’s] firearm[,]” we determined that the circuit court
abused its discretion because the studies and other information in the record do not “support
the use of [the firearms identification methodology] to reliably opine without qualification
that the bullets of unknown origin were fired from the particular firearm.” Id. at 694–95,
698. We explained that “[b]ecause the court’s error was not harmless beyond a reasonable
doubt,” we reversed the circuit court’s ruling on the defendant’s motion in limine, vacated
his conviction, and remanded the case for a new trial. Id. at 698.

       In undertaking our review, we acknowledged that we were applying an abuse of
discretion standard. Id. at 652. We also observed that in Matthews, we applied the same
“frequently described” formulation of that standard from our case law—namely, that an
abuse of discretion occurs “when ‘no reasonable person would take the view adopted by
the circuit court’ or when a decision is ‘well removed from any center mark imagined by
the reviewing court and beyond the fringe of what the court deems minimally acceptable.’”
Abruquah, 483 Md. at 652 n.5 (quoting Matthews, 479 Md. at 305 (first quoting Williams,
457 Md. at 563, and next quoting Devincentz, 460 Md. at 550)). Recognizing that the
description of this standard as articulated in our case law really did not fit with the trial
court’s careful and thoughtful approach to the application of the Daubert factors, we stated:

       In our view, the application of those descriptions to a trial court’s application
       of a newly adopted standard, such as that adopted by this Court in Rochkind
       as applicable to the admissibility of expert testimony, is somewhat unfair. In
       this case, in the absence of additional caselaw from this Court implementing
       the newly adopted standard, the circuit court acted deliberately and
       thoughtfully in approaching, analyzing, and resolving the question before it.
       This Court’s majority has come to a different conclusion concerning the outer
       bounds of what is acceptable expert evidence in this area.

Abruquah, 483 Md. at 652 n.5.

              3. Oglesby v. Baltimore School Associates

       Last month, this Court issued its opinion in Oglesby v. Baltimore School Associates.
Oglesby was a lead paint case in which the plaintiff sued the owners and managers of an
apartment building for negligence, negligent misrepresentation, and a violation of the
Maryland Consumer Protection Act. Oglesby, 2023 WL 4755689, at *1. We were asked
to consider whether the circuit court erred in granting summary judgment in favor of the
property owners after the circuit court excluded the plaintiff’s expert testimony to establish
that the property in question was a source of lead exposure and a significant factor
contributing to the plaintiff’s alleged injuries, including a loss of IQ points.5 Id. The
plaintiff’s causation expert, Steven Elliot Caplan, M.D., “concluded that [the plaintiff’s]
likely exposure to lead at the property was a significant contributing factor to bringing
about the [alleged] cognitive deficiencies and impairments[,]” and to a loss of IQ points.
Id. The property owners contended that Dr. Caplan lacked a sufficient factual basis for his
causation opinions, and that his methodology used to calculate the plaintiff’s IQ loss was
not reliable or generally accepted. Id. at *2. After a hearing at which the circuit court
considered the property owners’ motions to preclude expert opinions and motions for
summary judgment, the court granted both motions. Id. The circuit court agreed with the
property owners that Dr. Caplan lacked a sufficient factual basis for his opinions, and that,
without Dr. Caplan’s testimony as to causation, the plaintiff was unable to establish a prima
facie case for negligence. Id.

       After the Appellate Court affirmed the circuit court’s judgment, we granted
certiorari to determine whether the circuit court erred in ruling that Dr. Caplan’s opinions
lacked a sufficient factual basis. Id. In undertaking our review, we once again described
our abuse of discretion standard of review “as occurring ‘where no reasonable person
would take the view adopted by the trial court,’ or when ‘the decision under consideration




       5
         In Oglesby, the trial court analyzed the proffered expert testimony under the Frye-
Reed standard because the trial court proceedings took place before this Court issued its
decision in Rochkind, and, therefore, Oglesby did not involve review of the trial court’s
application of the Daubert-Rochkind factors. However, the parties petitioned and briefed
this Court for review of the trial court’s determination under the post-Rochkind abuse of
discretion standard, and this Court conducted a careful and searching review of the record.
is well removed from any center mark imagined by the reviewing court and beyond the
fringe of what that court deems minimally acceptable[.]’” Id. at *12 (cleaned up).

       After conducting a “careful review of the record,” we determined that “Dr. Caplan’s
opinions had a sufficient factual basis,” and that “the circuit court resolved genuine
disputes of material fact.” Id. at *2. We further concluded that Dr. Caplan “had more than
an adequate supply of data from which to form the opinion and the methodology he
employed was reliable.” Id. As such, we held that the circuit court “abused its discretion
in granting the motion to preclude and in determining that Dr. Caplan’s testimony that [the
plaintiff’s] exposure to lead at the property was a significant contributing factor to her
injuries was inadmissible.” Id.

       In undertaking our review, we determined that the court resolved certain facts in the
property owners’ favor, such as whether there was lead paint present at the property at the
time that the plaintiff lived there, whether the plaintiff had come into contact with lead at
the property, through peeling or chipping paint, and whether the plaintiff was potentially
exposed to lead at other locations and the condition of those properties. Id. at *16. We
stated, “[i]n short, a trial court is not permitted to resolve disputes of material fact in
determining whether a sufficient factual basis exists to support an expert’s opinion. Doing
so is a clear abuse of discretion.” Id.

       We proceeded to discuss in detail the evidence produced by the plaintiff, and the
data that Dr. Caplan relied upon to support the causation links necessary to avoid summary
judgment in a lead paint case. Id. at *17–22. After conducting our own review of the
evidence, we stated that:

       Under the particular circumstances of this case, given the large quantity of
       data that [the expert] had available and reviewed and the nature of the
       challenge to the admissibility of his testimony, a remand for further
       proceedings as to Dr. Caplan’s opinion that [the plaintiff’s] exposure to lead
       at the property was a substantial contributing factor to her injuries (other than
       IQ loss) is not warranted.

Id. at *22.

       Concerning Dr. Caplan’s testimony that the plaintiff suffered an IQ loss of 3–4
points, in reliance upon particular lead paint studies—the Lanphear study and the Canfield
study—we discussed our Court’s previous case law in lead paint cases involving experts’
reliance upon the Lanphear study. Id. at *23–26. We concluded that “[t]he bottom line is
that these cases demonstrate that experts have been permitted to rely on the Lanphear study
and extrapolate from its findings and render an opinion that an individual suffered a
specified loss of IQ points as a result of exposure to lead.” Id. at *26.

       We observed that, “although our case law demonstrates that it is permissible for an
expert to rely on the Lanphear study to offer an opinion that exposure to lead resulted in a
specific loss of IQ points, the record in this case shows that neither Dr. Caplan’s report nor
his deposition testimony fully explains the basis for his calculations under either” the
Lanphear study or the Canfield study. Id. at *27. We stated that “without a full explanation
of his methodology, including the reasons for his choices, even though it is possible to
discern the basis of the calculations, we cannot determine whether it was an abuse of
discretion for the circuit court to preclude this aspect of Dr. Caplan’s testimony.” Id.

       We remanded the case for a Daubert-Rochkind hearing “for the circuit court to
determine whether the calculations that Dr. Caplan employed using the Lanphear study are
reliable and to assess Dr. Caplan’s use of the Canfield study and the reliability of his
methodology with respect to it, should [the plaintiff] seek to introduce evidence concerning
her IQ loss at trial.” Id. at *28.

               4. The Case at Hand

       This case is our fourth opportunity since Rochkind to review expert witness
admissibility determinations for abuse of discretion. In undertaking this review, we state
that with the adoption of Daubert, “we promised the deference appropriate to courts
administering a flexible approach to analyzing the admissibility of expert testimony[]” and
that “[t]his case requires us to reflect on that flexibility and deference.” Maj. Slip Op. at 1.

       In connection with that deferential review, we once again recite our traditional
formulation of the abuse of discretion standard as described in our case law—such as
upholding a trial court’s decision unless it is “well removed from any center mark imagined
by the reviewing court and beyond the fringe of what that court deems minimally
acceptable” or unless “no reasonable person would take the view adopted by the circuit
court.” Maj. Slip Op. at 23 (citations omitted).

       In the context of our review in this case, we have considered the trial court’s stated
reason for its decision to exclude Ms. Cardell as an expert witness alongside our own
searching review of the record. In doing so, we carefully lay out the contents of the record,
including the data Ms. Cardell used in her before-and-after method calculations, how Ms.
Cardell reached her conclusions, the issues raised at the hearing, and the trial court’s stated
reasons for excluding Ms. Cardell as an expert. Maj. Slip Op. at 7–21. We ultimately
agree with the trial court that there were “speculative, insufficiently substantiated judgment
calls that were central to Ms. Cardell’s application of the before-and-after method.” Maj.
Slip Op. at 42. Specifically, this Court concludes the trial court did not abuse its discretion
in determining that Ms. Cardell’s “judgment calls” regarding the selection of the 2015 base
year, treatment of member draws, and failure to factor in confounding variables in her
methodology were matters bearing on the reliability of the methodology and that each
weighed against admissibility. Maj. Slip Op. at 42.

       We explain that:

       We rely on trial courts that conduct Daubert-Rochkind hearings to determine
       where the line between data and methodology is in the specific cases before
       them, and whether the proffered expert’s choices relating to data,
       assumptions, and other inputs implicate the reliability of the expert’s
       methodology. In this part of the trial court’s analysis, the court did just that.

Maj. Slip Op. at 42–43. Although this Court “discern[s] no error in most of the trial court’s

application of the Daubert-Rochkind factors,” we conclude that the trial court erred in its

determination that Ms. Cardell’s treatment of the 2021 updates implicated the reliability of

her methodology. Maj. Slip Op. at 46. The trial court based this conclusion on its

observation that Ms. Cardell made this update without new information and “‘for fully

subjective reasons.’” Id. In determining that the trial court abused its discretion in

weighing this information against admissibility, we carefully look at the record and

conclude that Ms. Cardell was basing her decision on new information because she

“noticed something she had not noticed before on first examination[,]” and “[c]atching

something peculiar the second time around neither undermines the adequacy of the data . . .

nor the court’s understanding of the expert’s methodology[.]” Maj. Slip Op. at 46–47. In

other words, this Court rejects the trial court’s determination that Ms. Cardell’s June 2021

adjustments were made for “fully subjective reasons” based on our own review of the

record. Maj. Slip Op. at 46. We go on to explain that Ms. Cardell’s decision to make the

adjustments “did not implicate the reliability of her methodology. At most, it went to the

care with which she applied her methodology, which is a matter to be explored on cross-

examination before the jury (if Ms. Cardell’s testimony is otherwise found to be

sufficiently reliable).” Maj. Slip Op. at 47–48.

       We also hold that “[t]he trial court . . . erred when it stated that Ms. Cardell’s lost-

profit calculations would not be helpful to the jury.” Maj. Slip Op. at 47, n.16. The trial

court’s reasoning for weighing this factor against admission was because “if the jury

concluded that one (or more) of the departing physicians left for reasons other than

KatzAbosch’s negligence, a lost-profits analysis that included that physician’s projected

revenue would overstate the practice’s lost profits and therefore not help the jury to

accurately calculate damages.” Id. On review, we determine that “there was no way to

know before the trial whether the jury would conclude that any of the physicians left for

reasons other than KatzAbosch’s negligence[,]” and, therefore, the trial court’s conclusion

“was speculative and not a proper ground for exclusion.” Id.

       Importantly, this Court points out that there is overlap between matters of reliability

and credibility, as well as data and methodology, which can create challenges for trial

courts applying Daubert. Maj. Slip Op. at 32–43. As such, making admissibility

determinations on these types of matters is necessarily fact-specific, both before the trial

court and on appellate review. These questions often require the reviewing court to do

more than simply accept a trial court’s justification for an admissibility determination as a

matter within the trial court’s discretion. Indeed, when conducting a review of reliability

determinations, a reviewing court may need to look carefully at the record, including the

underlying facts of the case, methodology employed, data relied upon, and contents of

expert testimony and reports to determine if the trial court abused its discretion. This is

exactly what this Court does here, and this is in line with how federal courts apply the

abuse of discretion standard in the Daubert context.

       I take no issue with the manner in which we have undertaken our review in this case,

or in Matthews, Abruquah, and Oglesby, as I joined the majority opinions authored by my

colleagues in each of these cases. In the aftermath of Rochkind, I do not challenge the

nature of our review for an abuse of discretion, but I question whether our reliance upon

our frequently described formulation of this standard, in fact, reflects the nature and type

of review that we are undertaking in these cases and that federal courts have employed in

undertaking the same exercise.

       C. “Abuse of Discretion” as a Context-Dependent Standard

       In the context of appellate review, reviewing courts use one of three well-known

standards of review. An appellate court reviews questions of law de novo. Questions of

fact are reviewed under the clearly erroneous standard. Finally, matters that are committed

to the discretion of the trial court—such as whether to admit expert testimony—are

reviewed for an “abuse of discretion.” Of the three standards of review, the abuse of

discretion standard is typically considered to be the most deferential. However, it has been

described as “famously slippery,” and has been understood to have different meanings and

applications in different contexts. Zervos v. Verizon, 252 F.3d 163, 168 n.4 (2d Cir. 2001);

Gasperini v. Center for Humanities, Inc., 149 F.3d 137, 141 (2d Cir. 1998) (noting that

abuse of discretion “may have different meanings in different contexts” depending on why

the decision is within the trial court’s discretion); Lawson Prods., Inc. v. Avnet, Inc., 782

F.2d 1429, 1438 (7th Cir. 1986) (describing abuse of discretion as a term “that seems to

escape easy definition”). In attempting to explain the difference between the abuse of

discretion standard and clearly erroneous standard, the Seventh Circuit has observed:

       Abuse of discretion is conventionally regarded as a more deferential standard
       than clear error, though whether there is any real or consistent difference has
       been questioned. The alternative view is that both standards denote a range,
       rather than a point, that the ranges overlap and maybe coincide, and that the
       actual degree of scrutiny in a particular case depends on the particulars of
       that case rather than on the label affixed to the standard of appellate review.

Haugh v. Jones & Laughlin Steel Corp., 949 F.2d 914, 916–17 (7th Cir. 1991).

       The type of ruling under review informs how appellate courts apply the abuse of

discretion standard. Dissenting in Calderon v. Thompson, Justice David Souter wrote that

“the variety of subjects left to discretionary decision requires caution in synthesizing abuse

of discretion cases.” 523 U.S. 538, 567 (1998) (Souter, J., dissenting) (citations omitted).

In the same vein, Judge Henry J. Friendly recommended that when applying the abuse of

discretion standard, appellate courts “must carefully scrutinize the nature of the trial court’s

determination and decide whether that court’s superior opportunities of observation or

other reasons of policy require greater deference than would be accorded to its formulations

of law or its application of law to the facts.” Henry J. Friendly, Indiscretion About

Discretion, 31 Emory L.J. 747, 784 (1982). In the decades since Daubert was decided,




federal courts have shaped the abuse of discretion standard in reviewing admissibility

determinations of expert witness testimony.

       D. The Federal Courts’ Description of the Abuse of Discretion Standard in the
          Daubert Context

       As discussed, after Daubert, the Supreme Court made clear that the abuse of

discretion standard is the standard that a reviewing court must apply in undertaking a

review of a trial court’s decision to admit or exclude expert testimony. See Joiner, 522

U.S. at 136; Kumho, 526 U.S. at 142 (explaining that “the law grants a [trial] court the

same broad latitude when it decides how to determine reliability as it enjoys in respect to

its ultimate reliability determination[]”).

       Although the standard was established by these cases, it does not always appear to

be applied as the hands-off deferential approach that often accompanies appellate review

for an abuse of discretion. Instead, it involves a more searching and careful examination

of the records, data, studies, testimony, and trial court’s analysis of the same. Writing for

the American Bar Association, lawyers David F. Herr and Morgan L. Holcomb observed

that

       after [Joiner] the proper standard of review is clearly the “abuse-of-
       discretion” standard. However, even Joiner itself appeared to apply quite
       searching review. As Justice Stevens pointed out in his partially concurring
       opinion, Part III of the majority opinion went beyond announcing the
       appropriate standard of review, and carefully examined the basis on which
       the trial court excluded the expert evidence. Joiner, 522 U.S. at 151–55
       (Steven[s], J.[,] concurring).

       This careful examination has not gone unnoticed; for example, one
       commentator noted, “while they don’t say so, some appellate opinions come
       close to a de novo style of review of the proffered expert testimony.” Michael
       J. Saks, et al., Annotated Reference Manual on Scientific Evidence Second

       23 (West 2004) (citing Mathis v. Exxon Corp., 302 F.3d 448 (5th Cir. 2002)
       (noting that the trial court had failed to “offer any reasons in support of
       admitting the expert’s testimony,” but nonetheless holding that the appellate
       court did not have to remand and could carry out the review itself on appeal:
       “Because admissibility is a legal question—one ill-suited to remand and
       further explication by the [trial] court—we will decide the question in this
       case without remand.”)).

David F. Herr & Morgan L. Holcomb, Opinion and Expert Testimony in Federal and State

Courts, Am. L. Inst. – Am. Bar Ass’n 629, 642 (Jan. 2007). Herr and Holcomb go on to

catalog additional examples of federal circuit courts conducting “searching” reviews of

admissibility determinations in the expert witness context. Id. (citing United States v.

Mitchell, 365 F.3d 215 (3d Cir. 2004) (conducting a searching analysis of the admission of

fingerprint evidence, and concluding that admission of the prosecution’s expert was not

error); Lauzon v. Senco Prods., Inc., 270 F.3d 681 (8th Cir. 2001) (conducting a fairly

searching review, and ultimately holding that “[t]hrough examination of the record in light

of the requirements of Daubert and its progeny, ineluctably we are led to conclude the

[trial] court’s exclusion of the testimony was an abuse of discretion and fell outside the

spirit of admissibility as set forth in Federal Rule of Evidence 702.”); Pride v. BIC Corp.,

218 F.3d 566, 578 (6th Cir. 2000) (noting review of Daubert ruling was for abuse of

discretion, but conducting “de novo review of the record” to conclude that the “record

supports the [trial] court’s finding on this issue”)). The authors also note that appellate

courts are often reluctant to remand these issues, tending instead to make admissibility

determinations themselves. Id. at 642–43.

       Herr and Holcomb’s description of how federal courts apply the abuse of discretion

standard in the expert witness context—like this Court’s application of the standard in

Matthews, Abruquah, Oglesby, and here—feels out of sync with the even more deferential

approach that we take when reviewing other types of discretionary decisions by trial courts

and our “no reasonable person” and “well removed from the center mark” articulation of

the standard.

       In the Daubert context, various federal circuits have supplied their own descriptions

of the nature of the abuse of discretion standard, a few of which I highlight here. In its

description of the standard, the First Circuit explains that a trial court abuses its discretion

“when a material factor deserving significant weight is ignored, when an improper factor

is relied upon, or when all proper and no improper factors are assessed, but the court makes

a serious mistake in weighing them.” Lawes v. CSA Architects and Engineers, LLP, 963

F.3d 72, 90 (1st Cir. 2020) (quotations omitted). The court notes that the abuse of

discretion “standard is not monolithic: within it, embedded findings of facts are reviewed

for clear error, questions of law are reviewed de novo, and judgment calls are subjected to

classic abuse-of-discretion review.” Id. (internal quotations omitted). Accordingly, the

reviewing court will reverse a trial court’s decision if it determines the trial court

“committed a material error of law or a meaningful error in judgment.” Id. (quotations

omitted).

       The Tenth Circuit has described the standard as follows:

       We review de novo the question of whether the [trial] court applied the
       proper legal test in admitting an expert’s testimony. Though the [trial] court
       has discretion in how it conducts the gatekeeper function, we have
       recognized that it has no discretion to avoid performing the gatekeeper
       function. Therefore, we review de novo the question of whether the [trial]
       court applied the proper standard and actually performed its gatekeeper role
       in the first instance. We then review the trial court’s actual application of the

       standard in deciding whether to admit or exclude an expert’s testimony for
       abuse of discretion. The trial court’s broad discretion applies both in
       deciding how to assess an expert’s reliability, including what procedures to
       utilize in making that assessment, as well as in making the ultimate
       determination of reliability. Accordingly, we will not disturb the [trial]
       court’s ruling unless it is arbitrary, capricious, whimsical or manifestly
       unreasonable or when we are convinced that the [trial] court made a clear
       error of judgment or exceeded the bounds of permissible choice in the
       circumstances.

Dodge v. Cotter Corp., 328 F.3d 1212, 1223 (10th Cir. 2003) (quotations omitted) (cleaned

up). That court has also emphasized that a “natural requirement of the gatekeeper function

is the creation of a sufficiently developed record[,]” and has observed that “Kumho and

Daubert make it clear that the [trial] court must, on the record, make some kind of reliability

determination.” Id. at 1223 (cleaned up) (citations omitted). Thus, the Tenth Circuit has

held “when faced with a party’s objection, a [trial] court must adequately demonstrate by

specific findings on the record that it has performed its duty as gatekeeper.” Id. (cleaned

up). The court has stated that:

       Without specific findings or discussion on the record, it is impossible on
       appeal to determine whether the [trial] court carefully and meticulously
       reviewed the proffered scientific evidence or simply made an off-the-cuff
       decision to admit the expert testimony. In the absence of such findings, we
       must conclude that the court abused its discretion in admitting such
       testimony.

Id. (quotations omitted).

       The Seventh Circuit describes a “two-step standard of review in cases challenging

a [trial] court’s admission or exclusion of the testimony of an expert.” C.W. ex rel. Wood

v. Textron, Inc., 807 F.3d 827, 835 (7th Cir. 2015). First, the court “review[s] de novo a

[trial] court’s application of the Daubert framework.” Id. (citing United States v. Brumley,


217 F.3d 905, 911 (7th Cir. 2000)). If the reviewing court determines that “the [trial] court

properly adhered to the Daubert framework,” then it reviews the “decision to exclude (or

not to exclude) expert testimony for abuse of discretion.” Id.

       The Eleventh Circuit explained the rationale behind the abuse of discretion standard

in Daubert, as well as that court’s application of the standard, in United States v. Brown,

415 F.3d 1257 (11th Cir. 2005). That court explained that the standard is proper because

the trial court is “[i]mmersed in the case as it unfolds” and is “more familiar with the

procedural and factual details and is in a better position to decide Daubert issues.” Id. at

1266. That court explained:

       The rules relating to Daubert issues are not precisely calibrated and must
       be applied in case-specific evidentiary circumstances that often defy
       generalization. And we don’t want to denigrate the importance of the trial
       and encourage appeals of rulings relating to the testimony of expert
       witnesses. All of this explains why the task of evaluating the reliability of
       expert testimony is uniquely entrusted to the [trial] court under Daubert,
       and why we give the [trial] court considerable leeway in the execution of
       its duty.

Id. (quotations omitted). However, the court goes on to clarify,

       [t]o be sure, review under an abuse of discretion standard does entail review,
       and granting considerable leeway is not the same thing as abdicating
       appellate responsibility. We will reverse when the [trial] court’s Daubert
       ruling does amount to an abuse of discretion that affected the outcome of a
       trial. An abuse of discretion can occur where the [trial] court applies the
       wrong law, follows the wrong procedure, bases its decision on clearly
       erroneous facts, or commits a clear error in judgment. In the Daubert
       context, if one of those types of serious error occur, we may conclude that
       the [trial] court has not properly fulfilled its role as gatekeeper. We have also
       found that a [trial] court abuses its discretion where it fails to act as a
       gatekeeper by essentially abdicating its gatekeeping role.

Id. (quotations omitted).


       E. Some Concluding Thoughts on Our Standard of Review in the
          Daubert Context

       When we made the decision in Rochkind to adopt Daubert, we did so with the

knowledge that there is a “broad base of case law” that went along with the new standard.

Rochkind, 471 Md. at 34–35 (quoting Savage, 455 Md. at 185 (Adkins, J., concurring)).

However, in describing the abuse of discretion standard of review, we transplanted our

formulation of the standard we apply in other contexts. See, e.g., Devincentz, 460 Md. at

539, 550 (applying the abuse of discretion standard to the review of a trial court’s decision

to admit or exclude a character witness’s opinion); Williams, 457 Md. at 562–63 (applying

the abuse of discretion standard to the review of a trial court’s decision to admit relevant

evidence); Jenkins, 375 Md. at 295–96 (applying the abuse of discretion standard to the

review of a trial court’s decision to deny a motion for a new trial). In my view, our

traditional formulation of the abuse of discretion standard—as invoking the articulation of

“no reasonable person” or “well removed from any center mark imagined by a reviewing

court and beyond the fringe of what the court deems minimally acceptable”—is not the

best or most accurate way of describing our abuse of discretion review in the Daubert-

Rochkind context. See Williams, 457 Md. at 563; Devincentz, 460 Md. at 550.

       I believe that this Court should re-formulate a definition of our abuse of discretion

standard in the context of appellate review of a trial court’s application of the Daubert-

Rochkind factors when analyzing whether to admit or exclude expert testimony. In doing

so, we should look to the federal courts’ articulation of the standard of review in the same

manner that we will look to the federal courts’ jurisprudence in its application. I agree with


the observation made by Judge Friendly and various federal circuits that the abuse of

discretion standard is contextual. See, e.g., Gasperini, 149 F.3d at 141 (noting that abuse

of discretion may have different meanings in different contexts depending on why the

decision is within the trial court’s discretion); Lawes, 963 F.3d at 90 (explaining that the

abuse of discretion standard is not monolithic).

       As is clearly reflected in our appellate review of the trial court’s expert admissibility

determinations in Matthews, Abruquah, Oglesby, and the instant case, our abuse of

discretion standard of review involves a searching and careful examination of the record,

data, studies, testimony, and trial court’s analysis of the same. Our traditional formulation

of the abuse of discretion standard does not comport with the type of review that we are

undertaking here. I believe that the parties, their counsel and experts, trial courts and our

Appellate Court would all benefit from this Court re-formulating our description of the

abuse of discretion standard of review that is undertaken in this context and, in so doing,

utilize the federal courts’ discussion of that standard.

       Our standard should reflect that the hallmark of the abuse of discretion standard,

regardless of context, is the high degree of deference to a court exercising discretionary

authority. Joiner, 522 U.S. at 137, 143. “Accordingly, we will not disturb the [trial] court’s

ruling unless it is arbitrary, capricious, whimsical, or manifestly unreasonable or when we

are convinced that the [trial] court made a clear error of judgment or exceeded the bounds

of permissible choice in the circumstances.” Dodge, 328 F.3d at 1223 (quotations omitted).

But this broad discretion “is not the same thing as abdicating appellate responsibility.”

Brown, 415 F.3d at 1266. In the context of admissibility of expert testimony, this means

that while trial courts are given leeway on matters within the range of their discretion, on

review, the appellate court will find an abuse of discretion where the trial court’s rulings

are “manifestly erroneous.” Joiner, 522 U.S. at 142. That is, when applying the abuse of

discretion standard in the expert witness context, we reverse only where there is “a

meaningful error in judgment.” Lawes, 963 F.3d at 90 (emphasis added). In determining

whether the trial court has made such a meaningful error, reviewing courts may need to

conduct a searching review of the record.

       With these points in mind, our standard should reflect that the “[abuse of discretion]

standard is not monolithic: within it, embedded findings of fact are reviewed for clear error,

questions of law are reviewed de novo, and judgment calls are subjected to a classic abuse-

of-discretion review.” Lawes, 963 F.3d at 90 (quotations omitted). A reviewing court may

find a trial court abused its discretion where the trial court (1) “applies the wrong law;” (2)

“follows the wrong procedure;” (3) “bases its decision on clearly erroneous facts;” (4)

“commits a clear error in judgment;” or (5) abdicates its gatekeeping role altogether.

Brown, 415 F.3d at 1266; see also Lawes, 963 F.3d at 90. Although trial courts have

discretion in determining the best procedure for the proper resolution of any expert

admissibility issues, see Kumho, 526 U.S. at 152, trial judges should be reminded that a

reviewing court will find an abuse of discretion where they fail to act as a gatekeeper by

essentially abdicating their gatekeeping role, or where the trial court assumes the role of

the jury. Part of this gatekeeping role is developing a clear record of the court’s basis for

its determination to admit or exclude expert testimony. See Dodge, 328 F.3d at 1223.



       Our formulation of the standard should also take into account that appellate review

of expert admissibility determinations encompasses many types of challenges, including

whether: expert testimony is necessary or appropriate; the expert is qualified; the expert’s

area of expertise is unreliable; the expert’s conclusions are invalid or unreliable; the

testimony exceeds the expert’s area of expertise; and the expert testimony is relevant. In

my view, “the actual degree of scrutiny in a particular case [may] depend[] on the

particulars of that case rather than on the label affixed to the standard of appellate review.”

Haugh, 949 F.2d at 916–17; see also Brown, 415 F.3d at 1265. A re-formulated standard

should reflect this flexibility.

       As for our statements in Rochkind and Matthews that it will be the “rare case in

which a Maryland trial court’s exercise of discretion to admit or deny expert testimony will

be overturned,” I suppose only time will tell. Matthews, 479 Md. at 286, 306 (emphasis

added).




Circuit Court for Howard County
Case No. C-13-CV-18-000181
Argued: May 4, 2023                                 IN THE SUPREME COURT

                                                         OF MARYLAND*

                                                               No. 30

                                                     September Term, 2022
                                             __________________________________

                                                KATZ, ABOSCH, WINDESHEIM,
                                                GERSHMAN & FREEDMAN, P.A.,
                                                          ET AL.

                                                                   v.

                                               PARKWAY NEUROSCIENCE AND
                                                    SPINE INSTITUTE, LLC
                                             __________________________________

                                                    Fader, C.J.,
                                                    Watts,
                                                    Hotten,
                                                    Booth,
                                                    Biran,
                                                    Gould,
                                                    Eaves,

                                                             JJ.
                                             __________________________________

                                              Concurring and Dissenting Opinion by
                                                           Gould, J.
                                             __________________________________

                                                    Filed: August 30, 2023




*During the November 8, 2022 general election, the voters of Maryland ratified a
constitutional amendment changing the name of the Court of Appeals to the Supreme Court
of Maryland. The name change took effect on December 14, 2022.
       I respectfully concur in part and dissent in part from the Majority’s well-written and

thorough opinion. As I understand it, the Majority concludes that the trial court’s analysis

of the various factors under Daubert-Rochkind could very well support the trial court’s

discretionary decision to exclude Ms. Cardell’s testimony, but the Majority is concerned

that the trial court may have unduly considered Ms. Cardell’s updated calculations in

assessing those factors. I agree with the Majority that Ms. Cardell’s updated calculations

and her reasons for them do not go to the reliability of her methodology, but are instead

grist for the cross-examination mill. I write separately because I would not remand for the

trial court to reconsider its analysis. In my view, Ms. Cardell’s methodology is so

fundamentally flawed as to constitute “the rare case” in which a trial court’s admission of

expert testimony would have been an abuse of discretion. See State v. Matthews, 479 Md.

278, 306 (2022).

       Parkway Neuroscience and Spine Institute, LLC’s (“PNSI”) theory of the case was

that, beginning in 2013, Katz, Abosch, Windesheim, Gershman & Freedman, P.A.’s and

Mark Rapson’s (collectively, “KatzAbosch”) negligent advice caused seven of the nine

member-physicians, who were still with PNSI at the end of 2014, to leave the practice by

mid-2016, and that their departures caused damages to the company in the form of lost

profits. Ms. Cardell’s job was to quantify the lost profits caused by those departures. She

utilized the “before-and-after” method. As the Majority aptly notes, despite the seemingly

straightforward nature of this method, as an expert, Ms. Cardell was required to make

decisions requiring judgment and to sufficiently explain those decisions. See Maj. slip op.

at 33, 42-43 & n.15.
       Ms. Cardell assumed that, if not for the departure of member-physicians, PNSI’s

profits in 2017 through 2019 would have been, at a minimum, the same as in 2015. So,

according to Ms. Cardell’s theory, to calculate the lost profits for, say, 2017, you take the

difference between the profit achieved in 2015 ($321,751) and the loss incurred in 2017

($1,515,436) and, voila, you have $1,837,186 of lost profits for 2017.1 Ms. Cardell used

the same analysis for 2018 and 2019 to arrive at total lost profits of $4,956,080 for all

three years. Ms. Cardell thus attributed the entire change in profitability at PNSI to the

departure of the member-physicians. It is that assumption that requires the exclusion of

her testimony.

       A critical component of the “before-and-after” method for calculating lost profits is

accounting or controlling for confounding variables—changes in conditions that might

have affected the plaintiff’s profitability during the “loss period.” Accounting for

confounding variables may be considered, in the lost profits context, akin to “whether the

expert has adequately accounted for obvious alternative explanations” under the Daubert-

Rochkind rubric. See Rochkind v. Stevenson, 471 Md. 1, 36 (2020). Confounding variables

could include, for instance, the prices of goods and services, consumer demand and

preferences, technological developments, personnel turnover, and broader industry and

economic conditions. The failure to account for relevant confounding variables has

routinely resulted in the exclusion of such testimony.2, 3

       1
           Due to rounding, the numbers do not add up exactly.

       Here, the circuit court determined that Ms. Cardell’s failure to evaluate revenue

generation by individual member-physicians and failure to consider insurance

reimbursement rates favored exclusion. In my view, Ms. Cardell’s wholesale failure to

evaluate and consider those and other relevant factors required the exclusion of her

testimony.




       2
         See, e.g., E. Auto Distribs., Inc. v. Peugeot Motors of Am., Inc., 795 F.2d 329 (4th
Cir. 1986) (holding expert testimony properly excluded because it failed to account for
changes in efficiency of sales force and demand for products); Craftsmen Limousine, Inc.
v. Ford Motor Co., 363 F.3d 761 (8th Cir. 2004) (holding expert testimony should have
been excluded because it failed to analyze the effects of general economic conditions and
increased competition); Schiller & Schmidt, Inc. v. Nordisco Corp., 969 F.2d 410 (7th Cir.
1992) (holding expert testimony inadequate because it failed to distinguish between the
damages caused by the lawful entry of a powerful competitor and those caused by the
competitor’s misconduct); Isaksen v. Vermont Castings, Inc., 825 F.2d 1158, 1165 (7th
Cir. 1987) (holding expert testimony inadequate when the plaintiff “made no effort to
establish how much of the loss was due to [the defendant’s unlawful] activity as distinct
from unrelated business factors” and “[a]ll [the plaintiff] did to prove damages was to
compare his average profits for several years before and several years during the period of
unlawful activity”); Farley Transp. Co. v. Santa Fe Trail Transp. Co., 786 F.2d 1342 (9th
Cir. 1985) (holding expert testimony inadequate because it failed to distinguish between
losses caused by lawful and unlawful competition); Coleman Motor Co. v. Chrysler Corp.,
525 F.2d 1338 (3d Cir. 1975) (holding expert testimony inadequate for failing to account
for lawful competition, market changes, and changes in the character of the neighborhood
where plaintiff’s auto dealership was located).
       3
        Asking experts to provide analysis that implicates relationships between profits
and confounding variables does not infringe upon the jury’s responsibility to determine
whether the defendant caused the alleged harm. Here, Ms. Cardell was not called upon to
determine as part of her analysis whether KatzAbosch’s allegedly poor financial advice
caused the member-physicians to depart. Rather, she was asked to quantify the lost profits
caused by those departures.
      It is inconceivable that 100 percent of the changes in profitability between the

benchmark year of 2015 and the loss years of 2017, 2018, and 2019 were attributable to

the departures of the member-physicians. Stated differently, it is inconceivable that had

none of the member-physicians departed, PNSI’s profits would have remained exactly the

same. PNSI’s own allegations in its complaint establish this very point: the company’s

“overhead, expenses and revenue generated by the Members’ practices during the relevant

time period differed markedly by specialty.” That alone indicates that the impact on the

company’s profitability from the departure of a particular member-physician would depend

on the specific costs and revenue structures of that member-physician’s practice.

      As the trial court noted, Ms. Cardell virtually ignored changes in reimbursement

rates. To the extent she considered them at all, she did not analyze how such changes

would have affected the total revenues had each of the doctors stayed. Instead, she merely

speculated that the observed decrease in actual reimbursement rates—which defendant’s

counsel presented and she acknowledged on cross-examination—would have been offset

by general increases in demand for medical services.4




      4
        That may or may not be so, but her speculation only reinforces why an expert must
drill down on the figures. Which specialties were expected to experience increased
demand? The high profit margin specialties? The low profit margin specialties? What
marginal expenses would PNSI have incurred in association with that increased demand?
Did PNSI have the capacity to accommodate such increased demand, and if so, by how
much? How many additional services would PNSI have had to provide to offset the
decreased reimbursement rates? That is, where was the break-even point? Ms. Cardell’s
mere speculation that decreased reimbursement rates would have been offset by increased
demand for services is no substitute for an actual analysis that accounts for confounding
variables.
       You do not have to be an expert to know that expenses of any business are subject

to fluctuations.   Indeed, accounting for such fluctuations was factored into PNSI’s

compensation model. PNSI’s complaint alleges that a new compensation model developed

and recommended by KatzAbosch was approved and adopted by its members. Here, PNSI

alleges that under this new model:

       in determining Member cash flow distributions, each Member regardless of
       specialty was allocated an equal portion of 2/3 of [PNSI’s] net increase or
       decrease in expenses (the “fixed portion”) and a portion of the remaining 1/3
       of [PNSI’s] net increase or decrease in expenses based on the Member’s net
       collections for the period (the “variable portion”).

       Thus, PNSI’s own complaint shows that its revenues and expenses were not static.

Indeed, PNSI assumed as much in restructuring its compensation model. There is no

evidence Ms. Cardell analyzed the extent to which changes in expenses or increased

demand for services could have affected profits had the seven member-physicians

remained. It would be as if a lost profits analysis of a gas station failed to account for

changes in the wholesale and retail prices of gasoline.

       Ms. Cardell’s failure to make any attempt to account for such confounding variables

requires the exclusion of her testimony. Accordingly, I dissent to that part of the Majority’s

opinion that requires a remand for further proceedings. I would reverse the judgment of

the Appellate Court.




 Circuit Court for Howard County
 Case No. C-13-CV-18-000181

 Argued: May 4, 2023
                                                   IN THE SUPREME COURT

                                                        OF MARYLAND*

                                                               No. 30

                                                    September Term, 2022
                                          ______________________________________

                                              KATZ, ABOSCH, WINDESHEIM,
                                            GERSHMAN & FREEDMAN, P.A., AND
                                                    MARK E. RAPSON

                                                                 v.

                                                PARKWAY NEUROSCIENCE
                                                AND SPINE INSTITUTE, LLC
                                          ______________________________________

                                                        Fader, C.J.
                                                        Watts
                                                        Hotten
                                                        Booth
                                                        Biran
                                                        Gould
                                                        Eaves,

                                                          JJ.
                                          ______________________________________

                                                Dissenting Opinion by Watts, J.
                                          ______________________________________

                                                        Filed: August 30, 2023




*At the November 8, 2022 general election, the voters of Maryland ratified a constitutional
amendment changing the name of the Court of Appeals of Maryland to the Supreme Court
of Maryland. The name change took effect on December 14, 2022.
       Respectfully, I dissent. By and large, I agree with much of the Majority’s well-

written opinion, but I disagree with its disposition of the case. I agree with the Majority

that, in some instances, the choices made by an expert and the information that an expert

inputs to a methodology can be part of the methodology. See Maj. Slip Op. at 3. I also

agree that, in determining whether expert testimony has a sufficient factual basis, this Court

should not adopt “an unduly rigid dividing line between ‘data’ and ‘methodology[.]’” Maj.

Slip Op. at 3-4. Additionally, I agree that, in assessing the testimony of Meghan Cardell,

an expert for Parkway Neuroscience and Spine Institute, LLC (“PNSI”), Respondent, the

Circuit Court for Howard County abused its discretion in its “consideration of the [June

2021] normalizing adjustments as reflecting on the reliability of Ms. Cardell’s

methodology, as opposed to the credibility (or reliability) of Ms. Cardell herself.” Maj.

Slip Op. at 4. I part ways with the Majority as to the outcome of the case, though.

       Rather than remand the case to the circuit court for it to decide whether to admit Ms.

Cardell’s testimony without considering the June 2021 normalizing adjustments as

affecting the reliability of her methodology, I would affirm the decision of the Appellate

Court of Maryland. Like the Appellate Court, I would conclude that Ms. Cardell’s expert

opinion was admissible and that, in assessing the reliability of her methodology, the circuit

court abused its discretion by determining that choices of data and other inputs rendered

her methodology unreliable. See Parkway Neuroscience & Spine Inst., LLC v. Katz,

Abosch, Windesheim, Gershman & Freedman, P.A., 255 Md. App. 596, 630, 283 A.3d

753, 773 (2022). In my view, in assessing the reliability of Ms. Cardell’s methodology,

the circuit court took issue with matters such as the soundness of the data that Ms. Cardell
used and the conclusions she reached, the nature of her experience, and whether she was

biased or not. Unlike the Majority, I would not order a limited remand regarding the

admissibility of Ms. Cardell’s expert testimony. See Maj. Slip Op. at 50. Instead, I would

affirm the judgment of the Appellate Court, which reversed and remanded the case for

further proceedings, with Ms. Cardell’s testimony admissible. See Parkway, 255 Md. App.

at 639, 283 A.3d at 778.

       Determining the amount that a business would have earned or lost in any given

period will generally require both an in-depth analysis of the business’s revenues and

expenses over a period of time and looking at meaningful events affecting the business,

such as employee changes and other data, culminating in projections made through use of

a methodology deemed reliable for calculating profit or loss. In this case, it is undisputed

that the method for calculation of lost profit that Ms. Cardell used—the before-and-after

method—is generally accepted “as a reliable methodology.” Maj. Slip Op. at 49 n.19.

       Here, however, in an analysis of the factors set forth in Daubert v. Merrell Dow

Pharms., Inc., 509 U.S. 579, 593-94 (1993) and Rochkind v. Stevenson, 471 Md. 1, 35-36,

236 A.3d 630, 650 (2020), the circuit court took issue with the quality of the data that Ms.

Cardell relied on, not the reliability of the methodology that she used, and, in ruling, faulted

her level of experience, and questioned her impartiality. To briefly recap the circuit court’s

ruling, at the conclusion of the hearing on KatzAbosch’s1 motion to exclude Ms. Cardell’s

testimony, the circuit court granted the motion, finding that PNSI did not meet the burden


       1
       Petitioners, Katz, Abosch, Windesheim, Gershman & Freedman, P.A., and Mark E.
Rapson, together, are referred to as “KatzAbosch.”

of establishing that Ms. Cardell’s testimony was admissible. The circuit court summarized

its major concerns with Ms. Cardell’s testimony before delving into the Daubert-Rochkind

factors. A primary concern of the circuit court was an absence of specialized knowledge,

experience, or training in Ms. Cardell’s background. The circuit court stated that PNSI is

a medical practice that required a distinct knowledge set, which Ms. Cardell lacked.

Because of this, the circuit court questioned Ms. Cardell’s judgment in designating 2015

as the base year for calculating lost profits, although the circuit court found that her choice

of the before-and-after method was correct. Finding this base period to be “speculative[,]”

the circuit court concluded that subjectivity affected Ms. Cardell’s rationale.

       The circuit court took issue with the overall quality of data used in Ms. Cardell’s

methodology, particularly the circumstance that, in its view, “she relie[d] entirely on

representations from within the business and the Plaintiffs[,]” i.e., PNSI. Finding her

testimony to contain “inherent bias” due to her “support” for PNSI’s claims, the circuit

court critiqued Ms. Cardell’s judgment calls, including her decision to make a normalizing

adjustment to the calculations in June 2021 when there had been no change in the

underlying facts.

       The circuit court noted what it called Ms. Cardell’s failure to examine the individual

economic impact of each doctor’s departure, stating that “[n]ot every doctor[] is going to

make or contribute the same amount of money each . . . year.” This contributed to the




circuit court’s conclusion that Ms. Cardell’s opinion would not be helpful to the jury.2

Continuing to express concern about Ms. Cardell’s “judgment calls,” the circuit court

faulted her failure to consider data that would have provided alternative explanations for

lost profits, such as changes in insurance reimbursement rates. Another problem, according

to the circuit court, was Ms. Cardell’s decision to consider member draws as reductions of

profit, i.e., expenses, in her calculations without identifying an objective industry standard

for such a classification.

       It was only after the circuit court set forth the above reasoning for excluding Ms.

Cardell’s testimony that it turned to the Daubert-Rochkind factors, indicating that perhaps

some of the factors may not apply. The circuit court quoted our statement in Rochkind,

471 Md. at 37, 236 A.3d at 651, that “[a] trial court may apply some, all, or none of the

factors depending on the particular expert testimony at issue” (citation omitted), then

added: “[T]hat means that not each and every factor has to be resolved, and not each and

every factor of the ten have to be resolved against the witness.” (Internal quotation marks

omitted).

       In addressing the factors, the circuit court repeated much of the reasoning that it had

already discussed. Of the 10 Daubert-Rochkind factors, the circuit court appeared to

determine that 7 factors weighed against the admissibility of Ms. Cardell’s testimony: (1)



       2
        The circuit court reiterated this conclusion in its final holding that PNSI had not
met its burden under Daubert and Rochkind to show Ms. Cardell’s testimony’s reliability
and usefulness to the jury. However, the circuit court promptly limited that holding, stating
that “usefulness is a very, very slight factor and I don’t want to dwell on it because I don’t
want to create reversible error[.]”

testability, (2) the rate of error, (3) standards and controls, (4) the purpose of the opinion,

(5) unjustifiable extrapolation from an accepted premise, (6) accounting for obvious

alternative explanations, and (7) the ability of the field to reach reliable results for the type

of opinion. The circuit court appeared to determine that 2 factors were neutral or not

applicable: (1) peer review, and (2) being as careful as the expert would be in non-litigation

work. Although the circuit court did not expressly address the factor as to general

acceptance, the circuit court agreed that the before-and-after method was the appropriate

procedure to use. Of the 7 factors that appeared to weigh against admissibility, 2 factors—

the rate of error and the ability of the field to reach reliable results for the type of opinion—

were construed against the admissibility of Ms. Cardell’s testimony based on the June 2021

update, i.e., the normalizing adjustment, as to which the Majority concludes that the circuit

court erred. See Maj. Slip Op. at 50.

       In discussing the other 5 factors that appeared to weigh against admissibility, the

circuit court criticized Ms. Cardell’s choice of 2015 as the base year, her reliance on

information from PNSI, and her failure to consider other information, such as changing

insurance reimbursement rates, as an alternative explanation for lost profits—all of these

pertained to the soundness of the data that Ms. Cardell used in formulating her opinion and

the circuit court’s disagreement with her conclusions, or the court’s previous findings that

Ms. Cardell lacked the experience necessary to accurately determine the base year for

PNSI’s type of medical practice, under the before-and-after method, and that her testimony

was inherently biased. To be sure, the use of data cannot always be surgically excised from

the reliability of a methodology. That said, while “sometimes blurred[,]” Maj. Slip Op. at


41, there is a distinction between data and methodology. Under Maryland Rule 5-702(3),

in determining whether expert testimony is admissible, a trial court must assess “whether

a sufficient factual basis exists to support the expert testimony.” “‘[S]ufficient factual

basis’ includes two sub-elements: (1) an adequate supply of data; and (2) a reliable

methodology.” Rochkind, 471 Md. at 22, 236 A.3d at 642 (citation omitted).

       Different definitions apply to these two different elements. Among other things, for

there to be an adequate supply of data, there cannot be “too great an analytical gap between

the data and the opinion proffered.” Id. at 36, 236 A.3d at 651 (cleaned up). Meanwhile,

“to constitute reliable methodology, an expert opinion must provide a sound reasoning

process for inducing its conclusion from the factual data and must have an adequate theory

or rational explanation of how the factual data led to the expert’s conclusion.” Id. at 32,

236 A.3d at 648-49 (cleaned up).

       Our case law shows that there is a distinction between data and methodology. Of

course, a given circumstance cannot always be neatly categorized as one or the other and

might instead pertain to both. Here, however, it is clear that the circuit court’s analysis did

not implicate the reliability of the widely accepted before-and-after method of calculating

lost profits and exceeded a review of the soundness of Ms. Cardell’s reasoning process.

Although the circuit court appeared to fault the soundness of the process that Ms. Cardell

employed, the circuit court’s underlying criticism was with the data Ms. Cardell relied on,

her background and experience, and whether she was too closely associated with PNSI.

       Although KatzAbosch contended that Ms. Cardell’s testimony was unreliable

because she mistreated member draws and insurance reimbursement rates, which in turn


influenced her to incorrectly identify 2015 as the benchmark year, Ms. Cardell’s testimony

at the hearing established an evidentiary basis for categorizing member draws as expenses

or profits based on the nature of the business, and her research and consultation with

another expert in damages. She then followed that approach by examining the different

types of member draws and categorizing them based on the factual circumstances. Nothing

in the record supports the conclusion that this was the wrong approach or that this approach

was unreliable. Rather, the dispute arises from KatzAbosch challenging the result or

opinion that Ms. Cardell reached.3 I agree with the Appellate Court that determining

whether to label member draws as expenses or profits is a fact-laden decision suitable for

cross-examination. See Parkway, 255 Md. App. at 633, 283 A.3d at 774.

       The record demonstrates that Ms. Cardell explained the “judgment calls” that she

made in selecting 2015 as the base year. Unlike in CDW LLC v. NETech Corp., 906 F.

Supp. 2d 815, 823-24 (S.D. Ind. 2012)—where the expert for the plaintiff technology

company offered no explanation for his use of the plaintiff’s other branches in the Great

Lakes region as a benchmark for determining how much the plaintiff’s Indianapolis branch

would have grown but for the defendant’s alleged wrongful conduct—here, Ms. Cardell




       3
        In my view, categorizing all member draws as profits would have resulted in a
higher estimate of lost profits—so, Ms. Cardell’s decision on this point is consistent with
her other choices designed to arrive at reasonable but conservative estimates of PNSI’s lost
profits. And, contrary to KatzAbosch’s contention, it is unlikely that treating all member
draws as profits would have affected Ms. Cardell’s choice of 2015 as the benchmark,
because such a classification would not have changed the underlying facts that she relied
on in that decision: the unlikelihood of expenses repeating in the future, the growth of the
business and its revenue trends, and the growth of the market.

explicitly described her process for concluding that 2015 was the appropriate benchmark.4

She testified about PNSI’s revenue trends, the growth-associated costs, her market

analysis, and the profit in 2016, despite doctors leaving, as the basis for her conclusion that

2015 reflected a reasonable prediction for the future of PNSI absent the doctors’ departures.

       Ms. Cardell’s description of her method is not the kind of unsupported conjecture

excluded in CDW, where the expert: (1) “offer[ed] no reason” for choosing the plaintiff’s

“other branches as appropriately comparable to [its] Indianapolis branch[,]” for which he

was calculating lost profits; (2) “avoid[ed] explaining how another branch’s experience in

any year” was an appropriate point of comparison; (3) failed to establish that the market

forces for the Indianapolis branch and other branches were comparable; (4) used the

average of all of the other branches’ growths in revenue, even though there were “wide

variations in branch performance from year to year” and, thus, “the cumulative growth of


       4
         The circuit court supplemented its ruling from the bench with a written order
discussing the persuasive value of a decision from the United States District Court for the
Southern District of Indiana, CDW, 906 F. Supp. 2d 815. The circuit court reasoned that
the base year established by Ms. Cardell was akin to the benchmark established by the
excluded expert in CDW, which the District Court in that case had rejected as too
speculative because it was based on average earnings at other branches of the affected
business that were not sufficiently comparable to the branch that allegedly suffered lost
profits.
        In CDW, the District Court excluded the expert’s testimony on lost profits
calculated under a different method of calculating lost profits—the yardstick method. See
CDW, id. at 823-25. The District Court noted that the yardstick method requires use of
benchmarks “sufficiently comparable that they may be used as accurate predictors of what
the target would have done.” Id. at 824. However, the District Court concluded that the
expert did not select such a benchmark when he averaged the growth rates of plaintiff’s
other branches in the Great Lakes region without any explanation of why those branches,
much less their average, were appropriate for comparison. See id. The District Court also
held that other information analyzed by the expert tended to demonstrate that his analysis
was “way off base.” Id. at 825.

all the included branches d[id] not approximate the growth of any individual branch”; and

(5) analyzed, as an alternative measure of lost profits, information that tended to show that

his predicted revenue growth was “way off base.” Id. at 824-25 (cleaned up).

       In contrast, here, Ms. Cardell explained why 2015 and not 2014 was the appropriate

base year, described how her market research led her to incorporate market forces into her

analysis, and indicated that PNSI’s profitability, its period of growth, and other factors led

her to believe that the profitability in 2015 would have been replicated in the ensuing years

but for the doctors’ departures, and no other evidence undermined her analysis. This is not

a case of the expert arriving at a conclusion by ipse dixit—as discussed above, Ms. Cardell

fully explained her reasoning for selecting 2015 as the base year and classifying some

member draws as expenses based on her years of experience as a certified public accountant

conducting similar analyses, doing market research, and consulting another expert. The

circuit court rejected Ms. Cardell’s conclusion about 2015 being the appropriate base year

by faulting her data, reasoning that she received too much input from PNSI, and

questioning the nature of her experience.

       Ms. Cardell’s use of PNSI’s accounting data and consultation with its staff did not

render her methodology unreliable. As noted by the Appellate Court, “‘[t]hese are the

kinds of data that accountants normally rely on in calculating future earnings [and lost

profits].’” Parkway, 255 Md. App. at 630, 283 A.3d at 772-73 (quoting Manpower, Inc. v.

Ins. Co. of Pa., 732 F.3d 796, 809 (7th Cir. 2013)) (second alteration in original). It is

difficult to imagine what other data might be more reliable or relevant to determining the

trajectory of a particular business. Indeed, even in CDW, 906 F. Supp. 2d at 822, the


District Court was unpersuaded by the defendant’s contention that the plaintiff’s expert

“blindly accepted as accurate, and without any verification, the figures supplied to him by”

the plaintiff. The District Court concluded that CDW was unlike cases in which “experts

were merely ‘parroting’ unsubstantiated views of their clients[.]” Id. The same logic

applies here.

       To the extent that the circuit court expressed concern about Ms. Cardell potentially

being biased, this clearly was a matter for the jury’s consideration. As we observed in

Falik v. Hornage, 413 Md. 163, 179, 991 A.2d 1234, 1244 (2010), “[a]n expert’s testimony

can have a compelling effect on a jury[,]” and “[t]hat is why, especially with expert

witnesses, wide latitude must be given a cross-examiner in exploring a witness’s bias or

motivation in testifying[.]” (Cleaned up).

       The circuit court undeniably questioned Ms. Cardell’s “experience for this particular

matter,” which was a consideration for the court in assessing the reliability of her

methodology. Like the Appellate Court, I would reject the premise that “knowledge of the

accounting principles . . . required to analyze profits for a . . . medical practice differs so

markedly from that required for other small businesses . . . that the proffered [certified

public accountant] must demonstrate particularized experience[.]” Parkway, 255 Md. App.

at 622, 283 A.3d at 768 (citations omitted). In addition, consideration of an expert’s level

of experience is not one of the Daubert-Rochkind factors adopted by this Court. Although

the list of factors has been described as not being “exhaustive[,]” State v. Matthews, 479

Md. 278, 314, 277 A.3d 991, 1012 (2022) (citations omitted), Maryland Rule 5-702(1)

provides that, in assessing the admissibility of expert testimony, a trial “court shall


determine[] whether the witness is qualified as an expert by knowledge, skill, experience,

training, or education[.]” In this case, the circuit court did not do that. My concern is that

the Daubert-Rochkind factors should not be used to exclude an expert’s testimony based

on the expert’s experience or training without making a finding under Maryland Rule 5-

702(1).

       Finally, although the circuit court emphasized that its conclusion that Ms. Cardell’s

testimony would not be useful to the jury was “a very, very slight factor” in its ruling, one

of the reasons that the circuit court expressed for finding that Ms. Cardell’s testimony

would not be useful was that the jury might not find that all of the departing doctors left

because of KatzAbosch’s actions, which it reasoned would render Ms. Cardell’s all-or-

nothing lost profits estimate unhelpful. Aside from this being another criticism of the data

that Ms. Cardell relied on, there is a fundamental problem with the circuit court’s

conclusion that Ms. Cardell’s testimony was not helpful because the jury might not accept

PNSI’s contention that all of the doctors left because of KatzAbosch. “[A] trial court is

not permitted to resolve disputes of material fact in determining whether a sufficient factual

basis exists to support an expert’s opinion. Doing so is a clear abuse of discretion.”

Oglesby v. Balt. Sch. Assocs., No. 26, Sept. Term, 2022, 2023 WL 4755689,

at *16 (Md. July 26, 2023). Under the circuit court’s logic, in Gen. Elec. Co. v. Joiner, 522

U.S. 136, 146-47 (1997), the trial court could have excluded the expert testimony, not

because of any analytical gap between the data and the opinion, but instead because the

jury might have found that the plaintiff never encountered the chemicals that allegedly

contributed to his cancer.


       Under the circumstances of the limited remand due solely to the circuit court’s

consideration of the 2021 normalizing adjustment, the circuit court would be free to continue

to consider matters such as Ms. Cardell’s lack of specialized experience, whether she may

be inherently biased, whether she relied too heavily on data from PNSI, and whether she

relied on information that a jury might reject.5 All of these circumstances go to the weight

to be given to an expert’s testimony, not admissibility. All of them are for the jury, not the

circuit court, to consider. The limited remand will essentially be a redo of the motion

hearing, giving the circuit court an opportunity to once again take these matters into

account, as the record establishes that these considerations reflect what the circuit court

thinks of Ms. Cardell’s testimony. In determining admissibility of expert testimony, a trial

court is “a gatekeeper,” not “an armed guard[,]” and, when expert “testimony rests upon

good grounds, based on what is known, it should be tested by the adversary process –

competing expert testimony and active cross-examination – rather than excluded from

jurors’ scrutiny for fear that they will not grasp its complexities or satisfactorily weigh its

inadequacies.” Matthews, 479 Md. at 322-23, 277 A.3d at 1017 (quoting Ruiz-Troche v.

Pepsi Cola of Puerto Rico Bottling Co., 161 F.3d 77, 85-86 (1st Cir. 1998)) (internal

quotation marks omitted). In this case, the better course of action is to affirm the decision

of the Appellate Court, which correctly reversed and remanded for further proceedings in


       5
        In my view, it was not for lack of effort or acumen that the circuit court veered off
course by taking these considerations into account—to the contrary, this is a complicated
case with a lot of moving parts. The record reflects that the circuit court set forth its ruling
on the motion to exclude, carefully laid out what it genuinely perceived to be issues with
Ms. Cardell’s testimony, turned to thoroughly addressing the Daubert-Rochkind factors,
and attempted to render the best possible decision that it could under the circumstances.

which Ms. Cardell’s testimony would be admissible. See Parkway, 255 Md. App. at 639,

283 A.3d at 778.

      For the above reasons, respectfully, I dissent.



