In the United States Court of Federal Claims
BID PROTEST
Nos. 19-1115C; 19-1162C; 19-1168C;
19-1169C; 19-1178C;
19-1189C; 19-1296C
(Consolidated)
(Filed Under Seal: June 4, 2020 | Reissued: June 25, 2020) ∗
TECHNOLOGY INNOVATION ALLIANCE LLC, et al.,

          Plaintiffs,

v.

THE UNITED STATES OF AMERICA,

          Defendant,

and

TIBER CREEK CONSULTING, INC.; VALIDATEK, INC.; INNOVATIVE GOVERNMENT SOLUTIONS JV, LLC; MISSION1ST GROUP, INC.; and REDTEAM, LLC,

          Defendant-Intervenors.

Keywords: Bid Protest; DISA; Technical Evaluations; Substantively Indistinguishable; Office Design Group v. United States; Best Value Tradeoff
W. Brad English, Maynard, Cooper & Gale, P.C., for Plaintiff Technology Innovation Alliance
LLC, Huntsville, AL. Emily J. Chancy and Michael W. Rich, Maynard, Cooper & Gale, P.C.,
Huntsville, AL, Of Counsel.
Ryan C. Bradel, Ward & Berry, PLLC, Washington, DC, for Plaintiff DirectViz Solutions, LLC.
∗ This opinion was originally issued under seal and the parties were given the opportunity to
request redactions. As requested by the parties, certain proprietary and pricing information has
been redacted, as noted in brackets, for this public version.
Stuart B. Nibley, K&L Gates, LLP, Washington, DC, for Plaintiff Foxhole Technology, Inc. Amy
C. Hoang, Erica L. Bakies, and Sarah F. Burgart, K&L Gates LLP, Washington, DC, Of
Counsel.
Ronald S. Perlman, Holland & Knight LLP, Washington, DC, for Plaintiff TENICA &
Associates, LLC. Mary B. Bosco and Daniel P. Hanlon, Holland & Knight LLP, Miami, FL, Of
Counsel.
David B. Dixon, Pillsbury Winthrop Shaw Pittman LLP, McLean, VA, for Plaintiff CollabraLink
Technologies, Inc. Meghan D. Doherty and Robert C. Starling, Pillsbury Winthrop Shaw
Pittman LLP, McLean, VA, Of Counsel.
Paul A. Debolt, Venable LLP, Washington, DC, for Plaintiff Tapestry Technologies, Inc.
Christopher G. Griesedieck and Christina E. Wood, Venable LLP, Washington, DC, Of Counsel.
Michael J. Gardner, Greenberg Traurig, LLP, McLean, VA, for Plaintiff Sealing Technologies,
Inc. Shomari B. Wade and Brett A. Castellat, Greenberg Traurig, LLP, McLean, VA, Of
Counsel.
Joshua A. Mandlebaum and Richard P. Schroeder, Trial Attorneys, Commercial Litigation
Branch, Civil Division, U.S. Department of Justice, Washington, DC, for Defendant, with whom
were Douglas G. Edelschick, Mikki Cottet, and Elizabeth A. Speck, Senior Trial Counsel,
Deborah A. Bynum, Assistant Director, Robert E. Kirschman, Jr., Director, and Joseph H. Hunt,
Assistant Attorney General. Travis L. Vaughan, Procurement and Intellectual Property Attorney,
Office of General Counsel, Defense Information Systems Agency, Ft. Meade, MD, Of Counsel.
Beth V. McMahon, ReavesColey, PLLC, Chesapeake, VA, for Defendant-Intervenor Innovative
Government Solutions JV, LLC. J. Bradley Reaves, ReavesColey, PLLC, Chesapeake, VA, Of
Counsel.
Laurel A. Hockey, Cordatis LLP, Arlington, VA, for Defendant-Intervenor ValidaTek, Inc.
David Cohen and John J. O’Brien, Cordatis LLP, Arlington, VA, Of Counsel.
Devon E. Hewitt and Scott E. Dinner, Protorae Law PLLC, Tysons, VA, for Defendant-
Intervenor Tiber Creek Consulting, Inc.
Justin A. Chiarodo, Blank Rome LLP, Washington, DC, for Defendant-Intervenor Mission1st
Group, Inc. Adam Proujansky and Carolyn Cody-Jones, Blank Rome LLP, Washington, DC, Of
Counsel.
Cameron Hamrick, Miles & Stockbridge P.C., Washington, DC, for Defendant-Intervenor
RedTeam, LLC. C. Peter Dungan, Miles & Stockbridge P.C., Washington, DC, Of Counsel.
OPINION AND ORDER
KAPLAN, Judge.
These seven consolidated bid protests arise out of award decisions made by the Defense
Information Systems Agency (“DISA” or “the agency”) in connection with its “Systems
Engineering Technology and Innovation” (“SETI”) procurement. Each of the protesters
submitted proposals in response to DISA’s solicitation. None were among the twenty-five
offerors selected as recipients of the multiple-award task-order contracts that DISA set aside for
small businesses.
The protests have several common themes. First and foremost, each of the protesters
contends that—in evaluating the technical merit of their proposals—the agency either failed to
recognize strengths and/or assigned the proposals undeserved weaknesses. The protesters
characterize the agency’s technical determinations as arbitrary and, in many instances, claim that
they reflect disparate treatment.
A second common theme of the consolidated protests concerns the agency’s use of
adjectival ratings when evaluating the technical merits of the proposals. According to many of
the protesters, the agency applied those ratings mechanically and gave them undue importance.
These protesters complain in particular about the substantial advantage enjoyed by competitors
that received an “Outstanding” rating on the first of the four evaluation factors (entitled
“Innovation”). They contend that the agency gave insufficient consideration to the other
technical factors and/or to price when making its award decisions. Some of the protesters also
contend that the agency’s price reasonableness analysis was flawed.
The Court has carefully reviewed the voluminous briefs and the administrative record in
these cases. It concludes that—given its narrow scope of review, the highly technical subject
matter of the procurement, the careful and well-documented multi-stage evaluation process the
agency employed, and the Plaintiffs’ failure to show that the agency committed any legal error—
none of the Plaintiffs have established entitlement to relief. Plaintiffs’ motions for judgment on
the administrative record must therefore be denied, and the government’s motion granted in its
entirety.
BACKGROUND
I. The Solicitation
A. Overview
DISA is a combat support element of the U.S. Department of Defense (“DoD”). On
February 22, 2017, it issued Solicitation No. HC1047-17-R-0001 (“the Solicitation”), which
requested proposals for indefinite delivery/indefinite quantity (“IDIQ”) multiple-award task
order contracts for SETI projects in support of DISA and DoD. Admin R. (“AR”) Tab 1 at 1, 8.
The services to be provided included “information technology [] engineering services, expertise,
and support in the planning, research, development, integration, and implementation activities
for future, proposed, current and legacy [DoD] and [DISA] IT capabilities, services, and
systems.” Id. at 11.
It was DISA’s stated intent to award ten contracts on an “unrestricted” basis and twenty
contracts restricted to small-business offerors. Id. at 103. DISA reserved the right, however, to
issue “more, less, or no contracts at all.” AR Tab 5 at 366 (Amendment 0004 of the Solicitation
issued on March 20, 2017). 1 All of the plaintiffs in these consolidated cases competed
unsuccessfully for small-business awards.
B. Evaluation Criteria
DISA announced that it would make awards to the offerors whose proposals presented
the best value to the government based upon an integrated assessment of five evaluation factors:
Innovation (Factor 1), Past Performance (Factor 2), Problem Statements (Factor 3), Utilization of
Small Business (Factor 4), and Price/Cost (Factor 5). Id. at 386–87. The non-price factors were
ranked in descending order of importance. Thus, “Factor 1 [wa]s more important than Factor 2
which [wa]s more important than Factor 3, [etc.].” Id. at 387. The four non-price factors, when
combined, were “significantly more important than cost or price,” in accordance with
FAR 15.304(e). Id.
Innovation (Factor 1)
As noted, Factor 1 (Innovation) was identified as the most important of the five
evaluative criteria. The Solicitation explained that DISA considered “fostering a creative culture
and driving Innovation in defense of the country” to be the “paramount success criteria in
executing the SETI [c]ontract.” Id. at 375. It advised, therefore, that the government sought the
services of “innovative companies that accelerate attainment of new information system
capabilities.” Id.
For purposes of evaluation, the Solicitation defined “innovative” as “(1) any new
technology, process, or method, including research and development; or (2) any new application
of an existing technology, process, or method.” Id. The Solicitation identified three levels of
innovation: 1) “Incremental Improvements,” which are “[t]ypically representative of smaller
tweaks that advance the core mission and/or a focused effort on continuous improvements to
current processes, customer experiences, and mission services”; 2) “Major Advancements,”
which are “[t]ypically representative of creating a new and enhanced way of doing business
and/or a new and enhanced way for the customer to interact with the system”; and 3) “Disruptive
Innovation,” which is “[t]ypically representative of an innovation that transforms an existing
market or sector by introducing simplicity, convenience, accessibility, and affordability where
complication and high cost are the status quo.” Id.
The Solicitation instructed offerors to address five innovation-related topics in their
proposals: (1) Corporate Philosophy/Culture on Innovation; (2) Investment in Innovation; (3)
History of Engineering and Deploying Innovative Solutions; (4) Outreach and Participation; and
1 The Solicitation was amended five times, most recently on March 23, 2017. See AR Tab 6. The substance of these amendments is not relevant to the present protests. The Court cites to the version of the Solicitation contained in the record as Amendment 0004 because it is the most recent and complete version available.
(5) Certifications, Accreditations, Awards, Achievements, and Patents. Id. at 376–78. For each
topic, the offerors were required to answer a series of questions.
For example, under the topic of Corporate Philosophy/Culture on Innovation, offerors
were directed to provide their definition of innovation and to explain, among other things: how
they managed risk and would share that risk with DISA; how they supported their employees’
pursuit of innovation; and how they tracked innovation. Id. at 376–77. Under the Investment in
Innovation topic, offerors were required to describe their philosophy on innovation investments;
how they trained employees regarding their philosophy; their investment in laboratory and
testing space; and their knowledge management methodology. Id. at 377. With regard to the
History of Engineering and Deploying Innovative Solutions topic, offerors were asked to
describe: their history of innovation, including any failures; their experience of developing
solutions and sustaining them from infancy through delivery (especially if related to DISA
missions); and other company models for integrating innovation. Id. Under the topic of Outreach
and Participation, offerors were to identify any relationships with and contributions made to
research organizations. Id. at 377–78. Finally, offerors were required to list any certifications,
accreditations, awards, achievements, or patents received for innovation. Id. at 378.
The rating under Factor 1 would be based on the “consideration of the strengths,
weaknesses, significant weaknesses, uncertainties and deficiencies assessed.” Id. at 387.
The agency would use the color/adjective and risk ratings set forth in the following table:
Combined Technical/Risk Ratings for Innovation

BLUE (Outstanding): Proposal addresses all Innovation elements and indicates an exceptional approach and understanding of Innovation. Strengths far outweigh any weaknesses. Risk of unsuccessful performance is very low.

PURPLE (Good): Proposal addresses all Innovation elements and indicates a thorough approach and understanding of Innovation. Proposal contains strengths which outweigh any weaknesses. Risk of unsuccessful performance is low.

GREEN (Acceptable): Proposal addresses all Innovation elements and indicates an adequate approach and understanding of Innovation. Strengths and weaknesses, if any, are offsetting or will have little or no impact on Contract performance. Risk of unsuccessful performance is no worse than moderate.

YELLOW (Marginal): Proposal does not clearly address all Innovation elements and has not demonstrated an adequate approach and understanding of Innovation. The proposal has one or more weaknesses which are not offset by strengths. Risk of unsuccessful performance is high.

RED (Unacceptable): Proposal does not address all Innovation elements and contains one or more deficiencies. Proposal is unawardable.
Id. at 387–88.
According to the Solicitation, a proposal could receive a more favorable rating by
demonstrating any of the following: “long term corporate philosophy regarding
[i]nnovation”; “continuous investment in [i]nnovation through evidence of sustained
year-after-year investment in technologies and innovative ways to develop new
capability, improve service, reduce costs, and create efficiencies”; “validated processes
and procedures” that yielded results “based on innovative processes”; “evidence of
ongoing corporate investment in tools, training, facilities, personnel and equipment”;
“development of prototypes and solutions to mitigate issues and risk relevant to the SETI
PWS [(Performance Work Statement)]”; and “[e]xtensive publications on the topic of
[i]nnovation, including books and white papers.” Id. at 388.
Past Performance (Factor 2)
Under Factor 2, offerors were required to supply a summary of up to three relevant past
performance examples setting forth “what aspects of the [referenced c]ontracts they deem
relevant and to what specific task areas of the proposed effort they relate.” Id. at 378. Relevant
experience was to be included in a “Past Performance Information Template,” which was
Attachment 4 to the Solicitation. Id.; see also AR Tab 1 at 143–45 (Past Performance
Information Template).
The Solicitation advised that “[p]rojects [which] demonstrate [i]nnovation, multiple
experiences or more technically difficult work may receive higher relevancy ratings.” AR Tab 5
at 378. For each past performance example, offerors were required to submit formal performance
evaluations if available. Id. If no evaluation was available, the offeror was directed to use the
“Past Performance Questionnaire and Cover Letter” accompanying the Solicitation as
Attachment 5. Id. at 379–80; see also AR Tab 1 at 146–51 (Past Performance Evaluation
Questionnaire Form).
The Solicitation provided that the contract efforts supplied as references “should
demonstrate as many of the project types included in the PWS (either individually or in
combination thereof) as possible.” AR Tab 5 at 379. It specified that such project types “may
include,” among others, “[p]rojects that engineered, implemented and tested a [s]olution,”
“[e]fforts that delivered or provided an innovative solution,” and “[p]rojects that delivered
innovative technical solutions designed to deliver new or enhanced technologies and services
faster and more efficiently.” Id. DISA cautioned that, ultimately, “the burden of providing
detailed, current, accurate and complete past performance information rest[ed] with the
[o]fferor.” Id. at 380.
The past performance evaluation was designed to “assess[] the degree of confidence the
Government h[ad] in an [o]fferor’s ability to supply solutions[] and services that me[t] user’s
needs, based on a demonstrated record of performance.” Id. at 388. It would include an
assessment of recency; experience earned more than three years before the date of the
Solicitation would not be evaluated. Id. The government would then rate the relevancy of the
past performance reference, using the following criteria:
Past Performance Relevancy Ratings

VERY RELEVANT: Present/past performance effort involved essentially the same scope and magnitude of effort and complexities this solicitation requires.

RELEVANT: Present/past performance effort involved similar scope and magnitude of effort and complexities this solicitation requires.

SOMEWHAT RELEVANT: Present/past performance effort involved some of the scope and magnitude of effort and complexities this solicitation requires.

NOT RELEVANT: Present/past performance effort involved little or none of the scope and magnitude of effort and complexities this solicitation requires.
Id. at 389. Contracts that contained innovative solutions were considered “more relevant than
those that did not.” Id.
Past performance examples found “Not Relevant” were not further evaluated. Id. Those
found relevant (at any level) would receive a performance quality assessment rating using the
following criteria:
Past Performance Quality Assessment

EXCEPTIONAL (E)/BLUE: During the Contract period, Contractor performance meets or met Contractual requirements and exceeds or exceeded many to the Government’s benefit. The Contractual performance of the element or sub-element being assessed was accomplished with few minor problems for which corrective actions taken by the Contractor were highly effective.

VERY GOOD (VG)/PURPLE: During the Contract period, Contractor performance meets or met Contractual requirements and exceeds or exceeded some to the Government’s benefit. The Contractual performance of the element or sub-element being assessed was accomplished with some minor problems for which corrective actions taken by the Contractor were effective.

SATISFACTORY (S)/GREEN: During the Contract period, Contractor performance meets or met Contractual requirements. The Contractual performance of the element or sub-element being assessed contained some minor problems for which corrective actions taken by the Contractor appear or were satisfactory.

MARGINAL (M)/YELLOW: During the Contract period, Contractor performance does not or did not meet some Contractual requirements. The Contractual performance of the element or sub-element being assessed reflects a serious problem for which the Contractor has not yet identified corrective actions. The Contractor’s proposed actions appear only marginally effective or were not fully implemented.

UNSATISFACTORY (U)/RED: During the Contract period, Contractor performance does not or did not meet most Contractual requirements and recovery in a timely manner is not likely. The Contractual performance of the element or sub-element contains serious problem(s) for which the Contractor’s corrective actions appear or were ineffective.

NOT APPLICABLE (N)/WHITE: Unable to provide a rating. Contract did not performance for this aspect. Do not know.
Id. at 389–90.
Based on the relevancy and quality assessments of the past performance examples
submitted, the agency then assigned an integrated performance confidence assessment rating as
follows:
Performance Confidence Assessments

SUBSTANTIAL CONFIDENCE: Based on the Offeror’s recent/relevant performance record, the Government has a high expectation that the Offeror will successfully perform the required effort.

SATISFACTORY CONFIDENCE: Based on the Offeror’s recent/relevant performance record, the Government has a reasonable expectation that the Offeror will successfully perform the required effort.

NEUTRAL CONFIDENCE: No recent/relevant performance record is available or the Offeror’s performance record is so sparse that no meaningful confidence assessment rating can be reasonably assigned. The offeror may not be evaluated favorably or unfavorably on past performance.

LIMITED CONFIDENCE: Based on the Offeror’s recent/relevant performance record, the Government has a low expectation that the Offeror will successfully perform the required effort.

NO CONFIDENCE: Based on the Offeror’s recent/relevant performance record, the Government has no expectation that the Offeror will be able to successfully perform the required effort.
Id. at 390.
Problem Statement Narratives (Factor 3)
Under Factor 3, the government evaluated offerors’ responses to hypothetical problems
included with the Solicitation at Attachment 7. Id. at 380. Offerors were required to “provide as
specifically as possible the actual methodology to be used for accomplishing/satisfying the[]
requirements” in the narratives. Id. Though the problem statements were “notional in nature,” the
Solicitation explained, “[t]hey give the Government insights into each offeror’s ability to meet
DISA requirements in numerous and diverse technical areas and into each offeror’s problem
solving methodologies related to broad problems that could be solved by any number of
technologies.” Id.
Offerors for the small business restricted suite of awards were required to respond to
Problem Statement No. 3 and Problem Statement No. 4. Id. The problem statements were of
equal weight. Id. at 391.
The government’s evaluation of Factor 3 “us[ed] a combined technical/management
rating and risk rating,” which considered “risk in conjunction with the strengths, weaknesses, and
deficiencies in determining technical ratings.” Id. at 390. The ratings assigned would be based on
the criteria set forth in the following table:
Combined Technical/Risk Rating

BLUE (Outstanding): Proposal meets requirements and indicates an exceptional approach and understanding of the requirements. Strengths far outweigh any weaknesses. Risk of unsuccessful performance is very low.

PURPLE (Good): Proposal meets requirements and indicates a thorough approach and understanding of the requirements. Proposal contains strengths which outweigh any weaknesses. Risk of unsuccessful performance is low.

GREEN (Acceptable): Proposal meets requirements and indicates an adequate approach and understanding of the requirements. Strengths and weaknesses are offsetting or will have little or no impact on Contract performance. Risk of unsuccessful performance is no worse than moderate.

YELLOW (Marginal): Proposal does not clearly meet requirements and has not demonstrated an adequate approach and understanding of the requirements. The proposal has one or more weaknesses which are not offset by strengths. Risk of unsuccessful performance is high.

RED (Unacceptable): Proposal does not meet requirements and contains one or more deficiencies. Proposal is unawardable.
Id. at 391.
Small Business Participation and Commitment Plan (Factor 4)
Under Factor 4, proposals were evaluated “on the level of proposed participation of U.S.
small businesses in the performance of th[e] acquisition.” Id. at 382. Offerors were required to
“articulate how small businesses will participate through performance as a small business Prime
Offeror and/or ‘first tier’ small business subcontracting only.” Id. (emphasis removed). In
evaluating an offeror’s small business participation plan, the agency considered: the extent to
which small businesses were identified in the proposal; the commitment to use small businesses;
whether the offeror identified the complexity and variety of work to be performed by small
businesses; the extent of participation of small business prime offerors and small business
subcontractors as defined by “the percentage of the value of the total acquisition”; and the goals
proposed by the offeror in terms of a “percentage of the total acquisition value (TAV).” Id. at
383. The government suggested, but did not require, that offerors use the “Small Business
Participation Proposal” included as Attachment 8 of the Solicitation. Id.; see also AR Tab 1 at
160–61 (Small Business Participation and Commitment Proposal Format).
Offerors would be assigned one of the following ratings for Factor 4:
Small Business Rating Method

BLUE (Outstanding): Proposal indicates an exceptional approach and understanding of the small business objectives.

PURPLE (Good): Proposal indicates a thorough approach and understanding of the small business objectives.

GREEN (Acceptable): Proposal indicates an adequate approach and understanding of the small business objectives.

YELLOW (Marginal): Proposal has not demonstrated an adequate approach and understanding of the small business objectives.

RED (Unacceptable): Proposal does not meet small business objectives.
AR Tab 5 at 392.
Price/Cost (Factor 5)
Offerors were to submit their proposed rates in a spreadsheet appended to the Solicitation
as Attachment 9. Id. at 384. The total proposed price would “consist of the [contractor’s]
proposed rates for the base period, all option periods, to include the option pricing for [an]
additional six-month period.” Id. Only the total proposed price would be used to calculate
tradeoffs between price and non-price factors. Id. The Solicitation stated that price proposals
would be evaluated using at least one of the techniques detailed in FAR 15.404 to determine
reasonableness and completeness. Id. at 393.
C. The Source Selection Review Process
The Solicitation prescribed a multi-level evaluation and selection process. See id. at 393–
94. At the first stage, five separate technical evaluation boards (“TEBs”) would review each
proposal. These were the Innovation Evaluation Board (“IEB”), the Past Performance Evaluation
Board (“PPEB”), the Problem Statement Evaluation Board (“PSEB”), the Small Business
Evaluation Board (“SBEB”), and the Price Evaluation Board (“PEB”). Id.
Upon completion of their evaluations, the Chair of each of these TEBs would brief the
Source Selection Evaluation Board (“SSEB”). See AR Tab 65d at 4690. The SSEB would then
measure the TEBs’ evaluations “against the solicitation requirements and the approved
evaluation criteria to ensure an equitable, impartial, and comprehensive evaluation against the
solicitation requirements.” AR Tab 5 at 393. According to the Solicitation, “[t]he fundamental
responsibility of the SSEB [wa]s to provide the Source Selection Advisory Council (SSAC) and
the Source Selection Authority (SSA) with information to make informed and reasoned
selections.” Id. at 393–94. The SSEB would therefore “prepare summary reports containing
adjectival assessments for each factor and their supporting rationale, including the costs/prices
for each Offeror and brief the SSAC.” Thereafter, the SSAC would prepare “a comparative
analysis and brief the SSA.” Id. at 394.
D. The Evaluations and Award Decisions
The agency received 112 proposals for the restricted suite. AR Tab 65d at 4688. Thirteen
of those proposals were removed from consideration because they were either late, incomplete,
or submitted by mail rather than electronically, leaving ninety-nine to be evaluated. Id.
The TEBs convened from April 10, 2017 through February 20, 2018 to perform the
technical evaluations. AR Tab 63 at 4566 (SSAC report). The SSEB began meeting on
December 4, 2017 to review those evaluations. Id. Its review included a briefing by each TEB
Chair. Id. Ultimately, the SSEB prepared a report of almost 1100 pages which summarized the
evaluations of every proposal under each of the five factors. See generally AR Tab 65d.
In the meantime, the contracting officer (“CO”) recorded the results of the PEB’s price
fairness and reasonableness analysis in a memorandum dated March 6, 2019. AR Tab 65c at
4678. The PEB had compared the total proposed prices of each offeror against one another and
against the Independent Government Cost Estimate (“IGCE”). Id. It also determined the average
total proposed price of all offerors ($193,870,790) and the median price ($187,899,682). Id.
Comparatively, seventy-nine offerors were lower than the IGCE and twenty were higher. Id.
Forty-three offerors were above the average and fifty-six were below the average. Id. The highest
proposed price was $381,206,594 and the lowest was $99,924,321, a difference of $281,282,273.
Id.
The CO acknowledged a “large variance amongst the total proposed prices and the
individual labor rates themselves.” Id. at 4679. He stated, however, that the large variance “was
not unanticipated and did not cause a concern that the Government would pay an unreasonably
high price.” Id. The CO explained that “the solicitation introduced a great amount of risk of
paying a high, but not unreasonable, price, based upon the need to obtain a range of potentially
innovative solutions.” Id. He further noted that the Solicitation also “included significant risk for
any offeror” and for that reason “offerors had to make business decisions on how to strategize
their pricing which led to the wide dispersion of prices.” Id. at 4680. The government, he noted,
“accepted the potential risk of paying higher prices, but not unreasonable prices, for higher
quality solutions.” Id. In short, the CO concluded, “the risk of paying too much for requirements
is low compared to the needs of the Government.” Id. This was “particularly” true, he found,
“given the mitigating factor that there will be competition to refine prices at the order level.” Id.
The SSAC reviewed the SSEB report and the CO’s price memorandum. It prepared a
“written comparative analysis of offers and recommendations” for the Source Selection
Authority (“SSA”). AR Tab 5 at 394. The SSAC’s report explains the basis for its
recommendations in detail.
The SSAC explained that it had removed forty offerors from consideration at the outset
because they received “Unacceptable” ratings for either the Innovation or Problem Statement
factors. AR Tab 63 at 4568–69. Of the fifty-nine proposals still under consideration, twenty had
received an “Outstanding” rating for Factor 1, seventeen were rated “Good,” seven were rated
“Acceptable,” and fifteen were rated “Marginal.” Id. at 4567, 4569; AR Tab 63a (final ratings
table).
The SSAC started its comparative analysis by reviewing the proposals that had received
an “Outstanding” rating for Factor 1 “from the highest technically rated to the lowest technically
rated [] considering the specific number and individual benefits provided by the proposal’s
strengths.” AR Tab 63 at 4569. Ultimately, the SSAC recommended that eighteen of these
offerors be awarded a contract. The other two offerors with “Outstanding” ratings on Factor 1
received “Marginal” ratings on both Problem Statements, and were not recommended for an
award.
The SSAC next reviewed the proposals that received a “Good” rating for Factor 1, again
in descending order from the highest to the lowest technically rated. It recommended an award to
five of the twenty-five offerors whose proposals received such a rating. Id. at 4656. Therefore, it
recommended a total of twenty-three proposals for award. Id. The SSA reviewed and concurred
with the CO’s conclusions as to price fairness and reasonableness as well as the SSAC’s
recommendations. AR Tab 65 (SSA report). As a result, it concluded that the following offerors
should receive awards:
Id. at 4659.103.
The Agency notified successful offerors of its award decision on July 8, 2019. See
generally AR Tab 66. Unsuccessful offerors were notified by letter dated July 9, see generally
AR Tab 69, and received debriefing letters the next day, see AR Tabs 70–80.
II. GAO Proceedings
CEdge Software Consultants, LLC and Tapestry Technologies, Inc. filed protests
challenging the Agency’s award decision with the Government Accountability Office (“GAO”)
on July 19, 2019. AR Tab 104 at 9013; AR Tab 106 at 9238. Shortly thereafter, CyberData
Technologies, Inc., Sealing Technologies, Inc., RedTeam LLC, Tenica & Associates LLC,
Mission1st Group, Inc., Foxhole Technology, Inc., DirectViz Solutions, LLC, and CollabraLink
Technologies, Inc. filed GAO protests. See AR Tabs 108–125.
III. The Present Suit
Lead Plaintiff Technology Innovation Alliance LLC (“TIA”) (Case No. 19-1115C) filed a
protest in the Court of Federal Claims on July 31, 2019, while proceedings were still pending
before GAO. ECF No. 1. 2 GAO therefore dismissed the protests before it on August 8, 2019. AR
Tab 127 at 10961 (GAO decision). As a result, most of the offerors that had filed GAO protests
filed suit in this court: DirectViz Solutions, LLC (Case No. 19-1162C) (filed August 9, 2019);
Foxhole Technology, Inc. (Case No. 19-1168C), Tenica and Associates, LLC (Case No. 19-
1169), CollabraLink Technologies, Inc. (Case No. 19-1178), and RedTeam, LLC (Case No. 19-
1179) (filed August 12, 2019); Tapestry Technologies, Inc. (Case No. 19-1189) (filed August 13,
2019); Mission1st Group, Inc. (Case No. 19-1211) (filed August 15, 2019); CEdge Software
Consultants, LLC (Case No. 19-1231) (filed August 19, 2019); 3 and Sealing Technologies, Inc.
(Case No. 19-1296C) (filed August 27, 2019).
The Court consolidated all ten protests on August 28, 2019, and designated the TIA
protest as the lead case. See, e.g., Consolidation Order, Sealing Techs., Inc. v. United States, No.
19-1296C (Fed. Cl. Aug. 28, 2019), ECF No. 13. On September 5, 2019, the Court granted
motions to intervene filed by three successful offerors: Innovative Government Solutions JV,
LLC, ValidaTek, Inc., and Tiber Creek Consulting, Inc. ECF No. 54. In accordance with the
Court’s scheduling order, the Plaintiffs filed their amended complaints and motions for judgment
on the administrative record on October 29, 2019. ECF Nos. 70–86. 4
On December 3, 2019, the government filed a motion to remand the case in part to the
agency “to reconsider two aspects of the challenged [] decision.” Def.’s Partial Consent Mot. for
Voluntary Partial Remand at 1, ECF No. 92. Specifically, the government sought a remand for
further consideration of 1) whether “the agency erred in the application of solicitation criteria to
the proposed solutions for problem statement 3” and 2) whether “the agency erred in not
evaluating two past performance references [submitted by Sealing Technologies] as non-
compliant.” Id. at 2. The Court granted the motion for partial remand and required the parties to
file a joint status report regarding the need for further litigation by December 23, 2019. ECF
No. 94.
The parties filed the joint status report as requested on December 23, 2019, informing the
Court that, as a result of the remand, the agency planned to make awards to Plaintiffs Mission1st
and RedTeam. ECF No. 97. The agency reaffirmed its decision not to make an award to either
TIA or Sealing. Id.
2 Unless otherwise noted, all citations to the Court’s electronic case filing system refer to the docket of the lead case, No. 19-1115C.

3 CEdge Software Consultants LLC voluntarily dismissed its protest on October 25, 2019. ECF No. 69.

4 On August 19, 2019, the government informed the Court and the parties that DISA had agreed not to award any task order in excess of $500 under the protested procurement before April 30, 2020, unless the Court had reached a decision permitting it to do so. ECF No. 25.
The parties also proposed an amended briefing schedule, id., which the Court adopted,
ECF No. 98. The Court dismissed the claims of Mission1st and RedTeam as moot on January 3,
2020. Id. Mission1st and RedTeam subsequently filed motions to intervene on the government’s
side, ECF Nos. 99–100, which the Court granted, ECF Nos. 109–10.
After the government issued its decision on remand, ECF No. 96, several plaintiffs
amended their complaints and pending motions for judgment on the administrative record, ECF
Nos. 101–08. The government filed its cross-motion for judgment on the administrative record
on February 11, 2020. ECF No. 123. The motions are fully briefed.
On April 10, 2020, the government informed the Court and the parties that “to facilitate
judicial review, the Defense Information Systems Agency has agreed not to award any task order
in excess of $500 pursuant to the protested contracts before June 5, 2020.” ECF No. 148.
The Court has determined that oral argument is unnecessary. For the reasons that follow,
the Court finds that the remaining protests lack merit. Accordingly, the Court DENIES the
pending motions for judgment upon the administrative record filed by the Plaintiffs and
GRANTS the government’s cross-motion.
DISCUSSION
I. Subject-Matter Jurisdiction
The Court of Federal Claims has jurisdiction over bid protests in accordance with the
Tucker Act, 28 U.S.C. § 1491, as amended by the Administrative Dispute Resolution Act of
1996 § 12, 28 U.S.C. § 1491(b). Specifically, the Court has the authority “to render judgment on
an action by an interested party objecting to a solicitation by a Federal agency for bids or
proposals for a proposed contract or to a proposed award or the award of a contract or any
alleged violation of statute or regulation in connection with a procurement or a proposed
procurement.” 28 U.S.C. § 1491(b)(1); see also Sys. Application & Techs., Inc. v. United States,
691 F.3d 1374, 1380–81 (Fed. Cir. 2012) (observing that § 1491(b)(1) “grants jurisdiction over
objections to a solicitation, objections to a proposed award, objections to an award, and
objections related to a statutory or regulatory violation so long as these objections are in
connection with a procurement or proposed procurement”).
To possess standing to bring a bid protest, a plaintiff must be an “interested party”—i.e.,
an actual or prospective bidder (or offeror) who possesses a direct economic interest in the
procurement. Sys. Application & Techs., Inc., 691 F.3d at 1382 (citing Weeks Marine, Inc. v.
United States, 575 F.3d 1352, 1359 (Fed. Cir. 2009)); see also Orion Tech., Inc. v. United States,
704 F.3d 1344, 1348 (Fed. Cir. 2013). An offeror has a direct economic interest in a pre-award,
post-evaluation protest if the protester demonstrates that, absent the alleged errors, it would have
a “substantial chance” of receiving the award; that is, if the protester “could have likely
competed for the contract” but for the alleged errors. Orion, 704 F.3d at 1348–49; see also Myers
Investigative & Sec. Servs., Inc. v. United States, 275 F.3d 1366, 1370 (Fed. Cir. 2002) (holding
that “prejudice (or injury) is a necessary element of standing”). The Court assumes well-pled
allegations of error to be true for purposes of the standing inquiry. Square One Armoring Serv.,
Inc. v. United States, 123 Fed. Cl. 309, 323 (2015) (citing Digitalis Educ. Sols., Inc. v. United
States, 97 Fed. Cl. 89, 94 (2011), aff’d, 664 F.3d 1380 (Fed. Cir. 2012)).
Each Plaintiff in this protest challenges the agency’s ranking of its proposal and non-
selection for award. Their suits thus challenge a procurement decision and fall within the Court’s
“broad grant of jurisdiction over objections to the procurement process.” Sys. Application &
Techs., Inc., 691 F.3d at 1381. Moreover, each Plaintiff has sufficiently alleged a direct
economic interest in the procurement. Taking the allegations of error to be true—as it must to
determine standing—the Court finds that each Plaintiff “could likely have competed for the
contract” in the absence of the alleged errors. Accordingly, the Plaintiffs are interested parties
and the Court has subject-matter jurisdiction over their claims.
II. Motions for Judgment on the Administrative Record
Parties may move for judgment on the administrative record pursuant to Rule 52.1 of the
Rules of the Court of Federal Claims (“RCFC”). Pursuant to RCFC 52.1, the Court reviews an
agency’s procurement decision based on the administrative record. See Bannum, Inc. v. United
States, 404 F.3d 1346, 1353–54 (Fed. Cir. 2005). The court makes “factual findings under RCFC
[52.1] from the record evidence as if it were conducting a trial on the record.” Id. at 1357. Thus,
“resolution of a motion respecting the administrative record is akin to an expedited trial on the
paper record, and the Court must make fact findings where necessary.” Baird v. United States, 77
Fed. Cl. 114, 116 (2007). The Court’s inquiry is “whether, given all the disputed and undisputed
facts, a party has met its burden of proof based on the evidence in the record.” A&D Fire Prot.,
Inc. v. United States, 72 Fed. Cl. 126, 131 (2006). Unlike a summary judgment proceeding,
genuine issues of material fact will not foreclose judgment on the administrative record.
Bannum, 404 F.3d at 1356.
III. Scope of Review of Procurement Decisions
The Court reviews challenges to procurement decisions under the same standards used to
evaluate agency actions under the Administrative Procedure Act, 5 U.S.C. § 706 (“APA”). See
28 U.S.C. § 1491(b)(4) (stating that “[i]n any action under this subsection, the courts shall
review the agency’s decision pursuant to the standards set forth in section 706 of title 5”). Thus,
to successfully challenge an agency’s procurement decision, a plaintiff must show that the
agency’s decision was “arbitrary, capricious, an abuse of discretion, or otherwise not in
accordance with law.” 5 U.S.C. § 706(2)(A); see also Bannum, 404 F.3d at 1351.
This “highly deferential” standard of review “requires a reviewing court to sustain an
agency action evincing rational reasoning and consideration of relevant factors.” Advanced Data
Concepts, Inc. v. United States, 216 F.3d 1054, 1058 (Fed. Cir. 2000) (citing Bowman Transp.,
Inc. v. Arkansas-Best Freight Sys., Inc., 419 U.S. 281, 285 (1974)). As a result, where an
agency’s action has a reasonable basis, the Court cannot substitute its judgment for that of the
agency. See Honeywell, Inc. v. United States, 870 F.2d 644, 648 (Fed. Cir. 1989) (holding that
as long as there is “a reasonable basis for the agency’s action, the court should stay its hand even
though it might, as an original proposition, have reached a different conclusion”) (quoting M.
Steinthal & Co. v. Seamans, 455 F.2d 1289, 1301 (D.C. Cir. 1971)).
The Court’s scope of review is particularly narrow when it comes to agency judgments
regarding the technical merits of particular proposals. As the court of appeals observed in E.W.
Bliss Co. v. United States, 77 F.3d 445, 449 (Fed. Cir. 1996), protests concerning “the minutiae
of the procurement process in such matters as technical ratings . . . involve discretionary
determinations of procurement officials that a court will not second guess.” See also RX Joint
Venture, LLC v. United States, 145 Fed. Cl. 207, 213 (2019) (“[E]valuations of proposals for
their technical quality involve the specialized expertise of an agency’s subject-matter experts.”);
CSC Gov’t Sols., Inc. v. United States, 129 Fed. Cl. 416, 434 (2016) (explaining that great
deference must be afforded to an agency where the court reviews a technical evaluation “because
of the highly specialized, detailed, and discretionary analyses frequently conducted by the
government in that regard” (internal quotation marks omitted)). The Court’s function is therefore
limited to “determin[ing] whether ‘the contracting agency provided a coherent and reasonable
explanation of its exercise of discretion.’” Impresa Construzioni Geom. Domenico Garufi v.
United States, 238 F.3d 1324, 1332–33 (Fed. Cir. 2001) (quoting Latecoere Int’l, Inc. v. U.S.
Dep’t of Navy, 19 F.3d 1342, 1356 (11th Cir. 1994)).
IV. Merits
A. Technology Innovation Alliance (Case No. 19-1115C)
The Agency’s Initial Evaluation
TIA submitted its proposal on April 4, 2017. AR Tab 160 at 25711 (TIA proposal). TIA’s
proposal earned six strengths and was assigned two weaknesses under Factor 1. AR Tab 65d at
5698–00 (SSEB report). For Factor 3, Problem Statement No. 3, TIA was assigned one strength,
which was offset by one weakness and one significant weakness. Id. at 5702. No strengths or
weaknesses were assessed for Factor 3, Problem Statement No. 4. For Factor 4, TIA received
three strengths and no weaknesses. Id. at 5704.
The TEBs assigned TIA’s proposal the following adjectival ratings, which were affirmed
by the SSEB and the SSAC:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Satisfactory
Factor 3 – PS3 Rating: Marginal
Factor 3 – PS4 Rating: Acceptable
Factor 4 – Small Business: Acceptable
Id. at 5705; AR Tab 65e at 5852 (SSAC memorandum). TIA’s proposed price was [***], which
the agency ranked [***] out of the ninety-nine offerors. AR Tab 65e at 5852.
The SSAC concluded that TIA’s proposal “was priced in the middle” and “was not one of
the highest technically rated.” AR Tab 65e at 5853. It therefore recommended against awarding a
contract to TIA. The SSA agreed and TIA was not selected as an awardee. AR Tab 65 at
4659.79–.81 (Source Selection Decision Document (“SSDD”)).
Remand and Final Evaluation
After the present protest was filed, the agency requested a remand during which it re-
evaluated the marginal rating it assigned to TIA’s proposal with respect to Problem Statement
No. 3. ECF No. 92. On December 20, 2019, the Agency filed a Memorandum for the Record
containing the SSA’s analysis and decision on remand. ECF No. 96. It reflects that TIA’s rating
for Problem Statement No. 3 was changed from “Marginal” to “Acceptable,” resulting in the
following adjusted set of adjectival ratings:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Satisfactory
Factor 3 – PS3 Rating: Acceptable
Factor 3 – PS4 Rating: Acceptable
Factor 4 – Small Business: Acceptable
AR Tab 174 at 30008, ECF No. 96; see also id. at 30010 (noting a rating change from
“Marginal” to “Acceptable” for Problem Statement No. 3).
Notwithstanding the upward revision in TIA’s rating for Problem Statement No. 3, the
SSA again concluded that TIA’s proposal was not “among[] the most highly rated or the lowest
priced proposals and d[id] not represent a best value to the Government.” Id. TIA therefore again
did not receive an award.
TIA’s Protest
In its amended MJAR, TIA alleges that the agency “erred by assigning TIA two
weaknesses for its approach to risk in the Corporate Philosophy/Culture on Innovation aspect” of
Factor 1. Pl. TIA’s Am. Mot. for J. on the Admin. Rec. & Br. in Supp. Thereof (“TIA MJAR”) at
18–21, ECF No. 106 (capitalization altered). It further contends that it was subjected to disparate
treatment when it was not assigned three additional strengths under Factor 1. Id. at 21–24.
Finally, with respect to Factor 4, TIA argues that the Agency arbitrarily awarded strengths to
other offerors based on certifications that TIA also possessed but for which it was not assigned
any strengths. Id. at 24. The Court addresses each of these arguments below and finds them
without merit.
a. Assignment of Weaknesses to “Approach to Risk” Component
of TIA’s Proposal
The instructions concerning the “Corporate Philosophy/Culture on Innovation” category
of Factor 1 required offerors to provide information about, among other things, their “Approach
to Risk.” AR Tab 5 at 376. Specifically, the Solicitation required offerors to explain how they “manage risk in
an innovative environment” and how they “determine the level of acceptable risk.” Id. They were
also directed to provide an explanation of how they and DoD should share risk. Id. In addition,
the offerors were required to identify “the cost of failure” as well as “who should pay for it.” Id.
Finally, the offerors were required to state “[h]ow much failure” the government should accept.
Id. (internal quotation marks omitted).
The agency assigned two weaknesses to TIA’s proposal based on its responses to the
“Approach to Risk” questions. AR Tab 65d at 5700. First, it concluded that TIA “did not
adequately address how it would share risk with the DOD.” Id. According to the agency, “[t]his
flaw . . . increases [TIA’s] risk of failure to be innovative because a lack of understanding of how
to allocate risk between the company and the DOD throughout the development lifecycle could
impede managing and mitigating risks for future SETI projects.” Id. Second, the agency
concluded that TIA “did not provide sufficient information as to how much failure they believed
the Government should accept.” Id. It observed that this flaw “raises the risk of failure to be
innovative because the Offeror’s approach to failure sharing between themselves and the
Government is not well developed and could potentially impede innovation in future SETI task
orders.” Id.
TIA argues that it was arbitrary and irrational for the agency to conclude that its proposal
did not adequately address risk sharing or how much failure it believed the agency should accept.
It observes that “TIA’s proposal described [***] and the risk sharing partnership it contemplated
between TIA and the government.” TIA MJAR at 19 (citing AR Tab 160 at 25935–36). It further
asserts that its proposal [***]. Id. (citing AR Tab 160 at 25935).
As explained above, given the technical nature of the issues, this Court’s scope of review
of the agency’s decision regarding whether TIA satisfactorily addressed the agency’s questions is
extremely narrow. The agency examined the content of TIA’s responses under the “Approach to
Risk” subtopic and found them lacking in two critical respects. First, it concluded that TIA did
not provide adequate information about how it proposed to share risks with DoD. The record
reflects a rational basis for this highly discretionary determination. The focus of the narrative in
TIA’s proposal was on risks to TIA, and there was scant if any mention of shared risks. See AR
Tab 160 at 25935 (describing [***]). Indeed, the word “share” does not appear in the narrative
discussing this point. Nor did TIA provide any discernible analysis regarding how much failure
the agency should find acceptable under the SETI contract; it merely offered the generalization
that [***]. Id.
In reviewing the reasonableness of the agency’s determination, it is instructive to
compare TIA’s responses with those that were not found lacking. The comparison reveals that
other offerors supplied more specificity regarding risk sharing than did TIA, in some instances
devoting entire sections to the topic. See, e.g., AR Tab 163 at 27250 (Intervenor ValidaTek, Inc.
proposal) (explaining that [***]); AR Tab 162 at 26862 (Intervenor Tiber Creek Consulting, Inc.
proposal) (explaining that [***]); AR Tab 145 at 19485 (Intervenor Innovative Government
Solutions JV, LLC proposal) (explaining in section titled [***]).
Similar results obtain when reviewing other offerors’ responses to the question of how
much failure the government should be willing to accept. These proposals again contained more
specificity than did TIA’s. See, e.g., AR Tab 162 at 26862 (Tiber Creek Consulting, Inc.
proposal) (explaining [***]); AR Tab 163 at 27251 (ValidaTek proposal) (explaining [***]). It is
also instructive that the agency similarly assigned a weakness to another offeror’s proposal
where, like TIA, it only addressed the effect of failure on the offeror, and not on the government.
See AR Tab 65d at 5182 (referring to AR Tab 145 at 19485) (assigning the same weakness to
Innovative Government Solutions JV, LLC where the proposal [***]).
In short, the Court concludes that the agency acted within its discretion when it found
TIA’s proposal deficient with respect to the way it addressed the approach to risk questions
posed by the Solicitation. TIA’s protest on this ground therefore lacks merit.
b. Disparate Treatment in Assignment of Two Weaknesses
Rather than One under “Approach to Risk” Topic
In addition to challenging the substance of the agency’s conclusion that its proposal fell
short in addressing its approach to risk, TIA also argues that DISA subjected it to disparate
treatment when it assigned the proposal two weaknesses for these shortcomings rather than one.
See TIA MJAR at 20. For example, TIA observes, awardee Applied Systems Engineering was
assigned one weakness for failing to “provide information as to who should pay for the cost of
failure . . . [and information] on how much failure they believe the Government should accept.”
AR Tab 65d at 4761. In addition, Interop-ISHPI JV, LLC was assigned a single weakness for
including an “immature description as to what they perceive the cost of failure to be, who should
pay for innovation-related failures at any/all points of the developmental lifecycle, and how
much failure the Government should accept.” Id. at 5204. And Mission Support, LP was
assigned only one weakness for a failure to “provide information as to what they perceive the
cost of failure to be . . . [and] a comprehensive understanding of who should pay for innovation-
related failures at any/all points of the developmental lifecycle.” Id. at 5326. But see id.
(assigning a second weakness to Mission Support, LP for failing to “provide information into
how their company determines the level of acceptable risk”).
As the court of appeals recently held in Office Design Group v. United States, to prevail
on a disparate treatment claim a protestor must show either “that the agency unreasonably
downgraded its proposal for deficiencies that were ‘substantively indistinguishable’ or nearly
identical from those contained in other proposals” or that “the agency inconsistently applied
objective solicitation requirements between it and other offerors, such as proposal page limits,
formatting requirements, or submission deadlines.” 951 F.3d 1366, 1372 (Fed. Cir. 2020). Only
if a protester meets one of these thresholds can the reviewing court “comparatively and
appropriately analyze the agency’s treatment of proposals without interfering with the agency’s
broad discretion in these matters.” Id. at 1373; see also WellPoint Military Care Corp. v. United
States, 953 F.3d 1373, 1378 (Fed. Cir. 2020). Indeed, having the Court pass judgment regarding
the relative merits of proposals that are not substantively indistinguishable “would give a court
free reign to second-guess the agency’s discretionary determinations underlying its technical
ratings” and “[t]his is not the court’s role.” Office Design Grp., 951 F.3d at 1373 (citing E.W.
Bliss Co., 77 F.3d at 449).
TIA’s disparate treatment argument does not meet the Office Design Group standards.
The agency assigned two weaknesses to TIA’s proposal based on its failure to address both how
TIA would share risk with the government and how much failure should be acceptable to the
government. None of the comparator proposals that TIA cites were found to have failed to
address both of these issues. The deficiencies in the proposals were therefore not substantively
indistinguishable. TIA’s disparate treatment claim accordingly lacks merit.
c. Unequal Assignment of Strengths Under Factor 1
TIA claims that the agency arbitrarily failed to assign three additional strengths to its
proposal that it awarded to other offerors with allegedly similar proposals. These disparate
treatment claims also do not pass muster under Office Design Group, 951 F.3d 1366.
i. Assignment of strength based on awards received
Under Factor 1, the Solicitation required offerors to “[l]ist and describe Awards and
Achievements received that were awarded because of Innovation.” AR Tab 5 at 378. TIA alleges
that it should have been awarded a strength based on its [***]. TIA MJAR at 21–22.
This contention lacks merit. IE-TEK [***]. See AR Tab 143 at 18014 (IE-TEK proposal)
(explaining that “IE-TEK [***]”). TIA’s proposal, on the other hand, listed the awards it
received with little or no explanatory narrative. See AR Tab 160 at 25950. The agency further
noted that IE-TEK also [***]. AR Tab 65d at 5131 (IE-TEK proposal evaluation). Because both
the awards and the information provided about them in the proposals are materially different,
TIA’s disparate treatment argument fails.
ii. Assignment of a strength based on laboratory facilities
Offerors were required to include information in their proposals regarding their
“Investment in Innovation” including their “physical investment in laboratory/testing space,”
and, if applicable, “the company’s ‘Virtual Investment’ such as cloud technology for ‘lab space’
and monetary investment of non-physical assets or virtual models.” AR Tab 5 at 377. TIA argues
that it was arbitrary for the agency not to assign its proposal a strength based on its [***] while at
the same time assigning strengths to Bluestone Logic, Innoplex LLC, and IE-TEK based on their
[***]. This argument also lacks merit.
TIA’s proposal stated that it had [***]. AR Tab 160 at 25942. TIA also referenced [***].
Id.
The proposals of the other offerors that TIA cites—Bluestone Logic, Innoplex LLC, and
IE-TEK—similarly provide explanations about the capabilities of their facilities which are
different from those of TIA. In addition, they supply information about how they propose to use
those capabilities to benefit DISA. For example, Bluestone Logic [***]. AR Tab 133 at 13589
(Bluestone Logic proposal). It specified that [***]. Id.
The proposals submitted by IE-TEK and Innoplex, the other two offerors TIA claims
received more favorable treatment, also contain significant detail about their labs and include
discussion of the benefits DISA would receive as a result of their facilities. See AR Tab 143 at
18011 (IE-TEK proposal) (explaining [***]); id. at 18012 (identifying, specifically, [***]); AR
Tab 142 at 17446 (Innoplex, LLC proposal) (explaining that [***]).
TIA argues that its facility “offered the Agency the same benefits that Bluestone’s, IE-
TEK’s and Innoplex’s did—[***]” and that it was arbitrary and irrational for the agency to “fail to
recognize those benefits in TIA’s offering.” TIA MJAR at 23. But the Court hardly has the
expertise to decide whether the benefits offered by each of these proposals are “the same.” Id.
What the Court can tell is that the proposals of TIA and the other offerors have different features
and are not “substantively indistinguishable.” Office Design Grp., 951 F.3d at 1372. Consistent
with the reasoning in Office Design Group, the Court therefore has no basis for second guessing
the determinations of the agency’s subject matter experts regarding which proposals should be
assigned strengths and which should not. TIA’s challenge based on the agency’s failure to assign
it a strength for its laboratory facilities must therefore be rejected.
iii. Assignment of strengths based on partnerships
Under Factor 1, offerors were required to provide information about their “outreach and
participation,” including their “relationships, partnerships, and/or interactions with fundamental
research and commercial/academic sector.” AR Tab 5 at 377. In its proposal, TIA responded to
this requirement by stating that it was an [***]. AR Tab 160 at 25937. TIA asserts that it should
have been awarded a strength for its partnership with these entities, because another offeror,
Tapestry Technologies (“Tapestry”), received a strength for partnerships with [***] identified.
TIA MJAR at 24.
But again, the proposals are not substantively the same. Tapestry provided [***]. See AR
Tab 159 at 25476–78 (Tapestry Technologies proposal). TIA merely listed the entities by name
and identified them as [***]. AR Tab 160 at 25937. TIA’s disparate treatment claim as to this
aspect of its evaluation therefore also lacks merit.
d. Failure to Assign a Strength Under Factor 4 (Utilization of
Small Business)
Under Factor 4, all offerors were required to submit a “small business participation and
commitment plan” to “be evaluated on the level of proposed participation of U.S. small
businesses in the performance of [the contract].” AR Tab 5 at 382. TIA argues that the agency
acted arbitrarily when it assigned strengths to other offerors’ proposals under Factor 4, based on
their possession of certain certifications, while not awarding a strength to TIA, which had
received the same certifications. TIA MJAR at 24–26.
TIA’s argument lacks merit because it listed the certifications which it claims were
overlooked (such as [***]) in its Factor 1, not in its Factor 4, proposal. See AR Tab 160 at
25950–51 (list of certifications and accreditations); TIA MJAR at 26. In the other instances
where the agency assigned a strength for these certifications, the offerors had listed their
certifications in their Factor 4 proposals. See, e.g., AR Tab 65d at 5022 (DirectViz small
business Factor 4 evaluation); AR Tab 139 at 16113 (DirectViz small business proposal); AR
Tab 65d at 5549 (Sealing Technologies, Inc. small business Factor 4 evaluation); AR Tab 154 at
23912 (Sealing Technologies, Inc. small business proposal). TIA is therefore not similarly
situated to these offerors.
TIA argues that it was nonetheless treated unfairly. It points out that although separate
TEBs were assigned to evaluate each factor, subsequent review boards had access to all volumes
and could have therefore considered the certifications to assign strengths for any factor. TIA
MJAR at 26–27. But the approach TIA argues the agency should have taken would be contrary
to the terms of the Solicitation. The Solicitation expressly stated that “[t]o the greatest extent
possible, each volume shall be written on a stand-alone basis so that its contents may be
evaluated with a minimum of cross-referencing to other volumes of the proposal.” AR Tab 5 at
371 (emphasis supplied). Further, it stated that “[i]nformation required for proposal evaluation
which is not found in its designated volume will be assumed to have been omitted.” Id.
(emphasis supplied). Thus, TIA’s claim that it was entitled to be assigned a strength under Factor
4 based on information it submitted for consideration in its Factor 1 proposal lacks merit.
B. DirectViz Solutions, LLC (Case No. 19-1162)
1. The Agency’s Evaluation and Award Decision
DirectViz Solutions, LLC (“DVS”) submitted its proposal on April 4, 2017. AR Tab 139.
The agency assigned DVS six strengths and three weaknesses under Factor 1. AR Tab 65d
at 5014–17. For Factor 2, the agency deemed two out of three of DVS’s references not relevant.
Id. at 5018. It found relevant the third reference and assigned it a quality rating of “Exceptional.”
Id. The agency assigned DVS four strengths and one weakness under Problem Statement No. 3,
id. at 5019–20, and one strength, one weakness, and one significant weakness for Problem
Statement No. 4, id. at 5020–21. Finally, under Factor 4, DVS earned five strengths and was
assigned no weaknesses. Id. at 5022–23.
The agency assigned DVS’s proposal the following adjectival ratings:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Satisfactory
Factor 3 – PS3 Rating: Outstanding
Factor 3 – PS4 Rating: Marginal
Factor 4 – Small Business: Good
AR Tab 65 at 4659.65 (SSA ratification of SSEB recommendation); AR Tab 65d at 5013–24.
DVS’s evaluated price ranked [***]. AR Tab 65d at 5024.
The SSA observed that DVS’s six strengths under the Innovation factor “are attractive
and while the risk of unsuccessful performance was low, based on the rating, the technical
advantages were not superior to other proposals.” Id. at 4659.67. He considered the “technical
advantages and disadvantages” that came with DVS’s proposed price. Id. He concluded that
because the proposal was “not amongst the most highly rated or the lowest priced” it did “not
represent a best value to the Government,” and “d[id] not merit selection for an award.” Id. at
4659.68.
2. The Debriefing
On July 9, 2019, DISA notified DVS that it had not been selected for an award and it
supplied DVS with a list of the twenty-three successful offerors. AR Tab 69 at 8667–69. DVS
requested a post-award debriefing the same day. AR Tab 73 at 8726.
The next day, on July 10, 2019, the agency sent DVS a document that included a table of
the successful offerors with the adjectival ratings assigned them for each technical factor, along
with their prices. AR Tab 73a at 8731–33. In addition, the document included a summary of the
evaluation process the agency followed. Id. On July 12, DVS sent the agency a follow-up email
asking questions in response to the initial debrief email to which the CO responded on July 22.
AR Tab 100 at 8990 (email from CO to DVS).
3. DVS’s Protest
a. Violation of 10 U.S.C. § 2305
DVS first contends that the agency’s debriefing process did not comport with the
requirements of 10 U.S.C. § 2305(b). Mem. in Supp. of Pl.’s Mot. for J. on the Admin. R. (“DVS
MJAR”) at 15, ECF No. 84. That statute requires that an unsuccessful offeror who makes a
timely request for a debriefing be provided information regarding “the basis for the selection
decision and contract award.” 10 U.S.C. § 2305(b)(5)(A). As pertinent to the present
procurement, the debriefing must include, “at a minimum”: “the agency’s evaluation of the
significant weak or deficient factors in the offeror’s offer”; “the overall evaluated cost and
technical rating of the offer of the contractor awarded the contract and the overall evaluated cost
and technical rating of the offer of the debriefed offeror”; “the overall ranking of all offers”; “a
summary of the rationale for the award”; “reasonable responses to relevant questions posed by
the debriefed offeror as to whether source selection procedures set forth in the solicitation,
applicable regulations, and other applicable authorities were followed by the agency”; and “an
opportunity for a disappointed offeror to submit, within two business days after receiving a post-
award debriefing, additional questions related to the debriefing.” 10 U.S.C. § 2305(b)(5)(B).
DVS argues that—in violation of these statutory requirements—the agency did not
provide it with “the overall ranking of all offers.” DVS MJAR at 15. It notes that “[t]he purpose
of an overall ranking is to demonstrate an Agency’s reasoning behind its award decisions.” Id.
DVS argues that “[u]nless [it] can review the overall rankings following DISA’s award decision,
it will have no assurance that it has identified all arbitrary and capricious actions, abuses of
discretion, and failures to observe procedures required by law relevant to DISA’s agency
actions.” Id. at 16. It also contends that “the requirement that an overall ranking be formulated
implicates agency compliance and enforces discipline during the procurement process.” Id.
The government argues that § 2305(b)(5)(B)(iii) “does no more than require agencies to
provide an offeror’s overall ranking if the ranking exists” and that “[i]t does not require the
agency to create a ranking in the first instance.” Def.’s Cross-Mot. for J. upon the Admin. R. and
Consolidated Resp. in Opp’n to Pls.’ Mots. for J. upon the Admin. R. (“Gov’t MJAR”) at 37–38,
ECF No. 123. And even if DVS did not receive information to which it was entitled during the
debriefing, the government argues, it has not shown that it suffered prejudice as a result. Id. at
35. The Court agrees with the government.
First, the government’s interpretation of the statute is supported by its language and its
purposes. Section 2305(b)(5)(B)(iii) requires the agency to provide “the overall ranking of all
offers,” not “an overall ranking of all offers.” Use of the definite article “the” rather than the
indefinite article “an” indicates to the Court that the “overall ranking” referenced in the statute
refers to a ranking that has already been performed, not a ranking performed for purposes of a
debriefing. Moreover, if the agency did not rank all of the proposals as part of its evaluation
process, then performing such a ranking after the fact would not serve the statutory purpose of
shedding light on the reasoning behind the agency’s award decision.
In this case, there is nothing in the administrative record which reflects that the agency
created a ranked list of all of the proposals it received. 5 And while failing to perform a
comprehensive ranking of offerors may in some circumstances reflect a lack of discipline (as is
suggested by DVS), DVS does not contend that the agency abused its discretion or acted
unreasonably when it did not perform a comprehensive ranking of all of the offerors here.
In any event, even assuming that the agency violated a requirement that it supply DVS
with an overall ranking of all offerors, DVS has failed to show that the purported error was
prejudicial—i.e., that “there was a substantial chance that it would have received [an] award but
for the error.” Bannum, 404 F.3d at 1355 (internal citations omitted). In fact, when DVS posed
questions to the agency after receiving its debriefing documents, it did not mention the ranking
of offerors that it now claims the agency was required to provide to it. That suggests to the Court
that DVS itself did not consider the information particularly critical to its understanding of the
basis for the agency’s decision not to award it a contract. Additionally, as part of the debriefing,
the agency provided DVS with a table that listed the technical ratings and prices of all of the
successful offerors, which would have enabled it to perform a comparison between those ratings
and prices and its own. AR Tab 73a at 8732. And finally, it has since been provided a copy of the
administrative record, including the SSDD which contains extensive information about “the basis
for the selection decision and contract award,” which is what 10 U.S.C. § 2305(b)(5)(A) requires
the agency to “furnish” upon timely request by offerors. The absence of a document that ranks
all of the offerors has not prevented DVS from pressing multiple allegations of error in this
protest.
In short, the Court does not agree that the agency failed to supply DVS with information
to which it was statutorily entitled. But even if a violation occurred, DVS has not shown that it
suffered any resulting prejudice. Therefore, this protest ground lacks merit.
5 The memorandum of the SSAC indicates that the agency began its selection process by
eliminating the forty proposals that received an “Unacceptable” rating in either Factor 1 or
Factor 3. AR Tab 65e at 5784. It then looked at the twenty offerors (of the remaining fifty-nine)
that had been rated “Outstanding” on Factor 1 and reviewed them from the highest to the lowest
technically rated. Id. at 5785. It made awards to eighteen of these offerors upon consideration of
their technical ratings and prices. Id. at 5872. It next moved on to review in order of their overall
technical merits the twenty-five proposals that had at least a “Good” rating on Factor 1, and then
those that had received at least an “Acceptable” rating on Factor 1. Id. at 5826–73. The
administrative record also includes a table prepared by the CO which listed the SSAC’s thirty
highest rated proposals, along with their prices and technical ratings. DVS is listed as number
thirty in technical ratings. AR Tab 54 at 3432–33.
b. Alleged Errors in Evaluation under the Innovation Factor
In its complaint and MJAR, DVS presents several challenges to the agency’s evaluation
of its proposal under the Innovation Factor. For the reasons set forth below, the Court concludes
that each of those challenges lacks merit.
i. Use of unstated criteria to evaluate “Knowledge
Management”
Pursuant to section L.4.2.3.2 of the Solicitation’s instructions, offerors were required to
provide specified information regarding the topic of their “Investment in Innovation” that would
be used to evaluate their proposals under the Innovation factor. AR Tab 5 at 377. As pertinent
here, under the subtopic “Knowledge Management,” the instructions required offerors to
describe “[w]hat . . . the company’s Knowledge Management methodology look[s] like,” as well
as “[w]hat . . . the company’s ‘architecture’ look[s] like and what types of resources are available
in the company architecture.” Id.
In compliance with this instruction, DVS’s proposal includes a [***]. AR Tab 139 at
15970. With respect to the instruction to set forth the “types of resources . . . available in the
company architecture,” AR Tab 5 at 377, DVS stated that it [***], AR Tab 139 at 15970. It also
identified the [***]. Id.
The agency assigned DVS a weakness under the Knowledge Management subtopic. It
reasoned as follows:
The Offeror provides a very brief description of their KM methodology and their
KM architecture – [***] not a knowledge management architecture with the
maturity to manage the strategy, planning, execution, and improvement processes
that the Offeror includes in their methodology. This is a flaw in the Offeror’s
proposal that increases the risk of unsuccessful contract performance because SETI
stakeholders require a mature and proven knowledge management strategy,
architecture, and resources to provide an oversight management framework for
their complex IT developmental efforts.
AR Tab 65d at 5016.
DVS contends that the agency’s assignment of a weakness to its Factor 1 proposal on
these bases violated the “fundamental principle of procurement law that an agency may only
evaluate offerors in accordance with the evaluation criteria stated in the Solicitation.” DVS
MJAR at 16 (citing FAR 15.305(a)). It observes that the “Solicitation does not prescribe any
preferences as to the types of KM resources that should be utilized[, n]or does [it] indicate that
offerors’ proposals will be downgraded for use of any particular KM resources, [***].” Id. at 17.
Nonetheless, according to DVS, the agency treated its proposal [***] as “essentially
disqualifying.” Id. at 16–17; see also id. at 17 (complaining that “DISA failed to put DVS on
notice that a proposal that incorporates [***] would be evaluated negatively”).
The Court is not persuaded that DVS’s proposal was “downgraded,” much less
“essentially disqualified,” simply because it proposed the use of a particular technology
product—[***]. Id. at 16–17. The agency stated that it assigned the proposal a weakness because
it provided only a “very brief description of [DVS’s] KM methodology and . . . KM
architecture,” and because it concluded that the technology DVS identified—[***]—was not “a
knowledge management architecture with the maturity to manage the strategy, planning,
execution, and improvement processes that the Offeror includes in their methodology.” AR Tab
65d at 5016.
DVS points out that the agency assigned awardee DHPC Technologies, Inc. (“DHPC”), a
strength based on its responses to the Knowledge Management subtopic where it proposed a
[***]. Id. at 18 (citing AR Tab 65 at 4659.31). DVS expresses skepticism about “the sincerity of
DISA’s concern over DVS’s proposed KM solution” given this assignment of a strength to
DHPC. See id. But the assignment of a strength to another offeror that proposed the [***], does
not raise questions about DISA’s “sincerity.” Id. Instead, it confirms that it was not DVS’s
proposed [***] in and of itself that was of concern to the IEB. The Court therefore rejects DVS’s
argument that the agency applied an unstated evaluation criterion to its proposal.
ii. Failure to consider other knowledge management
capabilities contained in the proposal
In addition to its argument that the agency applied unstated evaluation criteria to the
evaluation of DVS’s knowledge management proposal, DVS also contends that the agency
arbitrarily ignored that it had “proposed other KM solutions in its proposal as alternatives or
supplements to [***].” Id. at 19. DVS asserts that its proposal “indicates that it utilizes [***].”
Id. (citing AR Tab 139 at 15970). It also contends that its proposal [***]. Id. (citing AR Tab 139
at 15969). Finally, in its Reply, DVS cites language in its proposal which states that its [***].
DVS Reply Mem. In Support of Pl.’s Mot. J. on the Admin. R. (“DVS Reply”) at 15 (quoting
AR Tab 139 at 15969).
The Court lacks the competence, not to mention the authority, to second-guess the agency
regarding the relevance of these other technologies to the Knowledge Management subtopic. See
Benchmade Knife Co. v. United States, 79 Fed. Cl. 731, 740 (2007) (observing that the court
“simply is not equipped to determine whether the differences between the [combat] knives
articulated by Benchmade rise to the level of significant differences, thus rendering the agency’s
determination unreasonable” and that “[a]gencies are entitled to considerable discretion and
deference in matters requiring exercise of technical judgment”). Further, it finds it noteworthy
that DVS itself did not reference these additional technologies in explaining “[w]hat . . . [DVS’s]
Knowledge Management methodology look[s] like,” or “[w]hat . . . the company’s ‘architecture’
look[s] like and what types of resources are available in [its] architecture.” AR Tab 5 at 377.
Instead, the references to the other technologies are included in [***]. AR Tab 139 at 15968–69.
The Court concludes that it was not irrational or improper for the agency to assign a
weakness to DVS’s proposal where it did not identify these other resources as part of its
knowledge management methodology or architecture. DVS’s protest on this basis is therefore
rejected.
iii. Unequal treatment
In conjunction with the evaluation of Factor 1, the Solicitation required offerors to
provide information in their proposals concerning their “Corporate Philosophy/Culture on
Innovation.” AR Tab 5 at 376. As pertinent to this protest, offerors were to “describe the
company’s culture regarding employee[s’] pursuit of Innovation and how they are rewarded for
doing so.” Id. at 377. Offerors were also asked to provide “an example of when the company
failed at attempting to deliver Innovation and what they learned from that failure” as well as
“[w]hat, if any consequences there were to the employee.” Id.
The agency assigned a weakness to DVS’s proposal regarding these requirements,
finding that it did not “provide adequate information on when th[e] company failed at attempting
to deliver Innovation and what, if any, consequences there would be to employees who pursued
an innovative initiative and experienced failure.” AR Tab 65d at 5016. The agency explained that
the example of failure that DVS had provided involved “an operational oversight failure by the[]
Government client, not a company failure in attempting to deliver innovation.” Id.; see also AR
Tab 139 at 15964 (DVS proposal providing example in which its government client “failed to
define a requirement”). The flaw in DVS’s proposal in this regard, the agency concluded,
“increases the risk of failure to be innovative because the Offeror does not demonstrate it
understands there is value in failure and can apply the lessons learned from innovation failure
towards future innovation success.” AR Tab 65d at 5016.
DVS does not disagree with the agency’s conclusion that its proposal was not compliant
with the instructions because it did not identify and discuss one of its own failures to innovate.
Nor does it contend that it was irrational for the agency to determine that this shortcoming
merited the assignment of a weakness. Rather, it argues that it is a victim of “unequal treatment”
because the agency “failed to assign weaknesses” and even “made awards” to other offerors
whom it alleges also “failed to discuss past failures to innovate.” DVS MJAR at 38. These
offerors, it asserts, included Integrated Systems, Inc., Applied Systems Engineering Joint
Venture, ValidaTek, Inc., and DHPC Technologies. Id.
Contrary to DVS’s assertions, however, each of the proposals it cites did in fact include
specific examples of past failures by the offerors. The proposal of Integrated Systems, for
example, discussed [***]. AR Tab 146 at 20098. [***] Id. at 20098–99. Applied Systems
Engineering’s proposal also supplied a specific example of failure [***]. AR Tab 130 at 12351
(describing a [***]). The proposal also addressed in some depth [***]. Id. ValidaTek’s proposal
similarly included examples of failures it experienced along the way in [***]. AR Tab 163 at
27252. And DHPC’s proposal also included specific examples of failures it had experienced
[***]. AR Tab 138 at 15319.
In its reply brief, DVS repeats its inaccurate assertion that the proposal of Applied
Systems did not include any examples of failure. DVS Reply at 16. And while retreating from its
original assertion that the other cited proposals included no examples of failure, it now alleges
that those proposals “provide only vague anecdotes with close to no detail and hardly any
exposition on the lessons learned from failure, as required by the Solicitation.” Id. The Court
does not agree with this characterization of the other proposals. But in any event, as explained
above, to establish disparate treatment, DVS must show that the other proposals were
“substantively indistinguishable” from its own. See Office Design Grp., 951 F.3d at 1373. For
the reasons set forth above, it has failed to make this showing. Therefore, DVS’s allegations of
disparate treatment lack merit.
c. Alleged Errors in Evaluation of Past Performance
As noted earlier, DVS submitted three past performance references, two of which the
agency deemed not relevant. AR Tab 65d at 5018. DVS claims that these determinations were at
odds with the requirements of the Solicitation and were based on unstated evaluation criteria.
DVS MJAR at 20.
At the outset, the Court notes that the determination of whether a particular example of past
performance is relevant involves an exercise of discretion that lies particularly within the
expertise of the procuring agency. The agency’s subject-matter experts are best suited to
determine whether and to what extent the experience reflected in a past performance example
instills confidence that the offeror will successfully perform and meet the needs of the agency on
the contract at issue. See Vanguard Recovery Assistance v. United States, 101 Fed. Cl. 765, 785
(2011) (quoting Al Andalus Gen. Contracts Co. v. United States, 86 Fed. Cl. 252, 264 (2009))
(stating that it is a “‘well-recognized’ principle that ‘an agency’s evaluation of past performance
is entitled to great deference’”); Seaborn Health Care, Inc. v. United States, 101 Fed. Cl. 42, 51
(2011) (finding that when evaluating an offeror’s past performance, “FAR 15.305(a)(2) affords
agencies considerable discretion in deciding what data is most relevant”); FirstLine Transp. Sec.,
Inc. v. United States, 100 Fed. Cl. 359, 396 (2011) (quoting Univ. Research Co. v. United States,
65 Fed. Cl. 500, 505 (2005)) (“When the Court considers a bid protest challenge to a past
performance evaluation conducted in the course of a negotiated procurement, ‘the greatest
deference possible is given to the agency.’”); Todd Constr., L.P. v. United States, 88 Fed. Cl.
235, 247 (2009), aff’d, 656 F.3d 1306 (Fed. Cir. 2011) (“‘[T]he relative merits of the offerors’
past performance is primarily a matter within the contracting agency’s discretion.’”) (internal
citations omitted). Therefore, to justify an interference with the agency’s exercise of discretion
regarding the relevance of an offeror’s past performance, the protester must demonstrate that the
agency determination regarding relevance “lacked any rational basis.” Overstreet Elec. Co., Inc.
v. United States, 59 Fed. Cl. 99, 117 (2003), appeal dismissed, 89 Fed. Appx. 741 (Fed. Cir.
2004) (emphasis and modification in original).
DVS argues that “[h]ad the Agency correctly evaluated References 1 and 2 under the
clearly stated evaluation criteria,” these references “would have received ‘Very Relevant’ ratings
and DVS’s Past Performance proposal would have compelled a ‘Substantial Confidence’ overall
rating and [it] likely would have been selected for award.” DVS MJAR at 25. The Court is not
persuaded by this contention.
Reference 1 involved DVS’s work as [***]. AR Tab 139 at 15985. The PPEB deemed
this work not relevant because, although “[s]ome task areas [identified in the SETI performance
work statement] were addressed,” the referenced contract concerned “an operations and
maintenance requirement” and it demonstrated “little complexity and innovation beyond
standard problem solving.” AR Tab 46 at 1895 (agency’s past performance rating and narrative).
DVS contends that—contrary to the PPEB’s views—the work it performed on
Reference 1 “did require the complexity, innovation or scope to be relevant.” DVS Reply at 12.
It observes that its proposal “list[ed] a variety of solutions that it delivered and implemented
during performance including [***].” Id. Further, DVS states, it “also [***].” Id. (citing Tab 139
at 15983). DVS argues that it therefore “‘did more than just solve problems’” in performing the
work on the Reference 1 contract. It also “provided, inter alia, [***].” Id. (citing Tab 139 at
15988, 15990).
DVS poses a similar challenge to the PPEB’s conclusion that Reference 2, which cited its
work under the [***], was not relevant. AR Tab 46 at 1896. The PPEB observed that this contract
involved “a migration and support effort,” and that “[n]o innovation or description of
complexities [was] provided.” Id. at 1897. It characterized the referenced contract as involving
“standard operations and maintenance work.” Id. Therefore, the PPEB reasoned, “[l]ittle or none
of the scope and magnitude of effort and complexities this solicitation requires is demonstrated.”
Id.
As it did with respect to Reference 1, DVS contends that Reference 2 also should have
been found relevant because, contrary to the agency’s views, the reference revealed its “ability to
perform the SETI work.” DVS Reply at 13. Again, as it did with Reference 1, DVS supplies
language from its past performance proposal describing its technical accomplishments under the
referenced contract, and argues that those accomplishments belie the agency’s finding that
Reference 2 was not relevant. Id.
DVS has not persuaded the Court that the agency’s conclusions regarding References 1
and 2 lacked any rational basis. The PPEB reviewed DVS’s questionnaire and explained why it
found that only one of the three examples provided was relevant. DVS disagrees with the
agency’s judgment that the prior work reflected in the References is not sufficiently similar to
the SETI contract in terms of its scope, magnitude of effort, and complexity. But this Court has
no legal basis for choosing DVS’s opinion over that of the PPEB. In cases that involve the
application of judgment in a highly technical area, the Court’s “main task” instead is to ensure
that the agency “examined the relevant data and articulated a ‘rational connection between the
facts found and the choice made.’” WorldTravelService v. United States, 49 Fed. Cl. 431, 441
(2001) (citing Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43
(1983)). The Court concludes that the agency passed that threshold in this case.
The Court also finds no merit to DVS’s contention that the agency ignored its stated
evaluation criteria in assessing the relevance of References 1 and 2. DVS cites the Solicitation’s
instructions for providing information about past performance which state that “[s]imilar
Contracts may include projects in a variety of sizes, a variety of disciplines, varying degrees of
technical complexity” and that “[s]imilar projects may include,” among others, “[e]fforts in
excess of $3M for [the] restricted category or in excess of $15M for [the] unrestricted category
(if less than these thresholds, justify relevancy to SETI).” AR Tab 5 at 379; see DVS MJAR at
23.
DVS observes that both References 1 and 2 involved efforts well in excess of the
three-million-dollar threshold. Therefore, it argues, the agency determination that they were not
relevant was inconsistent with the instructions in the Solicitation. This argument lacks merit.
While the Instructions stated that similar projects “may” include efforts that exceeded the dollar
thresholds, they do not provide that exceeding the dollar thresholds is sufficient in and of itself to
establish the relevancy of a past performance reference. To the contrary, offerors were advised
that “[s]imilar projects” that they identified “should demonstrate as many of the project
types included in the PWS (either individually or in combination thereof) as possible.” AR Tab 5
at 379. This instruction makes it clear that the determination of relevance would be based on
multiple factors, not just the value of the referenced contract.
Finally, DVS also argues that “even if [its] Past Performance References # 1 and # 2 were
indeed ‘Not Relevant,’ DISA provided no reasoning as to why DVS was assigned a Performance
Confidence Assessment Rating of ‘Satisfactory Confidence,’ when Reference # 3 was rated as
‘Relevant’ and ‘Exceptional.’” DVS MJAR at 24. The Court believes the agency’s reasoning is
self-evident. The Solicitation provides that a “Satisfactory Confidence” rating is appropriate
when “[b]ased on the Offeror’s recent/relevant performance record, the Government has a
reasonable expectation that the Offeror will successfully perform the required effort.” AR Tab 5
at 390. A “Substantial Confidence” rating requires that the agency have “a high expectation that
the Offeror will successfully perform the required effort.” Id. Here, because DVS supplied only
one relevant reference (albeit a reference that had a high-quality rating), the agency concluded
that the one reference gave it a reasonable expectation that DVS could successfully perform
on the SETI contract, but not necessarily a “high” one. Id.
In short, DVS has not shown that the agency’s assessment of the relevance of its past
performance references lacked a rational basis or was inconsistent with the Solicitation’s criteria.
Its protest on these bases therefore lacks merit.
d. Inconsistency of Adjectival Ratings
DVS contends that the agency was “remarkably inconsistent” in its assignment of
adjectival ratings. DVS MJAR at 25. This contention is not supported by the portions of the
record upon which DVS relies.
First, the Court is not persuaded by DVS’s argument that the agency’s assignment of
adjectival ratings to its proposal for Factors 3 and 4 was “facially irrational.” Id. It so contends
because, on the one hand, the PSEB assigned the proposal an “Outstanding” rating under Factor
3, Problem Statement No. 3, and identified four strengths and one weakness, id., but on the other,
the SBEB assigned it a “Good” rating under Factor 4 (Small Business Plan), where the balance of
strengths (five) against weaknesses (none) was even more favorable. Id.
DVS’s argument is unpersuasive for several reasons. First, separate technical evaluation
boards evaluated each of the technical factors. Even if there were inconsistencies, it would be
neither surprising nor evidence of irrationality, given that distinguishing a “Good” proposal from
an “Outstanding” one involves at least some subjective judgment. As GAO has observed, “it is
not unusual for different evaluators, or groups of evaluators, to reach different conclusions and
assign different scores or ratings when evaluating proposals, since both objective and subjective
judgments are involved.” Wellpoint Military Care Corp., B-415222.5, 2019 CPD ¶ 168 (Comp.
Gen. May 2, 2019). 6
In addition, there are material differences between the criteria for assigning adjectival
ratings under Factors 3 and 4. Specifically, the adjectival rating assigned under Factor 3,
Problem Statement No. 3 is based in part upon a balance between the number of strengths and
weaknesses in a proposal. See AR Tab 5 at 391. To secure an “Outstanding” rating under Factor
3, the proposal’s strengths must “far outweigh any weaknesses.” Id. On the other hand, a “Good”
rating for Factor 4 is merited where the proposal indicates “a thorough approach and
understanding of the small business objectives,” while an “Outstanding” rating requires an
“exceptional approach and understanding.” Id. The balance between a proposal’s assigned
strengths and weaknesses is not mentioned. Id.
The other alleged inconsistency DVS identifies—involving the evaluation of Problem
Statement No. 4—is similarly illusory. See DVS MJAR at 26. DVS was assigned an overall
rating of “Marginal” for Problem Statement No. 4 where the agency assessed one strength, one
weakness, and one significant weakness. Awardee SuprTek also received a “Marginal” rating for
Problem Statement No. 4, where it was not assessed any strengths and was assigned two
significant weaknesses. Id.
These results are also consistent with the evaluation criteria and do not evince any
irrationality. The Solicitation expressly provides that a “Marginal” rating would be assigned
under this factor if, among other things, a proposal “has one or more weaknesses which are not
offset by strengths.” AR Tab 5 at 391. Both DVS’s proposal and that of SuprTek meet this
criterion.
In short, there is nothing in the Solicitation requiring that the same adjectival ratings be
assigned across the technical factors based on an accounting of the number of strengths and
weaknesses assigned under each factor. To the contrary, the evaluation criteria vary depending
on the technical factor being assessed. And the evaluators themselves are not the same.
Therefore, DVS’s protest alleging the irrational assignment of adjectival ratings must be rejected.
e. Price Reasonableness
DVS contends that the agency’s price reasonableness analysis was arbitrary and
capricious and violated the applicable provisions of the FAR. DVS MJAR at 27. The Court does
not agree.
“The purpose of a price reasonableness analysis is to prevent the Government from
paying too high a price for a contract.” See, e.g., Patriot Taxiway Indus. v. United States, 98
Fed. Cl. 575, 587 (2011) (citing Ceres Envtl. Servs. v. United States, 97 Fed. Cl. 277, 303 n.15
(2011)). FAR 15.404-1(b)(2) provides that agencies may “use various price analysis techniques”
to ascertain price reasonableness and sets forth a non-exhaustive list of such techniques. And the
Solicitation provided that “[t]he Offeror’s price proposal (fully burdened fixed price labor rates)
will be evaluated, using one or more of the techniques defined in FAR 15.404, in order to
determine if it is reasonable and complete.” AR Tab 5 at 393.
6 GAO decisions are not binding on the Court but may be treated as persuasive authority in light
of GAO’s expertise in the bid protest arena. See Allied Tech. Grp., Inc. v. United States, 649
F.3d 1320, 1331 n.1 (Fed. Cir. 2011).
Performing a comparison of all of the offerors’ proposed prices is one of the two
“preferred” techniques identified in the FAR. See FAR 15.404-1(b)(3). The regulation further
provides, moreover, that “[n]ormally, adequate price competition establishes a fair and
reasonable price.” FAR 15.404-1(b)(2)(i). It is undisputed that in this case, the agency performed
a comparison of the offerors’ prices to each other and also to the independent government
estimate, both of which are techniques described in the FAR.
DVS argues nonetheless that the agency’s price reasonableness analysis was not FAR-
compliant. DVS MJAR at 27–28. First, DVS contends, “the mere presence of competition and
the mere receipt of proposals does not per se establish reasonable prices.” Id. at 28. This is a
straw man. The agency did not base its reasonableness determination solely on the existence of
competition. Rather, as DVS itself acknowledges in its MJAR, the agency “conducted an in
depth price reasonableness evaluation.” Id. at 27.
As the administrative record reveals, the PEB “reviewed all prices for all of the offerors
for completeness and to ensure that all prices were calculated correctly.” AR Tab 65c at 4678
(CO’s memorandum for the record); see also AR Tab 51 (PEB report). Next, it evaluated each
proposal for unbalanced pricing. AR Tab 65c at 4678. The PEB then compared the total
proposed prices to each other and to the IGCE. Id. It identified the average total price of the
ninety-nine offerors ($193,870,790), the highest proposed total price ($381,206,594), the lowest
proposed total price ($99,924,321), and the median total price ($187,899,682). Id. It observed
that the total proposed prices of seventy-nine offerors were lower than the IGCE and twenty
were higher. Id.
The PEB also compared the individual labor rates against each other, using a 2.5 standard
deviation rate to identify a maximum rate for each of the ninety-six labor categories. Id. at 4679.
It provided the CO with a spreadsheet that identified the twenty offerors who proposed at least
one labor rate that exceeded the maximum. Id. It also identified the number of labor rates that
exceeded the maximum for each of the twenty offerors. Id.
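By way of illustration only, the arithmetic behind a screen of this kind can be sketched in a few lines. The rates below are hypothetical and are not drawn from the record; the PEB’s actual spreadsheet and methodology are described only at the level of detail quoted above.

    from statistics import mean, median, pstdev

    # Hypothetical fully loaded rates proposed for a single labor category,
    # one value per offeror (illustrative only; not actual SETI pricing data).
    rates = [112.50, 98.75, 134.20, 101.00, 187.40, 95.60, 121.30]

    # Summary comparison of the kind the PEB report describes: average,
    # median, highest, and lowest proposed values.
    print("average:", round(mean(rates), 2))
    print("median:", round(median(rates), 2))
    print("highest:", max(rates), "lowest:", min(rates))

    # A ceiling set at 2.5 standard deviations above the mean flags
    # unusually high individual rates for the contracting officer's review.
    ceiling = mean(rates) + 2.5 * pstdev(rates)
    flagged = [r for r in rates if r > ceiling]
    print("ceiling:", round(ceiling, 2), "rates above ceiling:", flagged)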
The agency therefore did not rely upon the mere presence of competition to conduct its
analysis. It performed precisely the type of comparison of the proposals that the FAR
recommends as a preferred technique. Moreover, the agency did not “ignore[] the results” of its
own analysis or “ma[k]e awards to offerors that failed [its] price reasonableness test,” as DVS
claims. DVS MJAR at 27. To the contrary, the CO acknowledged that there was a “large
variance among the total proposed prices and the individual labor rates.” AR Tab 65c at 4679.
But the variance, he noted, “was not unanticipated” and, in his view, “did not cause a concern
that the government would pay an unreasonably high price.” Id. He explained that the
Solicitation had “introduced a great amount of risk of paying a high, but not unreasonable, price,
based upon the need to obtain a range of potentially innovative solutions.” Id. The PWS, he
further elucidated, “described the complex and unknown nature of the work that would be
executed for worldwide missions” as well as the difficulty of “fully anticipating how technical
requirements and individual programs will evolve over the life of the Contract vehicle.” Id.
Given these and other considerations set forth in his memorandum, the CO concluded
that the large variance between the highest and lowest total proposed prices and individual labor
rates did not, in and of itself, establish that any particular prices were not reasonable or that
the highest priced proposals were unreasonably high. Id. He reiterated that the Solicitation
“included significant risk for any offeror,” explaining that “[o]fferors had to make business
decisions on how to strategize their pricing which led to the wide dispersion of prices, and the
Government accepted the potential risk of paying higher prices, but not unreasonable prices, for
higher quality solutions.” Id. at 4680. Further, “[w]ith the variety of requirements that can and
will be executed under the SETI, and the risk of complexities, unknowns, time, location, and
magnitude,” the Solicitation made clear “that non-cost factors, when combined, were
significantly more important than price for this contract and that it may have to pay more for and
was willing to pay more for higher technically rated proposals.” Id. He concluded that “the risk
of paying too much for requirements is low compared to the needs of the Government,
particularly given the mitigating factor that there will be competition to refine prices at the [task]
order level.” Id.
Finally, the CO noted that a number of offerors had proposed individual labor rates that
exceeded the maximum (based on the 2.5 standard deviations). Id. Nonetheless he concluded that
“if the SSA would be willing to pay those higher rates for higher technically rated offerors, the
SSA should have the option to do so,” and should document its reasoning in the SSDD “with the
rationale to include a risk analysis for paying higher prices for a higher rated proposal.” Id. at
4681.
Despite the CO’s extensive explanation of his conclusions, DVS argues that his
underlying rationale wrongly “confuses price reasonableness with a best value trade off.” DVS
MJAR at 29. It notes that “[p]rice reasonableness is not concerned with whether the government
may be willing to pay higher rates for higher technically-rated offerors—that is the concern of a
best value trade off.” Id. “Rather,” it says, “price reasonableness is concerned with whether the
prices proposed by the offeror are too high for the services proposed”; “in other words . . .
whether the prices exceed fair market rates or are otherwise unfair to the government.” Id. at 29–
30.
The Court finds this argument unpersuasive. The CO did not suggest that the test of price
reasonableness was how much the agency was willing to pay. To the contrary, after conducting
his in-depth analysis of the offerors’ prices relative to the IGCE and each other, he left room for
the agency to determine whether the prices proposed were reasonable in consideration of the
particular attributes of each offeror’s proposal. And in the end—while DVS attacks the agency’s
analysis—it is unable to identify a successful offeror whose price proposal was irrationally found
reasonable. At best, it notes that [***]. DVS MJAR at 28. But there were ninety-six different
labor categories for which rates had to be proposed and other offerors exceeded the maximum
labor rates in far more categories. See AR Tab 65c at 4679 (listing offerors who exceeded labor
rates in 122, 58, 57, 52, and 41 labor categories). 7 Further, [***] received an “Outstanding” rating
for innovation, but its total price ([***]) was only slightly above the average total price
($193,870,790) of the ninety-nine offerors who remained in the competition following the initial
cut. See AR Tab 65d at 5645.
7 None of the offerors referenced in the parenthetical received contract awards.
Similarly, DVS complains that awardees [***], proposed prices that were “well over” the
IGCE of $221 million and, as a result, they “could not be reasonably considered to be fair and
reasonable.” DVS MJAR at 30. In its reply, DVS also cites the prices of RedTeam Engineering
and ValidaTek. DVS Reply at 3. But all of these offerors’ proposals ranged from $221 to $240
million, which means that they were all less than ten percent above the IGCE. Id. Further, each of
these proposals received “Outstanding” ratings for Innovation. AR Tab 65 at 4659.6; AR Tab
65d at 5488. Therefore, DVS has not established that the agency acted irrationally when it found
that these offerors’ price proposals were not unreasonable.
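As a rough arithmetic check, using only the rounded figures quoted above (an IGCE of approximately $221 million and a highest cited proposal of approximately $240 million; the precise evaluated prices are redacted):

\[
\frac{240 - 221}{221} \approx 0.086, \quad \text{i.e., roughly } 8.6\% < 10\%.
\]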
In short, the agency conducted a reasonable price analysis in compliance with the FAR. It
compared the offerors’ total prices as well as their labor rates. The CO explained why—in light
of the particular characteristics of this procurement—the variations identified in the price
comparison did not, in and of themselves, create a risk that the government would pay too much
for the services. That is sufficient to establish that its price reasonableness analysis was neither
arbitrary and capricious, nor contrary to law. See Technatomy Corp. v. United States, 144 Fed.
Cl. 388, 390 (2019) (holding that “DISA conducted a meaningful price reasonableness analysis,
in compliance with 48 C.F.R. § 15.404-1(b)(2), by comparing the prices of offerors to each
other’s, to the average price, to the independent government cost estimate, and to the prices
under other contracts, and by explaining that variations were due to differing risk preferences
and technical approaches”); cf. Multimax, Inc., B-298249.6, at 8 (Comp. Gen. Oct. 24, 2006)
(finding price reasonableness analysis deficient where there was “no indication that the agency
ever reviewed the results of [its] formula to assure that the prices at the extreme end of the ranges
reflected reasonable pricing” but “rather . . . mechanistically applied the formula and accepted
the results without further analysis”).
f. The Agency’s Unbalanced Pricing Analysis
The Solicitation provided that the agency would review the offerors’ proposals for
unbalanced pricing, which “exists when, despite an acceptable total evaluated price, the price of
one or more contract line items is significantly over or understated.” FAR 15.404-1(g)(1). The
mere existence of mathematical imbalance in pricing does not necessarily require the rejection of
a proposal. See Munilla Constr. Mgmt., LLC v. United States, 130 Fed. Cl. 635, 652 (2017)
(citing Al Ghanim Combined Grp. v. United States, 56 Fed. Cl. 502, 515 n.17 (2003))
(distinguishing between mathematically and materially unbalanced proposals). Rather, where the
agency’s analysis indicates that unbalanced pricing exists, the CO must: 1) “[c]onsider the risks
to the Government associated with the unbalanced pricing in determining the competitive range
and in making the source selection decision”; and 2) “[c]onsider whether award of the contract
will result in paying unreasonably high prices for contract performance.” FAR 15.404-1(g)(2).
DVS contends that the agency’s unbalanced pricing analysis with respect to the proposals
of awardees Riverside Engineering Joint Venture and DHPC was arbitrary, capricious, and
contrary to law. DVS MJAR at 31. The Court disagrees.
For awardee DHPC, the PEB identified potentially unbalanced pricing within the Human
Factors Engineer labor category because the fully loaded rate for the Junior Engineer was higher
than the rate for the Senior Engineer. See AR Tab 51 at 3320–21. The PEB similarly identified
potential unbalanced pricing in Riverside’s proposal within the Systems Engineer labor category,
where the rate for the Senior Engineer was lower than the rates for both the Junior and Mid-
Level engineers. Id. at 3309.
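The screen described above amounts to checking, within each labor category, whether any more senior tier carries a lower fully loaded rate than a more junior one. A minimal sketch, using hypothetical rates rather than any offeror’s actual pricing, might look like this:

    from itertools import combinations

    # Hypothetical fully loaded hourly rates by labor category and seniority
    # tier (illustrative only; not taken from any proposal in the record).
    proposed_rates = {
        "Human Factors Engineer": {"Junior": 145.00, "Mid": 150.00, "Senior": 139.00},
        "Systems Engineer": {"Junior": 110.00, "Mid": 128.00, "Senior": 149.00},
    }

    TIERS = ["Junior", "Mid", "Senior"]  # ordered from least to most senior

    def inversion_flags(categories):
        """Return (category, junior tier, senior tier) triples where the more
        senior tier is priced below the more junior one -- the kind of facial
        imbalance the PEB noted."""
        flags = []
        for category, rates in categories.items():
            for junior, senior in combinations(TIERS, 2):
                if rates[senior] < rates[junior]:
                    flags.append((category, junior, senior))
        return flags

    print(inversion_flags(proposed_rates))

As the FAR provision quoted above indicates, a hit on such a screen prompts further review of performance and pricing risk; it does not by itself require rejection of the proposal.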
But the CO found that these examples of unbalanced pricing did not create an
unacceptable level of risk. Id. at 3309, 3320–21. First, the CO expressed doubt as to whether
there was actually an imbalance between the fully loaded rates for Junior and Senior engineers
under Riverside’s proposal. Id. at 3309. He suggested that the apparent imbalance may have been
the result of certain accounting methods. Id. And in any event, he found, to the extent that the
discrepant labor rates in either Riverside or DHPC’s proposals represented unbalanced pricing,
they involved only one of ninety-six applicable individual labor categories and did not “render
the entire proposal unbalanced.” AR Tab 52a at 3386, 3387 (memorandum for the record on fair
and reasonable price). Further, the CO concluded that “the unbalanced pricing for one labor
category set would not pose an unacceptable performance risk, and is unlikely to result in the
Government paying an unreasonably high price.” Id. at 3387.
DVS asserts that “[o]verall, [the agency’s] assessment of risk, or lack thereof, with
respect to [DHPC] and Riverside’s unbalanced pricing was woefully inadequate.” DVS MJAR at
33. The Court disagrees. The agency acknowledged the instance of apparent unbalanced pricing
in the labor rates in each of the two proposals and found that these instances did not establish that
the proposals overall were unbalanced. It also explained why the imbalances did not create the
risks of poor performance or unreasonably high prices against which an unbalanced pricing
analysis guards. It found any risks presented acceptable because for each proposal there was an
imbalance in only one of ninety-six labor categories.
Further explanation was not required. DVS’s challenge based on unbalanced pricing
lacks merit.
g. The Agency’s Best Value Tradeoff Analysis and Source
Selection Decision
DVS next argues that the agency “failed to conduct a best-value tradeoff analysis as
required by the Solicitation.” DVS MJAR at 33. It contends that the agency did not
“meaningfully” consider price, but instead “mechanically made award to the offerors whose
proposals exhibited, in descending order, the best combination of adjectival ratings under the
non-price factors.” Id. at 34, 35. It similarly claims that the agency improperly placed “rigid
reliance [on] adjectival ratings, without any engagement with the relative merits behind those
ratings or any consideration of the price differences.” DVS Reply at 8. In particular, DVS argues,
its overall non-price technical ratings were not significantly different from those of several
awardees whose price proposals were higher.
DVS’s assertion that the agency did not engage in a reasonable best value tradeoff
process is not supported by the record. For example, DVS cites the proposal of awardee
Synaptek. DVS MJAR at 34. DVS’s total price was [***] than Synaptek’s. Id. And DVS
received the same adjectival ratings as Synaptek, except that Synaptek was assigned a “Good”
rating for Problem Statement No. 4, while DVS was assigned a “Marginal” rating. Id. DVS
argues that it was unreasonable for the agency to [***] for what DVS characterizes as a “modest
difference in technical ratings.” Id. DVS also objects to the agency’s decision to make awards to
three offerors that received “Neutral” past performance ratings, to two of which it paid “substantial
price premium[s],” in one case more than $50 million, or thirty-one percent. Id. at 36. It contends
that “[p]aying this sort of premium for an entirely unproven contractor is irrational.” Id.
The Court rejects DVS’s reliance upon these examples as relevant comparators. The
technical rating for Synaptek’s proposal, for example, was materially better than DVS’s rating
because DVS received only a “Marginal” rating for Problem Statement No. 4, while Synaptek’s
rating of “Good” was two levels higher. And Synaptek’s [***], which does not strike the Court
as a facially irrational “premium” for the agency to decide to pay over the life of a ten-year
contract for a technically superior proposal. Id.
Nor does it agree with DVS’s related contention that the agency failed to provide an
adequate explanation for its tradeoff decisions. The record reflects that all of the awardees’ non-
price proposals were rated more highly than DVS’s proposal. In particular, the other offerors to
whom DVS compares itself received an “Outstanding” in Innovation, or, if they received the
lower “Good” rating, did not receive a “Marginal” rating in any other category as DVS did.
Indeed, the administrative record is replete with evidence that the agency engaged in a
process for making its best value tradeoff decisions that was reasonable and consistent with the
mandates of the Solicitation. The PEB thus vetted and flagged the proposals with the highest
total prices. AR Tab 65c at 4681. None of those proposals were selected for an award. The SSEB
reviewed the technical ratings assigned to all of the offerors, summarized them, and described
where each offeror ranked in terms of its overall price. See AR Tab 65d.
The record reflects that the SSAC and SSA analyzed and considered price but that they
prioritized the proposals’ technical ratings, especially Innovation, over price. This is consistent
with the Solicitation, which provided that “[w]hen combined all non-price factors are
significantly more important than price,” and which frequently referenced the importance of
innovation to meeting the government’s needs. AR Tab 5 at 395.
Finally, the Court concludes that the agency did not place excessive reliance upon
adjectival ratings in making its tradeoff decisions. To the contrary, consistent with the
requirements of FAR 15.308, the SSA’s decision was based on a comparative assessment of
proposals against all source selection criteria stated in the Solicitation. See AR Tab 65. The SSA
relied upon the reports and analyses prepared by the TEBs, the SSEB, and the SSAC and made
an independent judgment as to which offerors would receive an award. Id. at 4659.3, 4659.104.
The SSA did not rely exclusively on adjectival ratings to compare the proposals’ relative merits;
to the contrary, it appropriately used the ratings as guideposts but also considered the strengths
and weaknesses of each proposal. It generated thousands of pages of documentation that explains
why each of the ratings was assigned. See Wackenhut Servs., Inc. v. United States, 85 Fed. Cl.
273, 297 (2008) (citing Opti-Lite Optical, 1999 WL 152145, at *3 (Comp. Gen. 1999))
(“[A]djectival ratings and point scores are useful as guides to decision-making . . . but . . . must
be supported by documentation of the relative differences between the proposals, their strengths,
weaknesses and risks, and the basis and reasons for the . . . decision.”). 8
Indeed, the Court cannot imagine how the agency could have conducted the best value
tradeoff in this procurement without relying on the adjectival ratings to guide the comparison of
the proposals. DVS’s challenge to the agency’s best value tradeoff process, like its other
disagreements with the agency’s discretionary judgments, is without merit.
C. Tenica (Case No. 19-1169)
1. The Agency’s Evaluation
Tenica timely submitted its proposal on April 4, 2017. AR Tab 161. The TEB assigned
the proposal eleven strengths, three weaknesses, and two significant weaknesses under Factor 1.
AR Tab 65d at 5685–89. For Past Performance, the PPEB found all three of Tenica’s references
“Recent.” Id. at 5690. Reference 2 was found “Somewhat Relevant,” but References 1 and 3
were both deemed “Not Relevant.” Id. The TEB assigned Tenica’s Problem Statement No. 3
three strengths and no weaknesses. Id. at 5691. Its Problem Statement No. 4 received two
strengths, one weakness, and one significant weakness. Id. at 5692.
Based on the evaluations, the TEBs assigned Tenica’s proposal a “Good” rating for
Factor 1, a “Neutral” rating for Factor 2, an “Outstanding” rating for Problem Statement No. 3, a
“Marginal” rating for Problem Statement No. 4, and an “Outstanding” rating for Factor 4. Id. at
5684. On review, the SSEB made one change, upgrading the rating for Problem Statement No. 4
from “Marginal” to “Acceptable.” Id. at 5696. With that change, the agency assigned Tenica’s
proposal the following overall ratings:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Neutral
Factor 3 – PS3 Rating: Outstanding
Factor 3 – PS4 Rating: Acceptable
Factor 4 – Small Business: Outstanding
AR Tab 65 at 4659.83 (SSA ratification of SSEB recommendation); AR Tab 65d at 5696.
Tenica’s evaluated price ranked [***]. AR Tab 65 at 4659.83.
The SSAC recommended the agency not award a contract to Tenica because its “proposal
was priced at the higher end of all of the Offerors” and “was not one of the highest technically
rated.” AR Tab 65e at 5858. The SSA considered “the benefits offered[,] the risks presented, and
the prices proposed” by Tenica’s proposal and “agree[d] with the SSAC that th[e] proposal is not
amongst the most highly rated or the lowest priced and does not represent a best value to the
Government.” AR Tab 65 at 4659.86. Therefore, the SSA concluded, “it does not merit selection
for award.” Id.
8 The Court notes that at the end of its MJAR, DVS makes a catchall argument that its protest
should be sustained because the agency failed to document its evaluation, best value tradeoff,
and source selection decisions. DVS MJAR at 37–38. Because its argument relies upon alleged
examples of inadequate documentation already addressed (and rejected) earlier in this opinion,
the Court deems it unnecessary to separately address this catchall argument.
2. Tenica’s Protest
Tenica challenges its ratings on Factors 1 and 2, and for Problem Statement No. 4. In
addition, it contends that the agency conducted an unreasonable price evaluation. For the reasons
set forth below, the Court concludes that none of these challenges provide a basis for sustaining
Tenica’s protest.
a. Factor 1 Evaluation
The agency assigned Tenica a “Good” rating for Factor 1 based on its determination that
the proposal contained eleven strengths, three weaknesses, and two significant weaknesses. AR
Tab 65 at 4659.83. Tenica claims that four of the weaknesses the agency assigned were based on
its alleged failure to provide certain information that its proposal did, in fact, contain. Tenica
Mot. for and Mem. in Support of Pl.’s Request for J. on the Admin. R. (“Tenica MJAR”) at 22,
ECF No. 71. It challenges the assignment of the fifth weakness as an example of disparate
treatment. Id. at 27. Tenica contends that if these weaknesses were removed it would be entitled
to an “Outstanding” rating under Factor 1. Tenica’s contentions lack merit.
Tenica first contends that the agency wrongly assigned it a weakness for failure to
provide required information about how its employees are rewarded for or incentivized to pursue
innovation. Tenica MJAR at 23 (quoting AR Tab 161 at 26093–94). It cites to portions of its
proposal which it alleges satisfied this requirement. But the portions of its proposal that it claims
refute the agency’s assessment appear in its Executive Summary, which is contained in Volume
I, not in Volume II, Tab C, which is the portion of the proposal that the Solicitation instructs
offerors to use to supply the information required for the Factor 1 evaluation. See AR Tab 5 at
369. The Solicitation further provides that “[i]nformation required for proposal evaluation which
is not found in its designated volume will be assumed to have been omitted from the proposal.”
Id. at 371.
In any event, it was not unreasonable for the agency to decide that neither the quoted
statements in the Executive Summary, nor the similar discussion contained in Volume II, Tab C,
AR Tab 161 at 26156–57, were responsive to the questions about how Tenica incentivizes or
rewards employees. The cited portions of the executive summary narrative for Reference 1, for
example, describe [***] but do not address what incentives or rewards were provided to the staff
for doing so. Id. at 26093. Similarly, the narrative for Reference 2 states that [***]. Id. at 26157.
But Tenica again supplies no description of incentives or rewards for the employees who came
up with the idea to do so. And finally, the language pertaining to Reference 3 states that [***].
Id. at 26094. It does not describe incentives or rewards for the employees—it merely [***]. See
id. (stating that [***]).
The agency also assessed a weakness based on Tenica’s failure to provide “adequate
methods for measuring the effectiveness of Innovation efforts.” AR Tab 65d at 5688. According
to Tenica, this topic is addressed in its proposal. Tenica MJAR at 23–24 (quoting AR Tab 161 at
26158–59). But the agency did not assess a weakness based on Tenica’s failure to address the
topic at all; it assessed a weakness because it concluded that the narrative did not set forth
methods for measuring the effectiveness of innovation efforts that were “adequate” to the task.
AR Tab 65d at 5688. It explained that “without a clear method” for measuring effectiveness,
Tenica would be “less likely to succeed in performing SETI tasks that require mature and
detailed measures to manage complex system development in future SETI task orders.” AR Tab
65d at 5688.
Tenica challenges the agency’s assessment of a third weakness based on its failure to
provide information about its employee retention rates. Tenica MJAR at 24–25. It cites to the
Management Information portion of its proposal, which included a [***]. AR Tab 161 at 26125.
But this discussion does not appear in Volume II, Tab C and therefore was legitimately not
considered by the agency in rating the proposal under Factor 1. See id. Further, the assessed
weakness was based on both the failure to provide the retention rate and the failure to provide
information about how Tenica tracks employees’ education and training. AR Tab 65d at 5688.
Tenica’s MJAR does not address the latter issue.
The fourth weakness Tenica claims was wrongly assessed concerned its articulation of its
approach to risk and risk management. Tenica MJAR at 25–26. It mischaracterizes the basis for the
agency’s assessment of a significant weakness regarding this issue. The agency did not assign a
significant weakness because Tenica “[f]ail[ed] to provide information as to what Tenica’s
approaches to risk management are.” See id. at 25. Rather, the agency found that the approach
was not “clearly articulate[d],” that it could not discern an “overarching risk governance policy”
based on the “minor examples” provided in the proposal, and that—although the proposal
contained a graphic entitled “Risk Reduction”—it did not provide an explanation of [***]. AR
Tab 65d at 5688. Tenica’s motion, which simply reproduces the narrative in its proposal, does
not address these concerns at all.
Tenica contends that it was improper for the agency to assess a weakness in its proposal
based on its failure to “provide information on when it failed at attempting to deliver
Innovation[ and] what it learned from that failure.” Tenica MJAR at 27. Tenica justifies not
addressing this point at all by asserting that it has “never failed at attempting to deliver
innovation.” Id. Of course, Tenica did not so state in its proposal. And it was not unreasonable
for the agency to conclude that Tenica simply failed to address the question posed, given the
improbability of its representation that it has never had an innovation failure. 9
9 According to Tenica, the Solicitation did not require offerors to expressly note the
inapplicability of the question. Tenica MJAR at 27. In fact, Tenica argues, the Solicitation
“expressly contemplates” that some offerors will not have any examples to provide. See id.
(citing AR Tab 44 at 1537). This argument is not supported by the language of the Solicitation
upon which Tenica relies. The Solicitation instructs offerors to “provide an example of when the
company failed at attempting to deliver Innovation and what they learned from that failure.” AR
Tab 5 at 377. It then asks “[w]hat, if any, consequences were there to the employee.” Id. The
phrase “if any” refers to consequences to employees for failures to deliver innovation; it does
not refer to unsuccessful attempts to deliver innovation.
Finally, Tenica alleges that it was a victim of disparate treatment with respect to the
assignment of this weakness. Tenica MJAR at 27. It argues that several awardees who did not
discuss failed attempts at innovation in their proposals ([***]) were not assigned a weakness on
that basis. Id. For the reasons set forth above in discussing an essentially identical disparate
treatment claim made by DVS, the Court finds this contention without merit.
b. Past Performance Rating
The Solicitation provides that the “past performance evaluation factor assesses the degree
of confidence the Government has in an Offeror’s ability to supply solutions and services to meet
users’ needs, based on a demonstrated record of performance.” AR Tab 5 at 388 (Section M.2.3
Factor 2: Past Performance). The confidence determination would be based on an analysis of the
recency, relevancy, and quality of up to three past performance references for each offeror. Id.
Tenica received a “Neutral Confidence” rating because its “performance record is so
sparse that no meaningful confidence assessment rating can be reasonably assigned.” AR Tab
65d at 5690. The rating was based on the agency’s determination that two of Tenica’s three
references were “Not Relevant.” Id.
Tenica challenges its “Neutral” rating on several bases. Tenica MJAR at 14–19. First,
alleging disparate treatment, Tenica contends that several of the awardees “received the exact
same comments for their references as Tenica did, yet for unknown reasons these other proposers
received materially higher relevance ratings.” Id. at 15. Second, it argues that the agency’s
classification of two of its references as “Not Relevant” was “internally inconsistent, and
arbitrary on its face” because “the Agency also found these ‘non-relevant’ references to be both
very recent and of exceptional quality.” Id. at 15, 20. For the reasons set forth below, the Court
concludes that each of these arguments lacks merit.
i. Disparate treatment
As noted, the agency found two of Tenica’s past performance references not relevant.
Reference 1 was deemed not relevant because “[b]ased upon the narrative, detail as to the work
accomplished by [Tenica] could not be determined.” AR Tab 171 at 29383. The agency further
explained that Tenica’s narrative [***] and that “[t]here was little to no demonstration of the
scope and magnitude of effort and complexities this solicitation requires.” Id. And while the
proposal’s narrative stated that Tenica [***] the agency concluded that this description did not
“speak to the complexity involved or how the work is innovative.” Id. Further, the PPEB found
that “[t]he task areas [Tenica] cited were not substantiated in the narrative.” Id.
The agency also found Reference 3 “Not Relevant” because it concerned a “sunsetting
mission which [did] not cover any task areas” and “involved little or none of the scope and
magnitude of effort and complexities this solicitation requires.” Id. at 29389. Reference 2, on the
other hand, was rated “Somewhat Relevant” because while “[s]ome of the scope and complexity
anticipated on SETI is demonstrated,” the narrative did not speak to innovation “in any detail,”
and “[did] not state how or why it is innovative.” Id. at 29386.
Tenica alleges the PPEB evaluations of three other offerors that received contract awards
(Volant, Innovative Government Solutions, and Synaptek) contained “the exact same
critiques [as those] underlying Tenica’s Factor 2 ‘Neutral Confidence’ scores.” Tenica MJAR at
16. The fact that these proposals received higher confidence ratings than Tenica’s, it argues,
reflects disparate treatment.
Tenica has failed to persuade the Court that the agency “unreasonably downgraded its
proposal for deficiencies that were ‘substantively indistinguishable’ or nearly identical from
those contained in other proposals.” Office Design Grp., 951 F.3d at 1372. At most, Tenica has
shown that the PPEB employed some similar language in its evaluations of the other offerors’
references and in its evaluation of Tenica’s references. But beyond that, Tenica
ignores the material differences between the references revealed by the PPEB’s evaluation.
For example, Tenica observes that Volant’s proposal received a rating of “Substantial
Confidence” notwithstanding that the PPEB found that one of Volant’s references (like Tenica’s)
did not “clearly demonstrate the specifics of innovation” and did not “provide enough detail in
the description as to what the offeror actually performed.” Tenica MJAR at 16–17. But while the
agency observed that Volant did not “clearly demonstrate” how the work it had performed under
Reference 1 was innovative, it nonetheless found the reference relevant because it showed that
Volant “was able to use newer technology in a complex environment,” because “[m]any task
areas [were] addressed,” and because the “scope and magnitude of effort and complexities
required to develop, deploy and integrate” under the referenced contract “is similar to SETI.” AR
Tab 171 at 29457.
Similarly, the agency criticized Volant’s Reference 2 on the grounds that “[t]ask areas
[were] addressed broadly, but there is not enough detail in the description as to what the offeror
performed.” Id. at 29458. But Tenica again ignores that the agency also found that Volant’s
Reference 2 demonstrated a “[s]imilar scope and magnitude of effort and complexities [to the
SETI contract] . . . through the breadth of work that was required,” and that the reference
demonstrated at least “a minor innovation . . . through the development of a prototype that was
subsequently deployed in multiple environments which is similar to SETI.” Id.
Tenica’s effort to compare the evaluation of its past performance proposal to that of
Innovative Government Solutions (“IGS”) is similarly unavailing. Tenica finds it “curious” that
IGS received a substantial confidence rating given what it calls the agency’s “admissions that the
references themselves demonstrated important areas in which the offeror could not demonstrate
relevance.” Tenica MJAR at 17. It observes that the agency “critiqued one of [Tenica’s]
references as only demonstrating innovative practices with limited detail, and critiqued another
reference as only speaking to a developed strategy for innovation but not to the actual
deployment of innovation.” Id.
But in selectively citing these critiques of IGS’s performance references, Tenica again
glosses over the aspects of the proposal that the agency relied upon to find them relevant. For
example, the agency deemed IGS’s Reference 1 “Somewhat Relevant” because it concluded that
[***]. AR Tab 171 at 28826. To be sure, the agency also critiqued the fact that [***]. Id.
Nonetheless, it also found that “[***] which involved some of the scope and magnitude of effort
and complexities this solicitation requires.” Id. Similarly, the agency found IGS’s Reference 2
“Relevant” notwithstanding that “[t]he narrative demonstrated innovative practices with limited
detail” because it “describe[d] the deployment of a [***] resulting in some improvements” and
“involved [a] similar . . . scope and magnitude of effort and complexities [as] this solicitation
requires.” Id. at 28827. And its Reference 3 was found “Very Relevant” because under the
referenced contract IGS had “[a]rchitected, engineered and developed a complex DoD enterprise
network which is nearly identical to the scope and complexity SETI is anticipated to require.” Id.
at 28830.
Finally, Tenica observes that Synaptek received a “Satisfactory Confidence” rating
despite the PPEB’s conclusion that “one of its references could not be evaluated due to non-
compliance and another of its references failed to demonstrate innovation.” Tenica MJAR at 17.
Tenica is correct that the agency declined to consider Reference 1 in Synaptek’s proposal. And
while it is also true that the agency concluded that Reference 2 did not demonstrate innovation,
the agency nonetheless found it “Somewhat Relevant” because the “[p]ast performance effort
involved some of the scope and magnitude of effort and complexities this solicitation requires.”
AR Tab 171 at 29301. Synaptek’s “Satisfactory” confidence rating was therefore based on the fact that—
unlike Tenica—its third performance reference was deemed “Relevant.” Id. at 29304. That
determination was based on the agency’s conclusion that the reference “demonstrate[d] a project
of . . . complexity for a standard deployment across a joint environment and involved some
innovation through the automation of security” which “involved similar scope and magnitude of
effort and complexities SETI is anticipated to require.” Id.
In short, the performance references of Tenica and its proposed comparators are
materially different. In light of those differences, the agency’s evaluation of Tenica’s references
is not inconsistent with its evaluation of the references of the comparators. Tenica’s disparate
treatment argument therefore lacks merit.
ii. Internal inconsistency
In addition to disparate treatment, Tenica alleges that the agency’s classification of two of
its references as “Not Relevant” was “internally inconsistent, and arbitrary on its face” because
“the Agency also found these ‘non-relevant’ references to be both very recent and of exceptional
quality.” Tenica MJAR at 15, 20. Specifically, Tenica argues, if there was enough detail in the
references to establish that its performance was “Exceptional,” then there must have been enough
detail to show that they were also relevant.
The Court disagrees. The determination of relevance requires that the referenced “effort
involve[] similar scope and magnitude of effort and complexities th[e] solicitation requires.”
Gov’t MJAR at 72 (quoting AR Tab 5 at 389). The quality assessment, on the other hand,
focused upon whether “[c]ontractor performance . . . met . . . or exceeded [contractual
requirements] to the Government’s benefit,” based on the number of problems that occurred
during performance and the effectiveness of “corrective actions” taken by the contractor. AR
Tab 5 at 389. It is therefore entirely possible for the agency to conclude that a reference reflected
high quality work but that Tenica had failed to supply sufficient detail to establish that the work
performed was of a scope, magnitude, and complexity comparable to the work required to
perform on the SETI contract. And to the extent that Tenica challenges the agency’s conclusions
that it provided insufficient information to establish the relevance of References 1 and 3, the
Court notes that it was up to the agency’s subject-matter experts to assess whether the detail
provided established the relevance of the references. Tenica’s mere disagreement with that
assessment does not provide grounds for setting it aside.
Tenica’s claim regarding the propriety of the agency’s characterization of Reference 3 as
“Not Relevant” based on the fact that the project involved a sunsetting mission is equally without
merit. Tenica MJAR at 20 (quoting AR Tab 46 at 2560). The Court agrees with Tenica that
“neither the Solicitation nor the Acquisition Plan contain any prohibition against references
involving ‘sun-setting’ projects.” Tenica MJAR at 20. But the agency did not find the reference
“Not Relevant” because it involved a sunsetting mission; it stated that the reference was not
relevant because it did not cover the task areas identified in the Solicitation and it was not
comparable to the SETI contract in terms of its scope, magnitude, or complexity of effort. AR
Tab 171 at 29389. Accordingly, the Court rejects Tenica’s contention that the agency’s relevancy
determinations were arbitrary and capricious or internally inconsistent.
c. Problem Statement No. 4 Evaluation
The agency assigned Tenica two strengths, one weakness, and one significant weakness
under Problem Statement No. 4. AR Tab 65d at 5692. The weakness assigned was based on the
agency’s conclusion that Tenica did not “adequately address [its] assumptions and techniques to
be used in completing this effort, including those related to the various disciplines involved.” See
id. The significant weakness assigned was based on the agency’s conclusions that Tenica’s
proposal “discusses the views [it] will use but not why [it] would use those views” and that “[a]
quality architecture requires not only specific views but also a purpose for those views in order
for them to provide useful information for decision makers.” Id.
Tenica argues that these determinations were arbitrary and that it should have received an
“Outstanding” rather than merely an “Acceptable” rating for Problem Statement No. 4. Tenica
MJAR at 28–29. First, according to Tenica, the agency’s criticism of its failure to “adequately
address assumptions and techniques” was unjustified because its narrative identified at least three
of the assumptions underlying its response to the problem statement. AR Tab 65d at 5692; see
Tenica MJAR at 28. It also notes that “the Solicitation did not require offerors to separately identify or break
out any particular assumptions used in responding to Problem Statement No. 4, and merely
required offerors to address the assumptions – which TENICA did.” Tenica MJAR at 28.
Tenica’s observations do not establish that the agency’s assessment lacked a rational
basis. The agency found Tenica’s handling of Problem Statement No. 4 flawed because it found
that Tenica’s treatment of both its assumptions and techniques was inadequate. AR Tab 65d
at 5692. Even assuming Tenica identified three assumptions in its response, that does not speak
to the question of whether it “adequately address[ed]” its assumptions. Id. And in any event, this
Court is in no position to substitute its opinion for that of the agency’s experts in deciding the
sufficiency of Tenica’s response to a hypothetical problem in a highly technical area.
Tenica’s challenge to the agency’s assignment of a significant weakness with respect to
Problem Statement No. 4 is similarly unpersuasive. Tenica observes that it “provided a chart
within its proposal that fully demonstrated [***].” Tenica MJAR at 28. But directing the Court to
a chart that Tenica believes the agency experts should have found sufficient to address their
technical concerns does not supply the Court with a basis for finding the agency’s contrary
determination irrational. Instead, it reflects only Tenica’s disagreement with the agency’s
judgment call. Therefore, Tenica’s challenge to the assignment of weaknesses under Problem
Statement No. 4 is unavailing.
d. Price Evaluation
Tenica challenges the agency’s price reasonableness analysis on grounds similar to those
pressed by DVS. The Court finds those arguments unpersuasive for the reasons set forth above
with respect to DVS’s protest.
Moreover, and in any event, Tenica has failed to show that—but for the alleged
infirmities in the price reasonableness analysis—it would have had a substantial chance of
receiving one of the awards. In fact, Tenica’s price itself was in the high range among all
offerors. It was also higher than the prices of twenty-four of the thirty most highly rated
proposals (which did not include Tenica’s). See AR Tab 65 at 4659.86 (SSA memorandum
noting that Tenica’s “proposal was priced at the higher end of all of the Offerors”); AR Tab 54 at
3432–33 (source selection document listing the thirty highest rated offerors and their proposed
prices).
Tenica’s prejudice argument itself is not based on its claim that the agency’s price
reasonableness analysis was flawed. Instead, it contends that the agency did not determine
“whether the lower-priced offerors could achieve the promised technical solutions for the offered
prices,” and that it was prejudiced by that alleged error because the five awardees who received
only a “Good” rating for Factor 1 [***]. Tenica MJAR at 31. But the error with which Tenica
charges the agency does not concern price reasonableness; it concerns price realism. See Agile
Def., Inc. v. United States, No. 19-1954, 2020 WL 2844705, at *3 (Fed. Cir. June 2, 2020)
(quoting First Enter. v. United States, 61 Fed. Cl. 109, 123 (2004)) (“[P]rice reasonableness
generally addresses whether a price is too high, whereas cost realism generally addresses
whether a cost estimate is too low.”); DMS All-Star Joint Venture v. United States, 90 Fed. Cl.
653, 657 n.5 (2010) (observing that “a price reasonableness analysis has the goal of preventing
the government from paying too much for contract work” while a price realism analysis
“investigates whether the contractor is proposing a price so low that performance of the contract
will be threatened”). Because the Solicitation did not require the agency to conduct a price
realism analysis, its failure to do so cannot serve as a basis for establishing prejudicial error. For
this reason as well, Tenica’s challenge to the agency’s price evaluation lacks merit.
D. Tapestry Technologies, Inc. (Case No. 19-1189)
1. The Agency’s Evaluation
Tapestry Technologies, Inc. (“Tapestry”) submitted its proposal on April 4, 2017. AR
Tab 159 at 25349 (Tapestry proposal). The TEBs assigned the proposal an “Acceptable” rating
for Factor 1, a “Satisfactory Confidence” rating for Factor 2, “Outstanding” ratings for both
problem statements under Factor 3, and an “Acceptable” rating for Factor 4. AR Tab 65d
at 5665.
On review, the SSEB upgraded Tapestry’s rating for Factor 2 from “Satisfactory
Confidence” to “Substantial Confidence.” Id. at 5669 (explaining that “[o]verall, the quality
ratings for the three references ranged from very good to exceptional”). On the other hand, it
downgraded the rating for Problem Statement No. 4 from “Outstanding” to “Good.” Id. at 5672
(citing AR Tab 5 at 391) (explaining that the two strengths the PSEB awarded indicated a
“thorough” but not “exceptional” approach and understanding of the problem’s requirements).
As a result, the SSEB assigned Tapestry the following ratings:
Factor 1 –       Factor 2 – Past     Factor 3 –       Factor 3 –       Factor 4 –
Innovation       Performance         PS3 Rating       PS4 Rating       Small Business
Acceptable       Substantial         Outstanding      Good             Acceptable
AR Tab 65d at 5674. Tapestry proposed a price of [***]. Id.; AR Tab 63a at 4659.
The SSAC recommended against an award to Tapestry. It observed that while Tapestry’s
price was among the lowest, it “failed to achieve a rating of higher than ‘Acceptable’ in the most
important factor,” which was “the distinguishing difference between those recommended for
award and [Tapestry].” AR Tab 65e at 5868. The SSA agreed and, as a result, Tapestry was not
among the awardees. AR Tab 65 at 4659.94–.96.
2. Tapestry’s Protest
Tapestry poses multiple challenges to the agency’s evaluation of its proposal. These
include contentions that the agency arbitrarily assigned weaknesses to its proposal under
Factor 1, while also unreasonably failing to award strengths under both Factors 1 and 4. Tapestry
also alleges disparate treatment in the Agency’s assignment of strengths and weaknesses.
Tapestry challenges as arbitrary and capricious the SSEB’s decision to downgrade its rating for
Factor 3, Problem Statement No. 4 from “Outstanding” to “Good.” In addition, it contends that
the Agency’s best value tradeoff decision was flawed in several respects. Finally, Tapestry
challenges the agency’s price reasonableness analysis.
The majority of Tapestry’s arguments challenge the judgments of the agency’s experts
regarding the technical merits of its proposal or the proposals of its competitors. As the Court
emphasizes throughout its opinion, these are judgments to which it must defer so long as the
record reveals that they have a rational basis. The Court is similarly skeptical of Tapestry’s
disparate treatment arguments because Tapestry does not argue that its proposal is substantively
indistinguishable from the proposals of competitors that received more favorable ratings. Instead,
it argues that the proposals are “very similar.” For those reasons and the others set forth below,
the Court concludes that Tapestry’s protest lacks merit.
a. Factor 1 Evaluation
The IEB assigned Tapestry four strengths and four weaknesses under Factor 1. See AR
Tab 65d at 5666–68. As a result, Tapestry received an “Acceptable” rating. Id. at 5668. Its
failure to earn at least a “Good” rating under Factor 1 ultimately played a critical role in the
agency’s decision not to award it a contract. See AR Tab 65 at 4659.97.
Tapestry contends that the weaknesses the agency assigned “were inconsistent with the
terms of the Solicitation or otherwise unreasonable,” and that “[i]n assigning many of these
weaknesses, DISA ignore[d] relevant elements of Tapestry’s proposal.” Pl.’s Mot. for J. on the
Admin. R. (“Tapestry MJAR”) at 7, ECF No. 80. Tapestry also contends that the agency
unreasonably failed to assign certain strengths to the proposal under Factor 1. Id. at 11–14.
Finally, it contends that the agency engaged in disparate treatment with respect to its evaluation
of certain features of Tapestry’s proposal under Factor 1. Id. at 18–26. The Court addresses each
of these arguments below.
i. Assignment of weaknesses based on lack of detail
regarding tracking and measurement of ROI
Under the Corporate Philosophy/Culture on Innovation category of Factor 1, the
Solicitation instructed offerors to describe their approach to risk and explain “the company’s
process for selecting innovation projects.” AR Tab 5 at 377. The agency assigned a weakness to
Tapestry’s proposal related to these requirements. AR Tab 65d at 5667. Specifically, the agency
pointed out that in section 2.3 of Tapestry’s proposal (which is entitled “Approach to Innovation
Risk”) Tapestry had referenced “a project selection process outlined in Section 2.5 on page 6.”
Id.; see also AR Tab 159 at 25475. It noted, however, that “there is no such process detailed
on page 6, anywhere in section 2.5, or anywhere else in the proposal.” Id. In addition, the agency
observed, the proposal stated (again at section 2.3, AR Tab 159 at 25475) that Tapestry “[***]
described on page 8 in Section 2.6,” AR Tab 65d at 5667. Nonetheless, the agency concluded that
“the process outlined [in section 2.6] is extremely limited in detail and does not specify how
[Tapestry] [***].” Id. According to the agency, Tapestry’s “inability to showcase and/or detail
how [it] [***].” Id.
Tapestry argues that “[i]n assigning this weakness, DISA relies on an incorrect section of
Tapestry’s proposal and ignores information in another section [i.e., 2.6] that directly addresses
the concerns.” Tapestry MJAR at 7. While the Court agrees that the explanation the agency
provided for assigning this weakness is not a model of clarity, it rejects Tapestry’s argument
that the agency’s concern was that the proposal contained no selection process at all. See id. at 8.
Instead, the Court understands that the agency assigned the weakness because it concluded that
Tapestry’s proposal at section 2.6 (which is entitled “Process For Selecting Innovation Projects”)
did not contain sufficient detail about how Tapestry [***]. And the agency found this flaw
significant because section 2.3 (which deals with approach to innovation risk) cross references
section 2.6, stating that it “[***] using methods described in Section 2.6 . . . to ensure we are
working on the right things and producing optimal results.” AR Tab 159 at 25475.
Tapestry also contends that the agency ignored section 2.3 of its proposal, entitled
“Approach to Innovation Risk,” in assigning this weakness. Tapestry MJAR at 8. It observes that
“Section 2.3 specifically discusses [***].” Id. (quoting AR Tab 159 at 25475). But this passing
reference to [***] does not address the flaw the agency identified, which is that the proposal did
not provide details regarding how Tapestry tracks and measures ROI.
Tapestry further argues that the assignment of the weakness regarding [***] is
inconsistent with the agency’s decision to give Tapestry’s proposal “two separate strengths for
its [***].” Id. (citing AR Tab 65d at 5670). But the two strengths Tapestry references were
assigned by a different TEB, and concerned its response to Problem Statement No. 3, which is
evaluated under Factor 3. The assignment of strengths based on its responses under Factor 3 does
not call into question the rationality of the agency’s conclusion that Tapestry’s Factor 1 proposal
contained insufficient detail about how it [***].
Finally, in its reply, Tapestry alleges that the agency “held [it] to an unreasonably high
standard with [***]” as compared to proposals submitted by Synergy and Synaptek. Tapestry
Resp. in Opp’n to Def.’s Cross-Mot. for J. Upon the Admin. R. & Reply to Def.’s Resp. to Pl.’s
Mot. for J. Upon the Admin. R. (“Tapestry Reply”) at 3–4, ECF No. 131. The Court declines to
consider these disparate treatment arguments because they were made for the first time in
Tapestry’s reply. See SmithKline Beecham Corp. v. Apotex Corp., 439 F.3d 1312, 1319 (Fed.
Cir. 2006) (“Our law is well established that arguments not raised in the opening brief are
waived.”). And in any event, Tapestry’s disparate treatment claims lack merit because the
proposals to which it compares its own contained materially different features. 10
ii. Assignment of weakness regarding innovation
project tracking
Tapestry challenges the agency’s assignment of a weakness based on its response to the
Solicitation’s requirement under Factor 1 that it explain how it assesses, measures, and tracks
innovation, as well as its “methods for measuring the effectiveness of Innovation efforts.” See
AR Tab 5 at 377; AR Tab 65d at 5667. In assessing this weakness, the agency acknowledged
that Tapestry’s proposal included a so-called [***]. AR Tab 65d at 5667. But the agency found
Tapestry’s response inadequate because it concluded that the proposal consisted largely of “very
high-level statements” and insufficient detail about “how Tapestry tracks innovation.” Id. The
agency explained that this “flaw . . . increases the risk of failure to be innovative” because it was
“not clear whether [***].” Id. (discussing AR Tab 159 at 25478).
Tapestry contends that—contrary to the agency’s determination—its proposal “addresses
how it tracks innovation in the exact sections that DISA points out in the identified weakness”
(i.e., in sections 2.6 and 2.7). Tapestry MJAR at 9. Tapestry notes further that “section 2.7 refers
the reader to section 4, which lists real-world examples that support Tapestry’s statement[s].” Id.
But Tapestry’s expression of disagreement with the agency’s judgment regarding the sufficiency
of its narratives does not persuade the Court that the agency’s determination was irrational.
Indeed, the Court’s review of the narrative in Tapestry’s proposal concerning project selection
reveals ample basis for the agency’s view that it lacked detail. See, e.g., AR Tab 159 at 25478
(observing that [***]). And the Court finds unilluminating Tapestry’s citation of strengths
assigned to other parts of its proposal. See Tapestry MJAR at 10 (discussing strengths assigned
related to its success with [***]). Therefore, Tapestry’s challenge to the agency’s assignment of a
weakness based on the proposal’s lack of detail about how Tapestry tracks innovation cannot be
sustained.

10 Thus, Synaptek’s proposal also does not appear to explain how it [***] in connection with
innovation projects. See AR Tab 157 at 24936 (explaining that [***]). But Synaptek’s proposal
for tracking and prioritizing innovation projects contains a number of features not contained in
Tapestry’s proposal. See id. at 24935–36 (Synaptek) (explaining its [***]). Synergy’s proposal is
similarly distinguishable from Tapestry’s. See, e.g., AR Tab 158 at 25190 (explaining that it
[***]) (emphasis removed); id. (explaining exactly how [***]).
iii. Assignment of weakness regarding methodology
for making spending decisions
Tapestry also challenges the agency’s assignment of a weakness concerning the
Investment in Innovation aspect of Factor 1. The Solicitation directed offerors to “[d]escribe how
the company makes decisions on how, when and how much to spend on Innovation . . . [and] if
your company has supported Internal Research & Development (IR&D).” AR Tab 5 at 377. The
agency concluded that although Tapestry’s proposal revealed “[***].” AR Tab 65d at 5667.
Further, the agency determined, although Tapestry “provide[d] some detail on how they make
decisions as it relates to sponsoring innovative initiatives . . . they do not detail in their proposal
how they decide how, when, and how much to spend on innovation.” Id. These omissions raised
the risk of “unsuccessful contract performance,” the agency found, “because an Offeror without
a clear vision for deciding how, when, and how much to spend on new innovation initiatives is
less likely to succeed in performing SETI tasks that require recommendations as to how, when,
and how much to invest for different innovative projects.” Id. at 5667–68.
Tapestry argues that—contrary to the agency’s determination—it did supply the agency
with information about its process of deciding how, when, and how much to spend on its
innovation projects. In fact, according to Tapestry, it provided more detail than other offerors
who did not receive a weakness under this aspect of Factor 1. See Tapestry MJAR at 10–11
(referring to section 3.1 of its proposal); Tapestry Reply at 7 (referring to section 2.6 of its
proposal and discussing Synergy’s proposal).
The Court disagrees. The portions of its proposal that Tapestry cites do not explain the
criteria it uses when deciding how much or when to spend on innovation projects. See, e.g.,
Tapestry MJAR at 10–11 (quoting AR Tab 159 at 25480) (stating that its [***]). It was therefore
not unreasonable for the agency to assign a weakness to the proposal based on its lack of detail
and specificity.
Further, the Court rejects Tapestry’s claim that its proposal “provided more detail
concerning its innovation investment decision process than Synergy, which did not receive a
weakness on this basis.” Tapestry Reply at 7. Tapestry cannot successfully pursue a disparate
treatment claim because Synergy’s proposal and Tapestry’s proposal are materially different. For
instance, Synergy [***]. AR Tab 158 at 25191. Because the proposals are not substantively
indistinguishable, Tapestry’s disparate treatment claim is unavailing.
iv. Failure to assign strength for exceeding
requirements
Tapestry alleges that its proposal addressed two thirds of the agency’s “primary
innovation interests,” see AR Tab 5 at 376 (capitalization altered), as well as all PWS subtasks,
see AR Tab 1 at 11–33, even though it had no obligation to address either under the Solicitation.
Tapestry MJAR at 12. It contends that its “broad coverage of DISA functions” supplied
“‘evidence of sustained, year-after-year investment in technologies and innovative ways to
develop new capacity, improve service, reduce costs, and create efficiencies,’ which the
Solicitation cites as warranting a higher rating under Factor 1.” Id. (citing AR Tab 5 at 388).
Accordingly, Tapestry contends, it should have been assigned a strength under Factor 1 for
“exceeding DISA’s requirements for current primary innovation interests and the SETI PWS.”
Tapestry MJAR at 12 (capitalization altered).
This argument is a non-starter. A “strength” is defined in DoD’s Source Selection
Procedures as “an aspect of an offeror’s proposal that has merit or exceeds specified performance
or capability requirements in a way that will be advantageous to the Government during contract
performance.” U.S. Dep’t of Def., Source Selection Procedures 40 (2016),
https://www.acq.osd.mil/dpap/policy/policyvault/USA004370-14-DPAP.pdf. The Court has no
basis to make judgments regarding the value to the agency of the additional information Tapestry
supplied in its proposal, especially where the Solicitation explicitly stated that offerors were not
required to address all PWS subtasks or the agency’s “primary innovation interests.” AR Tab 5
at 376, 386. Accordingly, the Court rejects Tapestry’s contention that it should have been
awarded a strength under Factor 1 on these bases.
v. Failure to assign strength based on human
capital investment in innovation
Section M of the Solicitation states that offerors “may be evaluated more favorably and
achieve higher ratings” for, among other things, “[d]emonstrated continuous investment in
Innovation through evidence of sustained, year-after-year investment in technologies and
innovative ways to develop new capability, improve service, reduce costs and create
efficiencies” or “[d]emonstrated evidence of ongoing corporate investment in tools, training,
facilities, personnel and equipment.” AR Tab 5 at 388. Tapestry contends that, given these
criteria, the agency should have assigned it an additional strength for human capital investments
under the category of investment in innovation. Tapestry MJAR at 13. According to Tapestry, its
proposal earned that recognition because it “went above and beyond the Solicitation’s
requirements when it comes to ensuring that its people are positioned to excel.” Id. It explained
that it has [***]. Id. (quoting AR Tab 159 at 25480).
Tapestry again asks the Court to second guess the agency’s decision regarding the value
of particular features of its proposal. But it has not persuaded the Court that the agency could not
have rationally decided that Tapestry’s provision of these training and educational benefits to its
employees did not merit the assignment of a strength. For one thing, the proposal’s description of
these tools does not provide evidence of the kind of “ongoing” or “sustained, year-after-year
investment[s]” described in section M. AR Tab 5 at 388. For another, determining whether a
proposal merits a strength because it goes above and beyond the agency’s requirements involves
a judgment that is quintessentially the agency’s to make. Therefore, Tapestry’s challenge lacks
merit. 11
vi. Disparate treatment in the assignment of
strengths
Finally, Tapestry alleges unequal treatment in the agency’s assignment of strengths under
Factor 1. First, it contends that the agency awarded strengths to A Square Group (“ASG”),
Synaptek, and Versa Integrated Solutions, Inc. (“Versa”) but not to Tapestry, “for language that
was in Tapestry’s proposal, but not in other proposals.” Tapestry MJAR at 18–19 (citing AR Tab
159 at 25485). Specifically, under the category of History of Engineering and Deploying
Innovative Solutions, the offerors were directed to “describe how your company builds
acceptance of Innovation and its necessary disruption to your business culture.” AR Tab 5
at 377. Tapestry responded that it [***]. AR Tab 159 at 25485. It observes that the agency
employed virtually the same language in describing its reasons for assigning strengths to the
proposals of the other three offerors and yet did not assign such a strength to Tapestry’s
proposal. See Tapestry MJAR at 18; AR Tab 65d at 4773, 5625, 5740 (observing that offerors
“prototype[d] the new processes in limited technology groups[] rather than rolling them out to
the entire group first[] to ensure feedback was captured and the process was fine-tuned prior to
full implementation”).
Tapestry’s unequal treatment argument lacks merit. To be sure, the other offerors, like
Tapestry, described the use of [***]. See, e.g., AR Tab 157 at 24944 (Synaptek proposal
§ C.3.4); AR Tab 131 at 12771–72 (ASG proposal § 1.3.5). But the agency awarded strengths to
the other offerors based on multiple aspects of their proposals, finding that they “describe[d] a
thorough approach for building acceptance of Innovation and its necessary disruption into their
business culture.” See AR Tab 65d at 4773 (ASG evaluation) (assigning a strength because ASG
“describes a thorough approach for building acceptance of Innovation” and supplied evidence in
its [***]); id. at 5625 (Synaptek evaluation) (describing among other things the [***]); id. at
5740 (Versa evaluation) (observing that Versa described [***]). The Court therefore rejects
Tapestry’s argument that it was subjected to disparate treatment when it was not similarly
awarded a strength for its prototyping process.
Tapestry’s second disparate treatment claim is based on the contention that the agency
awarded strengths to Synaptek’s proposal “for Factor 1 elements [that were] also present in
Tapestry’s proposal.” Tapestry MJAR at 19. For example, Tapestry observes, Synaptek
“receive[d] a strength for its response to ‘the methods for measuring the effectiveness of
Innovation efforts.’” Id. (citing AR Tab 61 at 4407; AR Tab 157 at 24936 (section of Synaptek’s
proposal meeting these requirements)). Tapestry argues that its proposal—which it contends
includes “very similar features to those proposed by Synaptek,” id. at 20—should also have been
assigned a strength. It takes a similar tack with respect to its other contentions of disparate
treatment between its proposal and Synaptek’s. See id. at 21 (citing AR Tab 159 at 25473,
25478, 25485) (reciting features of its proposal and contending that it should have also received
a strength under the investment in innovation category because its proposal “contains very
similar features to those for which Synaptek receives a strength”); id. at 22 (arguing that
“Tapestry deserves a strength” under the topic of “history of engineering and deploying
innovative solutions,” because “it proposes very similar approaches” to those which formed the
basis for the agency’s decision to assign Synaptek a strength).

11 In its reply brief, Tapestry contends that other offerors (Innovations NexGen JV, Mission
Support, and BCMC) received strengths for human capital investment features included in
Tapestry’s proposal. Tapestry Reply at 9–10. Tapestry failed to raise this disparate treatment
claim in its opening brief; accordingly, it is waived. Moreover, and in any event, the claim lacks
merit because Tapestry’s proposal is substantively different from the proposals of these other
comparators. See Def.’s Consolidated Reply in Support of Cross-Mot. for J. Upon the Admin. R.
at 76–78, ECF No. 140 (discussing differences).
The Court does not possess the expertise to determine how the features of Synaptek’s
proposals compare to those of Tapestry in terms of their value to the agency; nor is it equipped to
second guess agency determinations regarding whether particular proposals contain sufficient
detail to address the Solicitation’s requirements. The same is true as to most of the other
comparisons that Tapestry asks the Court to make throughout much of its MJAR. The bottom
line is this: even assuming that the features of other offerors’ proposals were “similar” or even
“very similar” to its own (as Tapestry contends), differing ratings assigned to “similar” proposals
do not establish disparate treatment under Office Design Group. As the Court explained above,
to protect the agency’s authority to make distinctions among proposals based on its own
expertise, the proposals must be substantively indistinguishable to justify a finding of disparate
treatment. Tapestry does not even allege that this threshold is met as to any of its disparate
treatment arguments. Tapestry’s disparate treatment arguments as to the assignments of strengths
to Synaptek and other offerors such as ASG which are also discussed in its MJAR therefore fail.
b. Factor 3 Evaluation
As set forth above, the PSEB assigned Tapestry two strengths and no weaknesses for
Problem Statement No. 4. AR Tab 65d at 5671. The SSEB reviewed and discussed the strengths
assigned and left them in place. Id. at 5672. Nonetheless, the SSEB downgraded its Factor 3,
Problem Statement No. 4 rating from “Outstanding” to “Good.” Id. Tapestry argues that the
downgrade was not adequately explained and was arbitrary. Tapestry MJAR at 14–15. It
observes that “[t]he RFP contains no provision that a rating of ‘Outstanding’ requires that an
offeror receive more than two strengths,” nor does the agency give an additional explanation for
“why the particular strengths assigned did not indicate ‘an exceptional approach and
understanding.’” Id. at 15 (quoting AR Tab 5 at 391).
But the SSEB did not state that it downgraded the Problem Statement No. 4 rating
because Tapestry’s proposal did not earn more than two strengths, as Tapestry contends. See AR
Tab 65d at 5672. To the contrary, upon review and discussion, the SSEB made a qualitative
determination that “the specific merit of the two (2) strengths identified [] did not indicate an
exceptional approach and understanding of the requirements; rather, it indicated a thorough
approach and understanding of the requirements.” Id. The SSEB’s explanation is consistent with
the terms of the Solicitation, under which a “Good” proposal “indicates a thorough approach and
understanding of the requirements” and “contains strengths which outweigh any weaknesses,”
and an “Outstanding” proposal “indicates an exceptional approach and understanding of the
requirements” and contains strengths that “far outweigh any weaknesses.” AR Tab 5 at 391.
Of course, the determination of what characteristics make an offeror’s approach
“exceptional” as opposed to merely “thorough” is a largely subjective one. It was the SSEB’s
considered judgment—in disagreement with the PSEB—that the two strengths assigned to
Problem Statement No. 4 were insufficient to render Tapestry’s response to Problem Statement
No. 4 an exceptional one. That is not a judgment amenable to second guessing by the Court; in
fact, Tapestry itself does not argue that the agency abused its discretion in downgrading the
rating—its quarrel is with what it claims was the agency’s failure to adequately explain its
decision.
In any event, Tapestry has failed to demonstrate that it was prejudiced by receiving a
“Good” rather than an “Outstanding” rating under Problem Statement No. 4—i.e., that were it not for
that rating, it would have had a substantial chance of receiving an award. See Weeks Marine, 575
F.3d at 1359 (citing Info. Tech. & Applications Corp. v. United States, 316 F.3d 1312, 1319 (Fed. Cir.
2003) (“To establish prejudice, [the protestor] must show that there was a ‘substantial chance’ it
would have received the contract award but for the alleged error in the procurement process.”)).
The best value determination was based on the higher degree of risk associated with Tapestry’s
proposal under Factor 1. AR Tab 65 at 4659.97. A slight upgrade in its rating for Problem
Statement No. 4, from “Good” to “Outstanding,” would not have moved Tapestry’s proposal into
contention. For that reason as well, this protest ground is rejected.
c. Factor 4 Evaluation
The Solicitation states that proposals would be evaluated under Factor 4 (Utilization of
Small Business) to determine “the extent to which Offerors have in place effective procedures to
ensure proper flow-down of requirements, process management, and performance assessments of
small business utilization at lower tiers.” AR Tab 5 at 392. In its evaluation, the agency
determined that Tapestry did not have such procedures in place. AR Tab 50 at 3274 (Factor 4
consensus evaluation). Tapestry contends that the agency’s decision lacked a rational basis
because the teaming agreements it supplied with its proposal reflected such procedures. Tapestry
MJAR at 16. In fact, Tapestry argues, it should actually have been assigned a strength for
meeting this requirement, which would have resulted in a “Good” rather than merely
“Acceptable” rating on Factor 4. Id. at 18.
This contention fails because Tapestry has not established that, had it received a “Good”
rating for Factor 4, it would have had a substantial chance of receiving an award. It also fails on
the merits because the narrative of the small business volume of Tapestry’s proposal does not
include any discussion of this requirement; nor did it reference the teaming agreements upon
which Tapestry now relies. See AR Tab 159 at 25643–44. In fact, notwithstanding that Tapestry
provided the teaming agreements along with Volume III (the small business volume), the
Solicitation provided that they were to be submitted at Tab B of Volume II (the technical
proposal). AR Tab 5 at 375. Because Tapestry did not address these requirements in its proposal
narrative, it was not irrational for the agency to decide that it merited only an “Acceptable,”
rather than a “Good” rating.
d. The SSA’s Best Value Tradeoff Decision
In addition to challenging the agency’s decisions regarding the assignment of strengths
and weaknesses, Tapestry also challenges the SSA’s best value tradeoff analysis on the grounds
that 1) the SSA improperly relied on a mechanical application of adjectival ratings; 2) the SSA
failed to provide any rational explanation for its decision to make contract awards to three
offerors that received “Marginal” ratings on their problem statements; and 3) the SSA failed to
meaningfully consider the offerors’ prices. Id. The Court finds that these claims, like Tapestry’s
previous ones, lack merit.
i. Reliance on adjectival ratings
Tapestry argues that “[t]he SSA’s cost-technical tradeoff decision relies exclusively on
references to adjectival ratings as the basis for DISA’s decision not to make an award to
Tapestry.” Tapestry MJAR at 27. In addition, Tapestry notes, “[t]he SSA’s source selection
decision must represent his or her own independent judgment, and must be ‘documented, and the
documentation shall include the rationale for any business judgments and tradeoffs made or
relied on by the SSA, including benefits associated with additional costs.’” Id. (quoting FAR
15.308). Tapestry contends that the SSA’s decision runs afoul of the FAR because it consists of
“‘[c]onclusory statements, devoid of any substantive content,’” thereby “‘threatening to turn the
tradeoff process into an empty exercise.’” Id. (quoting One Largo Metro, LLC v. United States,
109 Fed. Cl. 39, 77 (2013) (quoting Serco Inc. v. United States, 81 Fed. Cl. 463, 497 (2008))).
This argument is unavailing. The FAR requires that the SSA base his decision “on a
comparative assessment of proposals against all source selection criteria in the solicitation.”
FAR 15.308 (emphasis supplied). But FAR 15.308 permits the SSA to rely upon the evaluations
conducted by the TEBs and the recommendations made by the SSAC in performing that
comparative assessment. See Comput. Sci. Corp. v. United States, 51 Fed. Cl. 297, 320 (2002)
(“[T]he court does not interpret § 15.308 as requiring the SSA to conduct his own contract-by-
contract comparative assessment [when evaluating offerors’ past performance].”). Indeed, the
FAR “mandates that the SSA ‘shall . . . [c]onsider the recommendations of advisory boards or
panels.’” Id. (quoting FAR 15.303(b)(5)). “[A]s long as the agency evaluators conducted a
[comparative] assessment, the SSA may rely on th[eir] evaluations without performing the same
detailed analysis.” Id.
Furthermore, a review of the record here shows that the SSA did not base his best value
tradeoff analysis on a mechanical application of adjectival ratings but on a balance of the
proposal’s strengths and weaknesses against its relatively low price tag. He explained that
Tapestry was one of the lowest priced offerors but that it received only an “Acceptable” rating
for the Innovation factor, which was the most important of all of the technical factors. AR
Tab 65 at 4659.97. Indeed, the SSA noted that the Solicitation “was about Innovation and was
looking to attract the most innovative offerors.” Id. The SSA therefore reasonably found it
significant that Tapestry’s rating for Innovation was lower than the ratings of all of the
successful offerors.
But the SSA did not decide not to award Tapestry a contract based solely on its relatively
mediocre rating for Factor 1. He acknowledged both the low price of the proposal and Tapestry’s
“successful track record,” as well as the fact that it had “demonstrated the ability to solve” the
problems the agency presented, as reflected in its ratings for the problem statements.
Nonetheless, he explained, “[w]hen I consider the Offeror achieved the highest rating in the
second most important factor and in one of the Problem Statements, and proposed a low total []
price[,] I need to contemplate the risk of unsuccessful performance and consider whether this
Offeror presents the best value to the Government.” Id. Focusing on the risk of unsuccessful
performance, he reasoned that “[t]he solicitation was looking for Innovative Offerors . . . as
reflected in the most important standalone factor, Innovation.” Id. He explained that he was “not
willing to tradeoff a lower price for an Offeror who has demonstrated risk of unsuccessful
performance to be ‘no worse than moderate’ when [he could] select offerors who have achieved
ratings where the risk of unsuccessful performance is ‘low’ or ‘very low’ for the most important
factor.” Id.
The Court concludes that the SSA’s analysis comported with the requirements of the
FAR and reflected rational decision making. To be sure, the SSA used the adjectival ratings as
guides for conducting the tradeoff analysis. But that is the purpose of adjectival ratings generally
and particularly in a procurement like the present one, which required the analysis of highly
technical proposals submitted by almost 100 offerors. Further, it was appropriate for the SSA to
use the adjectival ratings as shorthand to explain the procurement decision because the bases for
the ratings are well documented and were themselves considered as part of the tradeoff process.
See Wackenhut, 85 Fed. Cl. at 297 (citing Opti-Lite Optical, 1999 WL 152145, at *3 (Comp.
Gen. 1999)) (“[A]djectival ratings and point scores are useful as guides to decision-making . . .
but [] must be supported by documentation of the relative differences between the proposals,
their strengths, weaknesses and risks, and the basis and reasons for the . . . decision.”).
Tapestry’s challenge based on the agency’s reliance upon adjectival ratings in making its
tradeoff decision therefore lacks merit.
ii. Ignoring “Marginal” ratings assigned to other
offerors
Tapestry also contends that the Agency’s tradeoff analysis was flawed because the SSA
allegedly ignored the “Marginal” ratings assigned to ASG, Innoplex, and Synergy for one of the
problem statements. Tapestry MJAR at 30–31. This argument is contradicted by the record,
which reveals that the agency evaluated, considered, and weighed the three “Marginal” ratings in
accordance with the Solicitation. AR Tab 65 at 4659.13–.16 (agency analysis of Innoplex); id. at
4659.37–.38 (agency analysis of Synergy); id. at 4659.41–.44 (agency analysis of ASG). For
each evaluation, the SSA acknowledged the “Marginal” ratings but found nonetheless that any
risk would be mitigated by competition at the task order level. See AR Tab 65 at 4659.16 (“I
recognize risk in the ‘Marginal’ rating for [Innoplex], but it will be mitigated to an acceptable
level through the task order competition.”); id. at 4659.38 (“The risk of unsuccessful
performance [by Synergy reflected by] the ‘Marginal’[] rating [is] mitigated by the number of
awardees and the competition at the Task Order level.”); id. at 4659.44 (“The risk of
unsuccessful performance [by ASG] associated with the ‘Marginal’ rating is mitigated by the
number of awardees and the competition at the Task Order level.”). Tapestry’s mere
disagreement with how the agency weighed the risk associated with these “Marginal” ratings
against the lower risk associated with their higher ratings on more important evaluation factors
does not provide a basis for sustaining this protest ground.
iii. Inadequate consideration of price
Finally, Tapestry argues that the SSA’s best value tradeoff analysis was flawed because
he only considered price as a nominal factor and was willing to pay an “enormous price
premium[] despite a lack of commensurate technical benefit.” Tapestry MJAR at 32. Tapestry
also argues that the Agency’s entire price reasonableness methodology was flawed because it
“ensured that almost no price could be too high to receive an award.” Id. at 35.
The Court is unconvinced by these arguments for the reasons it set forth above in dealing
with the similar arguments pressed by DVS. It was within the agency’s discretion to determine
how much of a price premium it was willing to pay in light of the perceived benefits of each
proposal. The SSA acknowledged Tapestry’s low price and its attractive past performance
record, but concluded that the price savings did not justify making an award to a proposal that
was relatively unimpressive with respect to Factor 1. AR Tab 65 at 4659.97. It was not
unreasonable for the agency to find that Tapestry’s low price was outweighed by the
shortcomings in its technical evaluation or that the higher prices of the successful offerors were
offset by the technical benefits they could provide. See Sys. Studies & Simulation, Inc. v. United
States, 146 Fed. Cl. 186, 201–02 (2019) (quoting Serco, 81 Fed. Cl. at 497) (observing that
“logic suggests that as [the magnitude of a price difference between two proposals] increases, the
relative benefits yielded by the higher-priced offer must also increase”); Technatomy Corp.,
B-414672.5, 2018 WL 5292575, at *15 (Comp. Gen. Oct. 10, 2018) (“Where [] a solicitation
provides that technical factors are more important than price in source selection, selecting a
technically superior, higher-priced proposal is proper where the agency reasonably concludes
that the price premium is justified in light of the proposal’s technical superiority . . . [as]
supported by a rational explanation.”). Tapestry’s objections to the way that the agency took
price into consideration are therefore rejected.
e. Innoplex’s proposal
Finally, Tapestry contends that Innoplex’s proposal received an unfair competitive
advantage because, according to Tapestry, it violated the provision in the Solicitation which
provided that “page limitations may not be circumvented by including inserted text boxes/pop-
ups or internet links to additional information.” Tapestry MJAR at 38–39 (citing AR Tab 5
at 370). Even if such a violation occurred, Tapestry has not established that it was prejudiced by
it. Further, offerors were permitted to use font sizes as small as six-point type for “tables, charts,
graphs and figures.” AR Tab 5 at 371. The Court has reviewed the pages in Innoplex’s proposal
that Tapestry cites and concluded that what Tapestry characterizes as text boxes could also
reasonably have been deemed by the agency to be tables, charts, graphs, or figures. See Tapestry
MJAR at 39 (citing AR Tab 142 at 17439, 17440, 17442, 17443, 17445, 17447, 17448, 17449,
17450, 17451, 17452, 17453, 17455, 17456, 17457).
E. CollabraLink Technologies, Inc. (Case No. 19-1178C)
The Agency’s Evaluation
CollabraLink Technologies, Inc. (“CollabraLink”) submitted its proposal on April 4,
2017. AR Tab 135 (CollabraLink proposal). The proposal was assigned six strengths and no
weaknesses under Factor 1. AR Tab 65d at 4910–11. For Factor 3, Problem Statement No. 3,
CollabraLink was assigned two strengths and no weaknesses. Id. at 4914. For Factor 3, Problem
Statement No. 4, CollabraLink earned two strengths, which were offset by one weakness and one
significant weakness. Id. at 4915–16. For Factor 4, CollabraLink received six strengths and no
weaknesses. Id. at 4917–18.
The agency assigned CollabraLink’s proposal the following adjectival ratings:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Satisfactory
Factor 3 – PS3 Rating: Good
Factor 3 – PS4 Rating: Marginal
Factor 4 – Small Business: Outstanding
Id. at 4919; AR Tab 65e at 5843–45 (SSAC memorandum). CollabraLink’s evaluated price was
ranked [***]. AR Tab 65e at 5843.
The SSAC observed that CollabraLink’s proposal “was priced in the upper half of all of
the Offerors” and “was not one of the highest technically rated.” AR Tab 65e at 5845. It
therefore recommended against awarding a contract to CollabraLink. Id. The SSA agreed and
CollabraLink was not selected as an awardee. AR Tab 65 at 4659.70–.72.
CollabraLink’s Protest
In its amended MJAR, CollabraLink challenges several aspects of the agency’s
evaluation. It contends that the agency failed to follow the Solicitation’s ratings criteria when it
assigned a “Good” rather than “Outstanding” rating under Factor 1. Pl.’s Mem. in Support of its
Am. Mot. for J. on the Admin. R. (“CollabraLink MJAR”) at 12, ECF No. 104. It further argues
that there is insufficient documentation in the record to explain why the agency downgraded to
“Marginal” its rating for Factor 3, Problem Statement No. 4. Id. at 24. Finally, in arguments
reminiscent of those pressed by DVS, CollabraLink contends that the agency’s best value
tradeoff analysis was “based exclusively on adjectival ratings and did not account for the
underlying relative technical merit (e.g. Strengths and Weaknesses) of the proposals,” and that
the agency failed to meaningfully consider price. Id. at 29–30, 35. These contentions lack merit.
a. Rating for Factor 1
As discussed above, under the Solicitation, a proposal may be assigned a “Good” rating
under Factor 1 where: 1) it “addresses all Innovation elements and indicates a thorough approach
and understanding of Innovation”; 2) its strengths “outweigh any weaknesses”; and 3) the “[r]isk
of unsuccessful performance is low.” AR Tab 5 at 387–88. The criteria for assigning an
“Outstanding” rating for Factor 1, on the other hand, require that: 1) the proposal must “address[]
all Innovation elements and indicate[] an exceptional [as opposed to merely “thorough”]
approach and understanding of Innovation”; 2) the proposal’s strengths must “far outweigh”
weaknesses; and 3) the “[r]isk of unsuccessful performance” by the offeror must not only be
“low”—it must be “very low.” Id. at 387.
CollabraLink argues that it was arbitrary, irrational, and contrary to the Solicitation for
the agency to assign it a “Good” rather than “Outstanding” rating for Factor 1. CollabraLink
MJAR at 13. It contends that an “Outstanding” rating was required because its six strengths “far
outweigh[ed]” its weaknesses (of which there were none). Id. (citing AR Tab 5 at 387).
These contentions lack merit. Even assuming that CollabraLink’s strengths can be
characterized as “far outweigh[ing]” its weaknesses, an “Outstanding” rating under Factor 1 also
requires that a proposal have an “exceptional approach and understanding of Innovation” and
that the offeror’s risk of failure be “very low.” AR Tab 5 at 387. The agency determined that
CollabraLink’s proposal did not surpass these thresholds. AR Tab 65d at 4912. As the agency
explained to CollabraLink during its debriefing, “[t]he evaluation board for Factor 1 determined
that [CollabraLink’s] proposal addressed all innovation elements and indicated a thorough
approach and understanding of innovation” but, after “t[aking] all strengths into consideration
. . . did not conclude that the Factor 1 proposal indicated an exceptional approach and
understanding of innovation.” AR Tab 101 at 8995 (CollabraLink debriefing Q&A response).
CollabraLink contends that it was arbitrary for the agency to find that its approach and
understanding were merely “thorough” and not “exceptional,” AR Tab 5 at 387, because the
agency characterized five of the six strengths it earned as having “the potential to yield the most
valuable and beneficial results” to the agency, AR Tab 65 at 4659.68. CollabraLink notes that it
was assigned more strengths that had this potential than seven of the awardees who were rated
“Outstanding.” CollabraLink MJAR at 15. CollabraLink further notes “eight other awardees who
received ‘Outstanding’ ratings in Factor 1 were found to have the same number of Most
Valuable and Beneficial Strengths as CollabraLink—five.” Id. at 16.
CollabraLink’s reliance on the number of “Most Valuable and Beneficial Strengths” it
was assigned is misplaced for a number of reasons. Id. For one thing, as the table in
CollabraLink’s MJAR shows, all of the awardees who received “Outstanding” ratings had more
than the six strengths CollabraLink’s proposal earned—indeed, their average number of strengths
(approximately twelve) was twice the number assigned to CollabraLink. Id. at 17. Further, while
the Solicitation distinguished between weaknesses that were “significant” and those that were
not, it made no distinction among strengths based on which had the “potential to yield the most
valuable and beneficial results.” AR Tab 65d at 4911; AR Tab 65 at 4659.68. 12
12 The five strengths in CollabraLink’s proposal which the agency characterized as having the potential to yield the most valuable and beneficial results were: 1) the proposal’s “clear, concise, and well detailed” showing concerning “how [its] core competencies of Innovation align with DISA’s mission needs and operating principles”; 2) its description of [***]; 3) its “detailed [knowledge management] methodology”; 4) its description of “a clear track record of developing solutions and successfully sustaining the solutions from infancy to full maturity”; and 5) its [***]. AR Tab 65d at 4910–11. The sixth strength CollabraLink earned, which was not characterized as having the potential to yield the most valuable and beneficial results, was its [***]. Id. at 4911.
CollabraLink theorizes that the basis for identifying which strengths have the “potential to yield the most valuable and beneficial results” may be divined from an evaluation worksheet prepared by the SSEB Chair. See CollabraLink MJAR at 15. According to CollabraLink, the
worksheet reveals that “the SSEB Chair thought that some Strengths assigned to offerors under
Factor 1 were ‘At Risk Strengths’ because they aligned with ‘Core Competency’ requirements of
the Solicitation.” Id. Therefore, CollabraLink states, the SSEB Chair created a chart which took
the total number of strengths assigned (six in CollabraLink’s case) and subtracted from it the
number of so-called “Competency Strengths” (one) with the difference representing the “most
valuable and beneficial” strengths. Id. (citing AR Tab 173).
The Court does not find CollabraLink’s explanation of the worksheet helpful. The
worksheet is entitled “Factor 1: Innovation At Risk Strengths.” AR Tab 173 at 29853. It is not
comprehensive; in fact, it covers only twenty-nine of the ninety-nine proposals the agency
evaluated. Id. For each proposal the spreadsheet includes a column labelled “Current Counts,”
which appears to reflect the number of strengths, weaknesses, significant weaknesses, and
deficiencies assigned by the IEB. Id. Another column is entitled “Counts w/ Competency
Strength Removed.” Id. CollabraLink’s view is that the table distinguishes between so-called
“competency” strengths and strengths that “have the potential to yield the most valuable and
beneficial results.” CollabraLink MJAR at 15.
But CollabraLink’s theory raises more questions than it answers, including what it means
for a strength to be “At Risk” or to be a “core competency” strength and why the latter strengths
would categorically not have the potential to yield the most valuable and beneficial results. Id.
Moreover, CollabraLink’s entire theory is undermined by its acknowledgements that the
worksheet “appears to be only a preliminary analysis because it does not analyze all Factor 1
proposals,” and that “many of the ‘most valuable and beneficial’ Strengths counts in the SSAC
and SSDD Reports are different than the Strength ‘Counts w/ Competency Strengths Removed’
in the SSEB Chair’s worksheet.” Id. at 15 n.3 (citing AR Tab 173 at 29853; AR Tab 63 at 4570–
651; AR Tab 65 at 4659.6–.96).
In any event, “a qualitative evaluation of proposals is not governed by a simple count of
strengths and weaknesses.” N. S. Consulting Grp., LLC v. United States, 141 Fed. Cl. 549, 557
(2019); see also LOUI Consulting Grp., Inc., B-413703.9, 2017 CPD ¶ 277 (Comp. Gen. Aug.
28, 2017) (“[T]he evaluation of quotations and assignment of adjectival ratings should generally
not be based upon a simple count of strengths and weaknesses, but on a qualitative assessment of
the quotations consistent with the evaluation scheme.”). And an agency has “broad discretion to
weigh an offeror’s strengths and weaknesses as it sees fit.” Tetra Tech, Inc. v. United States, 137
Fed. Cl. 367, 386 (2017). Therefore, CollabraLink cannot establish that the agency acted
irrationally when it did not give dispositive weight to the number of strengths each offeror
earned that were deemed to have “the potential to yield the most valuable and beneficial results.”
There is similarly no merit to CollabraLink’s argument that the agency’s Factor 1
evaluation was “manifestly unreasonable” because CollabraLink earned an “Outstanding” rating
under Factor 4, where it earned the same number of strengths (six) and weaknesses (zero).
CollabraLink MJAR at 13–14. See Wellpoint Military Care Corp., B-415222.5, 2019 CPD ¶ 168
(Comp. Gen. May 2, 2019).
Moreover, there are material differences between the criteria for assigning adjectival
ratings under Factors 1 and 4. The adjectival rating assigned under Factor 4 does not depend
upon the number of strengths and weaknesses assigned or the balance between the two. See AR
Tab 5 at 392. Instead, under Factor 4 the ratings are based on the quality of the proposal’s
“approach and understanding of the small business objectives,” i.e., whether that approach and
understanding is “thorough” or “exceptional.” Id. There is therefore no inconsistency between
the agency’s assignment of ratings to CollabraLink’s proposal under Factors 1 and 4.
Finally, CollabraLink contends that the SSEB Chair’s worksheet shows that he
recommended a rating change under Factor 1 to “Purple/Blue,” i.e., “Good/Outstanding,” but that
the final SSEB report did not reflect this recommendation. CollabraLink MJAR at 18 (citing AR
Tab 173 at 29853). But it is unclear to the Court what a “Purple/Blue” designation signifies. AR
Tab 173 at 29853. Presumably it means that the SSEB Chair concluded that it was a close call
whether to rate the proposal “Good” or “Outstanding.” Further, the worksheet does not reflect a
recommendation that CollabraLink’s rating be changed; the table lists a “Purple/Blue” rating in
both the “Current Counts” column and the “Rating Change” column. Id.
In short, the agency’s decision to assign CollabraLink’s Factor 1 proposal a “Good”
rating was reasonable and adequately documented. CollabraLink’s protest based on its Factor 1
rating therefore lacks merit. 13
b. Factor 3, Problem Statement No. 4 Rating
CollabraLink’s next contention is that the Agency made a “clear error” in assigning it a
“Marginal” rating under Factor 3 for Problem Statement No. 4. Specifically, it contends that “the
record unambiguously shows that the PSEB rated CollabraLink’s [Problem Statement No. 4]
proposal as ‘Acceptable’ and that the ‘SSEB concurred’ with this finding.” CollabraLink MJAR
at 24. The Court disagrees with CollabraLink’s interpretation of the record.
The Solicitation states that a “Marginal” rating may be assigned to a problem statement
where a proposal: 1) “does not clearly meet [the] requirements and has not demonstrated an
adequate approach and understanding of the requirements”; 2) “has one or more weaknesses
which are not offset by strengths”; and 3) reflects a high risk of “unsuccessful performance.” AR
Tab 5 at 391. An “Acceptable” rating indicates that a proposal: 1) “meets requirements and
indicates an adequate approach and understanding of the requirements”; 2) possesses strengths
and weaknesses that “are offsetting or will have little or no impact on Contract performance”;
and 3) reflects a risk of “unsuccessful performance [that] is no worse than moderate.” Id.
13 In its MJAR, CollabraLink presses a disparate treatment argument that is derivative of its
argument that it should have been assigned an “Outstanding” rating for Factor 1. CollabraLink
MJAR at 22. It claims unequal treatment based on the fact that it earned more strengths that the
agency characterized as potentially yielding the most valuable and beneficial results than seven
offerors who received an “Outstanding” rating. CollabraLink’s disparate treatment argument
fails because, as shown above, the number of strengths and weaknesses assigned to a proposal
was not the determinant of its adjectival ratings. Further, the proposals themselves are materially
different.
CollabraLink is correct that the PSEB Consensus Report states that it assigned
CollabraLink an “Acceptable” rating for Problem Statement No. 4. AR Tab 172 at 29552. This
rating is consistent with the narrative in the Consensus Report, which stated that the “[p]roposal
meets requirements and indicates an adequate approach and understanding of the requirements,”
and that the “two strengths, one significant weakness, and one weakness are offsetting.” Id.
CollabraLink is also correct that the SSEB report incorrectly represented that the PSEB’s
final rating for Problem Statement No. 4 was “Marginal.” AR Tab 61 at 3699. 14 But this error is
of no moment because the SSEB report states that its final rating for Problem Statement No. 4
was “Marginal” and, most significantly, provides a narrative that is consistent with a “Marginal”
and not an “Acceptable” rating. Id. (stating that “[p]roposal does not clearly meet requirements
and has not demonstrated an adequate approach and understanding of the requirements,” that
“[t]he proposal contained two (2) strengths, one (1) significant weakness, and one (1) weakness,
but considering the specific strengths, weakness, and significant weakness, the weakness and
significant weakness are not offset by the strengths,” and that “the overall risk of unsuccessful
performance is high”). The “Marginal” rating was then reflected in the SSAC and the SSA
reports. AR Tab 63 at 4627; AR Tab 65 at 4659.70.
To be sure, there is nothing in the SSEB report which reflects that it understood that the
“Marginal” rating it assigned was a change from the rating the PSEB assigned. But the
“Marginal” rating is, as noted, supported by the narrative in the SSEB report. Indeed,
CollabraLink does not even argue that it was entitled to an “Acceptable,” rather than a
“Marginal” rating for Problem Statement No. 4. Whatever error the SSEB made by inaccurately
recording the PSEB’s rating for Problem Statement No. 4 is therefore a harmless one that does
not undermine the rationality of the agency’s final rating decision.
c. The Agency’s Best Value Tradeoff Analysis
The grounds upon which CollabraLink challenges the agency’s best value tradeoff
analysis are similar to those presented by Tapestry and DVS. It argues that DISA’s best value
analysis was flawed because it relied exclusively on adjectival ratings without considering the
underlying technical merits of the proposals. CollabraLink MJAR at 30. It further contends that
the agency did not adequately compare proposals and failed to meaningfully consider price. Id. at
30, 35.
14 The Court notes that the SSEB Chair’s worksheets characterize the PSEB’s rating
inconsistently. A PSEB rating for Problem Statement No. 4 of “Acceptable” is reflected at AR
Tab 173 at 29850. The rating for the same problem statement is elsewhere recorded as
“Marginal.” Id. at 29841. The Court is not sure what to make of the worksheets. It notes,
however, that the worksheet which reflects a “Marginal” rating contains a remark which states
that the two strengths it received under Problem Statement No. 4 “may not offset” the one
weakness and one significant weakness. Id. This suggests to the Court that the Chair may have
been explaining why it was necessary to assign a lower rating for Problem Statement No. 4 than
the “Acceptable” rating the PSEB assigned.
As the Court has explained, the agency is entitled to great deference when it decides
which proposals reflect the best value to the government. Further, the Court has already
addressed at length and rejected the kind of generic arguments that CollabraLink makes
regarding whether the agency gave appropriate consideration to price and/or whether it relied
excessively on adjectival ratings. Therefore, the Court concludes that CollabraLink’s arguments
provide no basis for overturning the agency’s conclusion that it was not entitled to receive an
award.
F. Sealing Technologies, Inc. (Case No. 19-1296C)
The Agency’s Evaluation and Award Decision
a. Initial Evaluation
Sealing Technologies, Inc. (“Sealing”) submitted its proposal to DISA on April 4, 2017.
AR Tab 154. The IEB assigned the proposal five strengths and two weaknesses under Factor 1,
which resulted in an overall rating of “Good.” AR Tab 65d at 5542–44. For Factor 2, the PPEB
found “Not Relevant” two of the three past performance references Sealing submitted. Id.
at 5545. It concluded that Sealing had not complied with the Solicitation’s instructions to supply
task order numbers for references that involved performance under an IDIQ contract. Id. The
third reference was found “Somewhat Relevant” and assigned a quality rating of “Very Good.”
Id. The PSEB assigned the proposal three strengths, two weaknesses, and one significant
weakness under Problem Statement No. 3, which resulted in a “Marginal” rating, id. at 5546–47,
and one strength and no weaknesses for Problem Statement No. 4, which earned it a “Good”
rating, id. at 5547–48. Finally, under Factor 4, Sealing received an “Outstanding” rating based on
the assessment of six strengths and no weaknesses. Id. at 5549–50.
Upon review, the SSEB made one change to the ratings the TEBs assigned, downgrading
Sealing’s rating for Problem Statement No. 4 from “Good” to “Acceptable.” Id. at 5548. The
SSEB explained that although Sealing’s response to Problem Statement No. 4 earned one
strength and no weaknesses, “when considering the entire proposal for this factor . . . , the
proposal did not demonstrate more than an adequate approach and understanding of the
Government requirement for this evaluation factor.” Id.
Sealing’s final ratings as ratified by the SSA were as follows:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Neutral
Factor 3 – PS3 Rating: Marginal
Factor 3 – PS4 Rating: Acceptable
Factor 4 – Small Business: Outstanding
AR Tab 65 at 4659.86.
Sealing’s proposed price was [***], which was the [***] out of the ninety-nine proposals.
Id. The SSA agreed with the SSAC recommendation not to award Sealing a contract on the
grounds that the “proposal was priced in the upper half of all of the Offerors” and “was not one
of the highest technically rated proposals,” and so did “not represent a best value to the
Government.” AR Tab 65 at 4659.88–.89.
b. Remand
After this protest was filed, the agency requested and the Court granted a remand of
Sealing’s protest, among others, to consider whether it erred 1) “in its application of the
Solicitation’s criteria with regard to Problem Statement No. 3”; and 2) in finding two of
Sealing’s past performance references “Not Relevant” under Factor 2. AR Tab 174 at 30001
(SSA decision on remand).
On remand, the SSA reversed its earlier decision to assign a weakness and a significant
weakness to Sealing’s responses to Problem Statement No. 3 and upgraded its rating for that
problem statement from “Marginal” to “Acceptable.” Id. at 30012. On the other hand, the SSA
sustained the agency’s original decision finding the past performance references non-compliant
with the Solicitation and therefore not relevant. Id. at 30010.
Notwithstanding the upward rating adjustment for Problem Statement No. 3, the agency
again concluded that Sealing should not receive a contract award. Id. at 30012 (observing that
Sealing’s proposal “[was still not] amongst the most highly rated or the lowest priced and does
not represent a best value to the Government”). The agency further stated that even if it assumed
that one of Sealing’s past performance references was relevant, and raised its Factor 2 rating to
“Satisfactory,” it would still not have selected Sealing for an award. Id. at 30012. Thus, the
agency reaffirmed its initial determination that “Sealing Tech’s proposal d[id] not merit selection
for award.” Id.
As a result of the agency’s review on remand, Sealing’s final ratings are as follows:
Factor 1 – Innovation: Good
Factor 2 – Past Performance: Neutral [also considered as Satisfactory]
Factor 3 – PS3 Rating: Acceptable
Factor 3 – PS4 Rating: Acceptable
Factor 4 – Small Business: Outstanding
Id. at 30010.
Sealing’s Protest
a. Alleged Errors in Factor 1 Evaluation
Sealing contends that the agency committed several errors in conducting its evaluation of
Sealing’s proposal under Factor 1. For the reasons set forth below, the Court concludes that
Sealing’s arguments lack merit.
i. Weakness regarding measuring the effectiveness of
innovation efforts
Under the “Corporate Philosophy/Culture on Innovation” aspect of Factor 1, the
Solicitation required offerors to explain: “How [they] assess Innovation? Measure it? Track it?”
and “What are the methods for measuring the effectiveness of Innovation efforts?” AR Tab 5
at 377. DISA concluded that Sealing’s proposal “does not showcase an adequate method for
measuring the effectiveness of Innovation efforts” and assigned it a weakness on that basis. AR
Tab 65d at 5543. This “flaw,” the agency explained, increased the risk that Sealing would fail to
innovate “because an Offeror without a clear method for measuring the effectiveness of
innovation efforts is less likely to succeed in performing SETI tasks that require mature and
detailed measures to manage complex system development in future SETI task orders.” Id.
at 5543–44.
Sealing challenges the assignment of this weakness on several interrelated grounds. First,
Sealing complains that the Solicitation did not set forth any metrics or standards for determining
the adequacy of an offeror’s proposed method for measuring the effectiveness of innovation. Pl.
Sealing Technologies Inc. 2d Am. Mot. for J. on the Admin. R. & Supp. Mem. of Law at 3
(“Sealing MJAR”), ECF No. 102. Therefore, it contends, the agency improperly relied on
“[u]nstated [e]valuation [c]riteria,” id., when it assigned the proposal a weakness for failing to
“showcase an adequate method for measuring the effectiveness of innovation efforts,” id.
(quoting AR Tab 65d at 5543–44).
This line of attack fails. While it is well established that an agency is required to evaluate
proposals based only on the criteria stated in the Solicitation, to show a violation of that
requirement a protester must demonstrate, among other things, that the agency “used a
significantly different basis in evaluating the proposals than was disclosed.” Wellpoint Military
Care Corp. v. United States, 144 Fed. Cl. 392, 404 (2019), aff’d, 953 F.3d 1373 (Fed. Cir. 2020)
(citing Academy Facilities Mgmt. v. United States, 87 Fed. Cl. 441, 470 (2009)). Here, as noted,
the Solicitation instructed offerors to describe how they assessed, measured, and tracked
innovation and also to state “the methods for measuring the effectiveness of Innovation efforts.”
AR Tab 5 at 377. It could therefore have been no surprise to Sealing that the agency would
evaluate its proposal to judge the quality of those methods.
Indeed, the Court concludes that Sealing’s quarrel is not really with the use of unstated
evaluation criteria, but with the agency’s explanation of why it found inadequate Sealing’s
description of its method for measuring the effectiveness of innovation efforts. In fact, the thrust
of the arguments in Sealing’s MJAR is that—contrary to the agency’s view—its proposal
“clearly” included “a thorough and systemic approach for measuring the effectiveness of
Innovation efforts.” Sealing MJAR at 4. In other words, Sealing challenges the reasonableness of
the agency’s determination that its proposal did not include the kind of “mature and detailed
measures” needed “to manage complex system development in future SETI task orders.” AR
Tab 65d at 5543–44.
As this Court has noted, the scope of its review of this kind of exercise of technical
judgment is extremely narrow. See E.W. Bliss Co., 77 F.3d at 449 (stating that “the minutiae of
the procurement process in such matters as technical ratings . . . involve discretionary
determinations of procurement officials that a court will not second guess”). The Court’s job is
not to decide whether or not Sealing’s proposal “showcase[d] an adequate method for measuring
the effectiveness of Innovation efforts.” AR Tab 65d at 5543. It is instead to determine whether
the agency had a rational basis for its decision that the methods Sealing proposed were not
adequate. And so long as the result the agency reached is not an irrational one, and “the
agency’s path” to that result “may reasonably be discerned,” the agency’s determination must be
upheld. Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43 (quoting Bowman Transp., 419 U.S. at 286).
The agency’s decision here passes this modest test. As explained in its proposal,
Sealing’s method of measuring the effectiveness of innovation efforts focuses on the [***]. AR
Tab 154 at 23831. The proposal stated that the [***]. Id. “[***],” the proposal states, “[***].” Id.
Further, [***]. Id. [***]. Id.
The agency’s conclusion that Sealing’s methodology was not “mature and detailed”
enough to “manage complex system development in future SETI task orders” is facially rational.
AR Tab 65d at 5544. Sealing’s proposal emphasizes the procedures that it uses when tracking its
innovation efforts, but does not offer much explanation regarding what metrics, if any, it uses to
measure their effectiveness.
Sealing observes that the agency assigned a strength to awardee Innoplex’s proposal
where it also [***]. Sealing MJAR at 8. Therefore, Sealing argues, assigning its proposal a
weakness reflects disparate treatment.
But the two proposals are clearly distinguishable, notwithstanding that both mention the
[***]. Innoplex received a strength for its “comprehensive approach to assessing innovation, and
measuring the effectiveness of those innovation efforts.” AR Tab 65d at 5152. The agency
explained that Innoplex’s [***]. Id.; see also AR Tab 142 at 17444 (Innoplex proposal)
(describing [***], see id. at 17440–42).
In short, Sealing’s attack on the agency’s decision to assign it a weakness lacks merit. Its
contention that the agency’s evaluation of its proposal in this regard reflected disparate treatment
is even less persuasive. Its remaining arguments regarding the agency’s evaluation of its
methodology for measuring the effectiveness of innovation have been considered, and the Court
finds them unpersuasive. Therefore, the Court rejects Sealing’s argument that it should have
received a strength rather than a weakness for this aspect of its proposal.
ii. The agency’s failure to assess a strength to Sealing for
its physical and virtual investment in laboratory/testing
spaces
Under the Physical Investment/Dedicated Resources/[]Virtual[] Investment aspect
of the Factor 1 evaluation criteria, the Solicitation required offerors to describe their
“physical investment in Laboratory/Testing space.” AR Tab 5 at 377. They were also
instructed to “[d]escribe the size, locations and uses of these spaces in detail.” Id. In
addition, offerors were directed to “[d]escribe other dedicated resources available to the
company, their size, location and uses of the resources.” Id. Offerors were permitted to
“include employees whose main job is invention/Innovation” among the resources
identified. Id. In addition, they were advised to describe, if applicable, “the company’s
‘Virtual Investment’ such as cloud technology for ‘lab space’ and monetary investment of
non-physical assets or virtual models.” Id.
Sealing claims that the agency engaged in disparate treatment when it assigned a
strength to BCMC’s proposal based on its response to this prompt, but did not similarly
assign a strength to Sealing’s proposal. Sealing MJAR at 9. According to Sealing, its
proposal described a laboratory similar to BCMC’s that had a similar capability. Id.
Sealing’s observations and assertions do not establish disparate treatment.
Sealing’s proposal states that it [***]. AR Tab 154 at 23833. [***]. Id. The
proposal states that [***]. Id. Further, the proposal notes that [***]. Id. In addition,
Sealing stated, [***]. Id. The proposal further [***]. Id. Finally, the proposal [***]. Id.
In its proposal, BCMC explains that [***]. AR Tab 132 at 13106. The proposal [***]. Id.
BCMC notes that [***]. Id. at 13107. Its proposal also [***]. Id.
The agency assigned BCMC a strength for this aspect of its proposal because it [***]. AR
Tab 65d at 4837. DISA explained that BCMC’s [***]. Id. at 4838.
As is readily apparent, the features of the labs described in the proposals are different.
Sealing’s argument focuses on some of the high-level similarities between the proposals—i.e.,
[***]. The gravamen of its argument is that the differences between the features of the labs are
not material and that if a strength was assigned to one proposal, it was irrational not to assign it
to the other. But Sealing does not contend that the features of the labs are substantively
indistinguishable. Therefore, the Court cannot “comparatively and appropriately analyze the
agency’s treatment of proposals without interfering with the agency’s broad discretion in these
matters.” Office Design Grp., 951 F.3d at 1373. And to state the obvious, the Court lacks the
technical expertise to pass judgment as to the relative value of the labs anyway. Sealing’s
disparate treatment argument must therefore be rejected.
iii. Agency’s failure to assess a strength for Sealing’s
Innovation Council
Sealing alleges that it was arbitrary for the agency to assign NetCentric a strength based
on its proposal to use a [***]. Sealing MJAR at 12. This contention lacks merit for any number
of reasons, including that the purposes of the [***] appear to be different, and that NetCentric
provided a far more detailed description of the work of its [***].
Sealing’s proposal, as noted, includes [***]. According to the proposal, its [***]. AR
Tab 154 at 23831. [***]. Id. It [***]. Id. at 23831–32. It also [***]. Id. at 23832.
NetCentric described [***]. AR Tab 150 at 21353. [***]. Id. at 21353–54. [***]. Id.
[***]. Id. at 21354. [***]. Id.
The agency assigned a strength to this aspect of NetCentric’s proposal. It concluded that
[***]. AR Tab 65d at 5359.
The description of tasks performed by NetCentric’s [***] is highly detailed. NetCentric’s
proposal shows that [***]. The discussion of Sealing’s [***] is less extensive and, more
importantly, Sealing’s [***]. See AR Tab 154 at 23831. The proposals are not substantively
indistinguishable. The agency’s decision to assign a strength to NetCentric’s proposal, but not to
Sealing’s, is facially rational and Sealing’s disparate treatment argument lacks merit.
b. Alleged Errors in Factor 2 Evaluation
In its initial evaluation, DISA designated two out of Sealing’s three past performance
references “Not Relevant” because for each one Sealing had been a major subcontractor on IDIQ
contracts and had listed the entire contracts as references rather than specifying a particular task
order on which it had performed work. AR Tab 65d at 5545; AR Tab 174 at 30047. Sealing’s
proposal did not comply with the directions in the Past Performance Questionnaire requesting
that offerors “include task number if applicable.” AR Tab 1 at 147 (§ I, no. 3). Further, by
referring to the entire IDIQ contract, Sealing’s past performance references defied the explicit
warning in the Solicitation that “Individual Task Orders under an ID/IQ Contract are each
considered to be one past/present performance effort.” AR Tab 5 at 379. In fact, during the
question and answer period, the agency responded “no” when an offeror asked whether the
government would “consider allowing an offeror who is the only prime on a single award ID/IQ
to use the ID/IQ contract rather than individual task/delivery orders?” Id. at 343 (no. 379).
Because Sealing failed to comply with these requirements as to its first two references, the PPEB
stated, it “was unable to determine the scope and level of effort involved in an individual task
order.” AR Tab 171 at 29214, 29216.
As noted above, at the government’s request, this Court remanded Sealing’s protest to
consider whether the agency erred in not evaluating the two past performance references. AR
Tab 174 at 30001. Specifically, DISA explained that “[b]ecause the non-compliant past
performance submissions w[ere] signed by DISA personnel,” it had decided “to reassess the past
performance rating as information that may have been close at hand to DISA at the time of past
performance evaluation.” Id. at 30002; see Int’l Res. Recovery, Inc. v. United States, 64 Fed. Cl.
150, 163 (2005) (observing that “some [past performance] information is simply too close at
hand to require offerors to shoulder the inequities that spring from an agency’s failure to obtain
and consider the information”) (quoting Int’l Bus. Sys., Inc., B-275554, 97-1 CPD ¶ 114, at 4
(Comp. Gen. Mar. 3, 1997)).
For the first reference, the CO conducted a search using the IDIQ number Sealing
provided. Id. at 30047. That search yielded some 12,396 “actions” issued pursuant to the base
IDIQ contract. Id. The CO concluded that “[w]hile not all actions were direct delivery orders” it
was “not practicable to locate the needle in a haystack task order to which Sealing Tech’s past
performance submission referred.” Id. He further explained that “it would be difficult to
determine which of the delivery orders was at issue particularly where the reference pointed to a
subcontracted piece.” Id. Thus, the CO could not make a relevancy or quality assessment of
Reference 1. Id. The reference therefore retained its “Not Relevant” rating. Id.
The Past Performance Questionnaire for Sealing’s second reference, as noted, was
prepared by a DISA employee for a DISA contract. Id. The CO searched the DISA contract
writing system using the IDIQ number Sealing provided, which produced 806 associated
documents and twenty-two different task orders. Id. The CO could not determine the relevant
task order number based on this search, so he went further by asking the DISA employee who
completed the Past Performance Questionnaire for more information. Id. at 30047–48. That
individual apparently based the evaluation in the Past Performance Questionnaire upon Sealing’s
performance on several task orders as she “provided three different contracts/orders/tasks that
she associated with her assessment of Sealing Tech.” Id. The Solicitation provided, however, that
offerors were to submit past performance references for no more than three recent
contracts/orders. AR Tab 5 at 379. The agency therefore again found the reference
non-compliant. AR Tab 174 at 30048. It noted that it would be unfair to consider this reference
because the limitation on the number of references “was enforced across all offerors and none
were permitted to have multiple references considered.” Id. The agency reaffirmed the rating of
“Neutral Confidence” for Factor 2. Id. at 30048–49.
Sealing argues that the agency did not need the task order numbers to evaluate its
references because “the bulk of what the Agency needed to evaluate SealingTech’s past
performance was located in the substance of the proposal, not the reference number.” Sealing
MJAR at 20–21. The government responds that—under the reasoning of Blue & Gold Fleet, L.P.
v. United States, 492 F.3d 1308, 1313 (Fed. Cir. 2007)—it is too late for Sealing to argue that the
requirement that it provide task order numbers was unnecessary. Gov’t MJAR at 100–01.
Further, the government cites the court’s decision in By Light Prof’l IT Servs. v. United States,
131 Fed. Cl. 358, 368 (2017), rejecting a similar argument on the merits. Id. at 101.
The Court rejects the government’s waiver argument. The Solicitation states that “[n]on-
conformance with the instructions provided in this Information to Offerors may result in removal
of the proposal from further evaluation.” AR Tab 5 at 367. The Court does not understand
Sealing to be challenging the Solicitation’s requirement that a task order number be supplied
with the past performance reference, but rather the reasonableness of the agency’s decision to
designate its references “Not Relevant” solely because it did not comply with that requirement.
Here, the Court concludes that it was reasonable for the agency to determine that—
without the task order numbers—it could not complete its evaluation of the past performance
reference. Among other things, it could not confirm the scope and level of effort involved in the
performance example, whether the example involved the entire IDIQ contract (which would be
impermissible), or whether it represented work performed on more than one task order (which is
also impermissible). Indeed, as the agency discovered on remand, the Past Performance
Questionnaire completed for reference two did, in fact, involve more than one task order.
The agency required offerors to provide task order numbers to facilitate its review of the
past performance references the offerors submitted. Sealing failed to comply with that
requirement. After initially rating the references “Not Relevant” based on Sealing’s
non-compliance, the agency took a second look and was still unable to confirm to which task
order or orders References 1 and 2 pertained. The Court concludes therefore that the agency
acted within its discretion when it found those references non-compliant with the Solicitation and
treated them as “Not Relevant.” Sealing’s challenge to its “Neutral” rating for Factor 2 is
therefore without merit.
c. Problem Statement No. 3: Weakness Regarding Risk
Management
Sealing’s second challenge to its “Marginal” rating on Problem Statement No. 3 concerns
the agency’s decision to assign it a weakness based on flaws in its approach to risk management.
The Solicitation required offerors to submit a “Risk Management Plan” (“RMP”) that explained
how the offeror would “mitigate uncertainties that include, but are not limited to, unknown
conditions, revised mission requirements, Innovation risk, [and] schedule risks,” as well as “cost
and time growth” that may result from such uncertainties. AR Tab 5 at 381. Offerors were also
instructed to discuss how they planned to mitigate delays from unforeseen problems and make up
time to complete the project by the scheduled completion date. Id. In addition, offerors were
required to identify risks that could delay or hinder completion of the project and to propose
mitigation measures to address those risks. Id.
The PSEB assigned a weakness to Sealing’s RMP. It observed that in its response to
Problem Statement No. 3, AR Tab 154 at 23881, Sealing had “described an approach to risk that
differs from traditional risk management methods,” but that it “did not discuss what the
mitigations would be for the risks that [it] identified,” AR Tab 65d at 5546. In its MJAR, Sealing
challenges the accuracy of the agency’s assertion, referencing the [***] it included in its
proposal. Sealing MJAR at 25.
The Court is not persuaded that the agency lacked a rational basis for assigning the
weakness at issue. Sealing’s proposal identified the following risks: [***]. AR Tab 154 at 23881.
The risk descriptions set forth in the tables contained in Sealing’s proposal do not identify any of
these circumstances as risks, and therefore do not include strategies to mitigate those risks. And
Sealing fails to explain either in its proposal or its MJAR how the risk descriptions in the table
correspond to the risks it identified in its proposal. The Court therefore lacks any grounds for
finding that the agency’s decision to assign a weakness to the proposal lacks a rational basis.
d. Problem Statement No. 4
The PSEB assigned Sealing a rating of “Good” for Problem Statement No. 4, on the
grounds that its response demonstrated a “thorough” approach and understanding of
requirements. AR Tab 65d at 5547–48. The SSEB, however, downgraded the rating to
“Acceptable.” Id. at 5548. It acknowledged that the one strength Sealing earned “did numerically
outweigh weaknesses, as there was no identified weakness.” Id. Nonetheless, the SSEB
concluded, “when considering the entire proposal for this factor, even in light of the specific
merit of the strength, the proposal did not demonstrate more than an adequate approach and
understanding of the requirements.” Id. Therefore, the SSEB concluded that Sealing’s response
to Problem Statement No. 4 did not merit a “Good” rating. Id.
Sealing challenges the SSEB’s decision. It observes that the SSEB similarly downgraded
all offerors that the PSEB had assigned a “Good” rating and who had earned one strength and no
weaknesses for Problem Statement No. 4. Sealing MJAR at 31–32. It also notes that each time it
did so the SSEB used the same justification that the proposal reflected an “adequate” as opposed
to “thorough” approach to requirements. Id. at 32. It contends that the SSEB’s real reason for
downgrading all of the proposals was to make them consistent with the “Acceptable” rating the
PSEB assigned to another offeror that had earned one strength and no weaknesses (Credence).
Id. at 32. Sealing states that the SSEB’s approach was “problematic,” given the Solicitation’s
representation that the SSEB would not compare proposals against one another. Id. at 32; see AR
Tab 5 at 394. It also notes that some of the training materials provided to the SSEB advised that
it should “[a]void technical leveling or transfusion of one parties’ technical approach with
another.” Sealing MJAR at 31 (quoting AR Tab 169 at 27967). Finally, it notes that the praise
the PSEB offered for its proposal shows that the PSEB, at least, found that it had a “thorough”
understanding of Problem Statement No. 4. Sealing MJAR at 33.
Sealing’s argument—that the SSEB downgraded Sealing’s rating for Problem Statement
No. 4 in order to make it consistent with the rating the PSEB assigned to Credence—is pure
speculation and is not supported by anything in the record. Indeed, the argument is
counterintuitive. For if rote consistency were its goal, the SSEB could have more easily
accomplished such consistency by upgrading Credence’s rating, rather than lowering the ratings
of the other offerors.
Further, the SSEB was charged with reviewing the TEB evaluations to ensure an
“equitable, impartial, and comprehensive evaluation” against the Solicitation requirements. AR
Tab 5 at 393. In so doing, the SSEB would necessarily have to review the adjectival ratings
assigned and—to the extent that its assessment was different from that of the TEBs—adjust those
ratings. See L-3 Commc’ns Integrated Sys., L.P. v. United States, 79 Fed. Cl. 453, 462 (2007)
(quoting Speedy Food Serv. Inc., B-258537, 95-2 CPD ¶ 111 (Comp. Gen. May 2, 1995))
(noting that the court generally “will not object to the higher-level official’s judgment, absent
unreasonable or improper action, even when the official disagrees with an assessment made by a
working-level evaluation board or individuals who normally may be expected to have the
technical expertise required for such evaluations”). As the Court has noted earlier in this opinion,
the fact that the SSEB disagrees with a TEB regarding whether a proposal reflects an “adequate”
or a “thorough” understanding of the Solicitation’s requirements does not, in and of itself, signal
that either body’s determination was an irrational one. AR Tab 5 at 391. This protest ground
therefore lacks merit.
e. Assignment of Adjectival Ratings
Finally, Sealing complains that the agency did not assign adjectival ratings “evenly”
because proposals that had the same number of strengths and weaknesses did not always receive
the same adjectival rating. Sealing MJAR at 35. Specifically, it observes that it received a
“Marginal” rating for Factor 3, Problem Statement No. 3 where it earned three strengths and was
assigned two weaknesses and one significant weakness. Id. On the other hand, GOVCIO
received the same number of strengths and weaknesses under Factor 1, but was nonetheless rated
“Acceptable.” Id. Sealing also observes that the agency gave Cyber Data Technologies a
“Marginal” rating under Problem Statement No. 3, even though it had earned fewer strengths
than Sealing, and was assigned more weaknesses and significant weaknesses. Id. Finally, it notes
that Tenica was assigned an “Acceptable” rating for its Problem Statement No. 4 where it earned
two strengths and was assigned two weaknesses, one of which was “significant.” Id. at 35–36.
As the Court has previously observed, agencies are not required to, and in fact are
prohibited from, assigning adjectival ratings solely on the basis of the number of strengths and
weaknesses a proposal receives. Further, Sealing’s call for adjectival ratings to be assigned
“evenly” ignores that different TEBs evaluated each factor separately and that, as the Court has
previously explained, the criteria for awarding particular adjectival ratings varied based on the
factor being evaluated. Id. at 35. The issue is therefore not whether the ratings were assigned
“evenly” but whether they were consistent with the criteria set forth in the Solicitation. Id.
For example, a rating of “Marginal” is appropriate under Factor 3 where the proposal has
“one or more weaknesses which are not offset by strengths.” AR Tab 5 at 391. While Cyber Data
was assigned more weaknesses than Sealing for Problem Statement No. 3, the ratings criteria
were satisfied because both proposals were assigned weaknesses that were not offset by
strengths. And although Tenica had two strengths, one weakness, and one significant weakness
for Problem Statement No. 3, it was still eligible to receive an “Acceptable” rating based on the
agency’s conclusion that—notwithstanding that its weaknesses were not offset by strengths—the
imbalance would “have little or no impact on Contract performance.” Id.
In short, the agency followed the Solicitation’s criteria in assigning adjectival ratings.
Sealing’s challenge to the agency’s decision-making process therefore lacks merit.
G. Foxhole Technology, Inc. (Case No. 19-1168C)
The Agency’s Evaluation
Foxhole Technology, Inc. (“Foxhole”) submitted its proposal on April 4, 2017. AR
Tab 140 at 16165. The IEB assigned the proposal one strength and one weakness under Factor 1,
earning Foxhole an “Acceptable” rating for that factor. AR Tab 65d at 5076. The proposal
received a “Substantial Confidence” rating under Factor 2. Id. at 5078. For Factor 3, Problem
Statement No. 3, the PSEB gave the proposal an “Outstanding” rating on the basis of its three
earned strengths and no weaknesses. Id. at 5079. Foxhole also received an “Outstanding” rating
for Factor 3, Problem Statement No. 4, earning two strengths and no weaknesses. Id. at 5080. It
also earned an “Outstanding” rating for Factor 4 based on its seven strengths and no weaknesses.
Id. at 5082–83.
The SSEB concurred with the TEBs on all ratings except the one assigned to Problem
Statement No. 4. It downgraded that rating from “Outstanding” to “Good” on the grounds that
“the specific merit of the two (2) strengths identified [for Problem Statement No. 4] did not
indicate an exceptional approach and understanding of the requirements; rather, it indicated a
thorough approach and understanding of the requirements.” Id. at 5080–81.
The final ratings assigned to Foxhole’s proposal by the agency were as follows:
Factor 1 – Innovation: Acceptable
Factor 2 – Past Performance: Substantial
Factor 3 – PS3 Rating: Outstanding
Factor 3 – PS4 Rating: Good
Factor 4 – Small Business: Outstanding
Id. at 5084. Foxhole’s proposal was priced at [***]. AR Tab 63a at 4658.
The SSAC observed that Foxhole’s proposal “was priced in the upper half of all of the
Offerors” and “was not one of the highest technically rated.” AR Tab 65e at 5865. It therefore
recommended against awarding a contract to Foxhole.
The SSA agreed. He explained that “the [S]olicitation was looking for Innovative
Offerors.” AR Tab 65 at 4659.94. He acknowledged Foxhole’s established track record of past
performance and that it had also shown its ability to solve the problems set forth in the problem
statements. Nonetheless, Foxhole had failed to achieve a rating higher than “Acceptable” in the
most important factor, and for that factor it was assigned “one weakness that was merely offset
by only one strength.” Id. He also noted Foxhole’s relatively high price, which he stated he was
not willing to trade off given that Foxhole had demonstrated a greater risk of unsuccessful
performance than those offerors that achieved either a “Good” or “Outstanding” rating under
Factor 1. Id.
Foxhole’s Protest
In its amended MJAR, Foxhole challenges several aspects of the agency’s evaluation of
its proposal. It argues that the agency should have assigned it at least a “Good” rating for
Factor 1 because the weakness the agency assigned its proposal was unjustified and because the
agency failed to recognize three additional strengths contained in the proposal. Pl. Foxhole
Tech., Inc.’s Am. Mot. for J. on the Admin. R. (“Foxhole MJAR”) at 22–23, ECF No. 108. In
addition, Foxhole mounts a more global challenge to the agency’s entire source selection
process, similar to the challenges mounted by DVS, Tapestry, and CollabraLink. It contends that
the agency did not conduct an adequate comparative assessment of the proposals, that it placed
too much emphasis on the offerors’ Factor 1 ratings, and that it failed to meaningfully consider
price. Id. at 6. For the reasons set forth below, the Court finds that these arguments lack merit.
a. Factor 1 Evaluation
i. Weakness regarding cost of failure
Under section L.4.2.3.1 of the Solicitation, offerors were required, among other things, to
discuss their approach to risk. They were instructed to explain how they “manage risk in an
innovative environment”; how they “determine the level of acceptable risk”; and “how [they]
and DoD should share risk.” AR Tab 6 at 376. They were also asked to describe “the cost of
failure,” “who should pay for it,” and “how much failure” the government should accept. Id.
In its response to the agency’s questions about the cost of failure, Foxhole stated that
[***]. AR Tab 140 at 16224–25. Foxhole then discussed [***]. Specifically, Foxhole stated that
[***]. Id. at 16225. It proposed that [***]. Id.
The agency assigned a weakness to Foxhole’s response because it found that the proposal
“did not provide adequate information as to what the cost of failure is, who should pay for the
cost of failure, or how much failure they believe the Government should accept.” AR Tab 65
at 4659.92. It noted that Foxhole had [***] but that it did not [***] who should pay for failure or
how much failure the Government should accept.” Id. This flaw, the agency observed, “raises the
risk of failure to be innovative” because Foxhole’s “approach to failure sharing . . . is not well-
developed and could potentially impede innovation on future SETI projects.” Id.
Foxhole objects to the agency’s assessment of a weakness. It contends that the “cost of
failure” language in the agency instructions was ambiguous and that it “reasonably interpreted
the question to refer to literal cost.” Foxhole MJAR at 23–24. It further argues that it was
“arbitrary and capricious for the Agency to assign a weakness to Foxhole for failing to interpret
‘cost of failure’ as the Agency intended.” Id. at 24.
Foxhole’s argument is something of a non sequitur. Foxhole’s discussion of [***] was
unresponsive to the agency’s questions. Other than its generic observation that the “[l]ack of
innovative input increases the mediocrity of the final output, and never tests the waters of ‘what
if’ and ‘why not,’” AR Tab 140 at 16224, Foxhole’s response was not about the cost of failure at
all (whether “literal” or figurative). Nor is there any discussion in its proposal about who should
pay the costs of failure or how much failure the government should accept.
The Court is not persuaded by Foxhole’s contentions that, in any event, it addressed
failure sharing within its “overall response” to the questions posed under the Corporate
Philosophy/Culture on Innovation category. Foxhole MJAR at 24. The statement Foxhole
references was that [***]. Id. (emphasis removed) (citing AR Tab 140 at 16339). But this
language (which was included in response to a different set of questions about incentivizing
innovation by employees) concerns [***]. It does not address the extent to which such costs are
ever shared with the government or how much failure the government should accept. Foxhole
has failed to persuade the Court that it was irrational for the agency to assign it a weakness based
on its failure to provide an adequate response to the agency’s questions regarding the cost of
failure.
ii. Additional strengths
Foxhole also contends that the agency should have assigned it three additional strengths
under Factor 1. See Foxhole MJAR at 25–30. For example, the Solicitation provided that
proposals “may be evaluated more favorably and achieve higher ratings” where they
demonstrate, among other things, “sustained, year-after-year investment in technologies and
innovative ways to develop new capability, improve service, reduce costs and create efficiencies
[or] . . . ongoing corporate investment in tools, training, facilities, personnel and equipment.” AR
Tab 5 at 388. Foxhole argues that the frequent references in its proposal to its [***] support the
assignment of a strength under this criterion. See Foxhole MJAR at 22–23 (observing that its
proposal “included [***],” as well as descriptions of its benefits, and that [***]).
But as the Court has had frequent occasion to remark in this opinion, the decision
whether to assign a proposal a strength based on its technical quality is one that lies within the
exclusive discretion of the agency’s subject-matter experts, subject to review only under a highly
deferential rational basis standard. And while Foxhole argues that it should have been assigned a
strength for its [***], it provides the Court no basis for concluding that by not doing so the
agency violated the terms of the Solicitation or made a decision that lacked a rational basis.
Nor does Foxhole demonstrate that the agency engaged in disparate treatment by not
assigning it a strength based on its Innovation Laboratory. Its contention that RedTeam,
ValidaTek, and Rigil were all awarded a strength based on their possession of, or access to, [***] is
unavailing because Foxhole does not show that the [***] were substantively indistinguishable.
See id. at 26–28.
Rigil’s proposal is not in the record. And the proposals of the other two offerors contain
[***]. Compare AR Tab 140 at 16345 (Foxhole proposal), with AR Tab 152 at 22163–64
(RedTeam’s proposal) and AR Tab 163 at 27255 (ValidaTek proposal). In fact, the agency relied
on the [***] in assigning strengths to the proposals. See, e.g., AR 30032 (observing that the
RedTeam proposal [***]).
Foxhole similarly contends that it should have earned two strengths under the “History of
Engineering and Deploying Innovative Solutions” category. Foxhole MJAR at 28. The first
strength the agency should have assigned, according to Foxhole, was based on [***]. Id. at 28. It
claims that [***]. Id. (referencing AR Tab 5 at 388). But as with its contentions regarding [***],
this argument represents a mere disagreement with the discretionary judgments of the agency’s
experts. Even if the Court possessed the technical expertise to offer its own judgment, it lacks
the authority to do so under the highly deferential standard of review applicable to such
determinations.
The Court similarly rejects Foxhole’s argument that it should have been assigned another
strength for its [***]. Id. at 29. Foxhole notes that the agency found that the work Foxhole
performed on such a project was of “Very Good” quality when it was submitted as a past
performance reference under Factor 2. Id. at 30 (citing AR Tab 61 at 3861). But different
information is sought under Factors 1 and 2, and for different purposes. In addition, the
evaluations are conducted by different TEBs, using different criteria, and considering only that
information which is provided by the offeror in the applicable volume. Therefore,
notwithstanding Foxhole’s views regarding the value of this experience, it was within the
agency’s discretion not to credit it with a strength.
b. The Agency’s Source Selection Determination and Best Value
Tradeoff Analysis
Finally, Foxhole contends that—even assuming that the agency reasonably assigned it
only an “Acceptable” rating for Factor 1—its protest should be sustained because the agency’s
source selection process was flawed. Specifically, it argues that the award decision was based on
“a mechanical ranking of proposals, in which offerors that received the highest ranking under
Factor 1 were all ranked first, regardless of their ratings for other factors.” Id. at 7. Relatedly, it
argues that the agency “placed [] undue weight on Factor 1” which it claims “ma[d]e the other
evaluation factors essentially meaningless.” Id. at 11–12. Finally, it argues that “in its focus on
Factor 1,” the agency “ignored the offerors’ proposed prices.” Id. at 20.
These arguments—which are similar to those made by DVS, CollabraLink, and
Tapestry—are not supported by the record. As described above, the SSA carefully compared
Foxhole’s proposal to those of other offerors with higher ratings on Factor 1 and explained
why—despite Foxhole’s impressive showing on the other technical factors—he did not view its
relatively high-priced proposal as providing best value to the government. See AR Tab 65
at 4659.94.
There is no question that proposals rated “Outstanding” under Factor 1 enjoyed a
considerable advantage in this procurement. But that was entirely consistent with the
Solicitation, which expressly stated that Factor 1 was the most important factor and which
repeatedly emphasized the agency’s “paramount” interest in innovation. See AR Tab 5 at 375
(Solicitation stating that “fostering a creative culture and driving innovation in defense of the
country[] are paramount success criteria in executing the SETI contract”).
Foxhole acknowledges that it understood that Factor 1 would be given the most weight. It
contends, however, that the quantum of weight afforded to Factor 1 was so disproportionate as to
constitute an unstated evaluation criterion. But as noted earlier in this opinion, to successfully
make this claim, an offeror must show that the agency used “a significantly different basis” than
the one stated in the Solicitation for making its decision. Wellpoint Military Care Corp., 144 Fed.
Cl. at 404. Foxhole has not made that showing here because the record does not support its
argument that the importance that the agency placed on Factor 1 was so overwhelming as to
make meaningless all other considerations, including price.
For one thing, five of the twenty-three offerors that received an “Outstanding” rating
under Factor 1 were not awarded a contract. Three of these offerors were disqualified at the
outset because their proposals received “Unacceptable” ratings for Factor 3. AR Tab 65
at 4659.4–.5. The other two offerors (one of whose proposals was the lowest priced of all ninety-
nine evaluated) were not awarded contracts because they received “Marginal” ratings on the
problem statements and lacked a record of relevant past performance. See id. at 4659.46–.48; id.
at 4659.48–.52.
The Court also finds contrary to the record Foxhole’s argument that the agency failed to
give meaningful consideration to price. Foxhole’s proposal was priced higher than fourteen of
the eighteen awardees that received “Outstanding” ratings for Factor 1. See AR Tab 174
at 30012–13. And the SSA specifically remarked on the significance of Foxhole’s relatively high
price, explaining that he was unwilling to pay a higher price for a proposal that had achieved
only an “Acceptable” rating for Factor 1. AR Tab 65 at 4659.94.
The five awardees that received “Good” ratings for Factor 1 had technical ratings for the
remaining factors that were either better than, equivalent to, or only slightly less favorable than
Foxhole’s. AR Tab 174 at 30013. And all of the awardees that received a “Good” rating for
Factor 1 were ranked lower in price. Id.
The specific examples Foxhole cites to prove that the agency gave undue weight to
Factor 1 at the expense of other technical factors do not hold water. For example, it complains
that ValidaTek, “which received an ‘Outstanding’ in Factor 1 but only a ‘Satisfactory’ in Factor
2,” was ranked higher than Foxhole, which “had a lower rating in Factor 1 but a rating two levels
above ValidaTek in Factor 2.” Foxhole MJAR at 9. But because Factor 1 was more important
than Factor 2, there was nothing illogical about the agency choosing to make an award to the
offeror whose proposal was rated two levels higher for Factor 1, rather than the proposal that was
rated two levels higher for Factor 2, particularly since [***].
Foxhole similarly states that “it is simply illogical to conclude that an offeror such as
Affinity Innovations, which received a ‘Neutral’ in Factor 2, ‘Marginal/Acceptable’ in Factor 3,
and ‘Good’ in Factor 4, was objectively a technically superior offeror than Foxhole who received
ratings two levels higher in every category other than Factor 1.” Id. at 9–10. But Foxhole fails to
mention that [***], as well as one of the lowest among all of the awardees, and that it was
significantly lower than Foxhole’s. See AR Tab 65b at 4676.
It bears noting that there were another twenty offerors that received “Good” ratings on
Factor 1 and yet did not receive an award because of their weaker ratings on one or more of the
other three technical factors. In short, while the agency gave the greatest weight to Factor 1
(consistent with the Solicitation), Foxhole’s argument that the other technical factors or price
were “essentially meaningless” is not supported by the record.
The Court has examined Foxhole’s remaining objections to the agency’s decision to
make awards to offerors that had lower ratings on Factors 2, 3, or 4 and finds them unpersuasive.
It is for the agency to decide which proposals present the best value to the government, and so
long as those determinations are not irrational and are adequately documented, the Court must
uphold them. See Galen Med. Assocs., Inc. v. United States, 369 F.3d 1324, 1330 (Fed. Cir.
2004); E.W. Bliss Co., 77 F.3d at 449. Here, the agency carefully evaluated close to one hundred
proposals using a two-level review process to assess their strengths and weaknesses and to assign
adjectival ratings to each of the four technical factors. It documented the basis for its evaluations
and rating decisions. The SSAC provided a comparative analysis of the proposals and made
recommendations to the SSA. The SSA made his best value determinations, giving the greatest
weight to Factor 1 (as provided by the Solicitation). The SSA explained his justification for
selecting some but not other offerors for an award. Foxhole’s claims regarding the overall
process the agency employed in making its best value determination are therefore rejected.
CONCLUSION
Based on the foregoing, the Court issues the following orders:
1. The government’s motion for judgment on the administrative record is
GRANTED. ECF No. 123.
2. TIA’s motion for judgment on the administrative record is DENIED.
ECF No. 106.
3. DVS’s motion for judgment on the administrative record is DENIED.
ECF No. 84.
4. Tenica’s motion for judgment on the administrative record is DENIED.
ECF No. 71.
5. Tapestry’s motion for judgment on the administrative record is DENIED.
ECF No. 80.
6. CollabraLink’s motion for judgment on the administrative record is DENIED.
ECF No. 104.
7. Sealing’s motion for judgment on the administrative record is DENIED.
ECF No. 102.
8. Foxhole’s motion for judgment on the administrative record is DENIED.
ECF No. 108.
IT IS SO ORDERED.
s/ Elaine D. Kaplan
ELAINE D. KAPLAN
Judge