OPINION AND ORDER
WOLSKI, Judge.
Plaintiff USfalcon, Inc. brings a post-award bid protest challenging a procurement decision of the United States Army Communications-Electronics Life Cycle Management Command (“CECOM” or “agency”). The agency excluded USfalcon from the competitive range in a negotiated procurement to award multiple Indefinite Delivery/Indefinite Quantity (“IDIQ”) contracts, under which awardees would have the opportunity to compete for future task orders. See Admin. R. (“AR”) at 11199. This decision was based on CECOM’s determination that USfalcon’s response to one of three sample tasks was “Unacceptable” because of a deficiency. AR at 10909-13.
Plaintiff contends that the decision to exclude it from the competitive range was arbitrary, arguing that one clause of the definition of “Unacceptable” — that “none of these conditions can be corrected without a major rewrite or revision of the proposal,” AR at 7026 — cannot reasonably apply given the nature of the identified deficiency. See Compl. ¶¶ 66-77; Pl.’s Mem. in Supp. of its Mot. for J. on the Admin. R. and for Perm. Inj. (“Pl.’s Br.”) at 1-2, 18-23. USfalcon has moved for judgment on the administrative record and for permanent injunctive relief, and the government has cross-moved for judgment on the administrative record. For the reasons that follow, the Court DENIES the motions of USfalcon, and GRANTS the cross-motion of the government.
I. BACKGROUND
A. The Solicitation
This case concerns IDIQ contracts for the Army’s Rapid Response (“R2”) Project Office, which provides “rapid means of accessing competent industry capabilities without the traditional acquisition lead time,” particularly in the servicing of government equipment. AR at 3. The procurement is for the third iteration of contracts under this office, hence the name “Rapid Response Third Generation” (“R2-3G”). See id. The R2-3G solicitation, No. W15P7T-08-R-E001 (“Solicitation”), was issued by CECOM on May 15, 2008. See AR at 1. It contemplated awards of up to ten contracts, with four reserved for small businesses whose proposals were found acceptable — including one disadvantaged business under section 8(a) of the Small Business Act, 15 U.S.C. § 637(a), and another that is a Service Disabled Veteran Owned Small Business (“SDVOSB”) concern. AR at 3.
The contract awardees would be given the opportunity to compete for task orders which may total up to $16.4 billion, with a minimum of $25,000 guaranteed each. AR at 4, 11. The work would be for the U.S. Department of Defense and other federal agencies, and state and local governments where legally authorized (such as for anti-terrorism technology or services). AR at 3. The nature of the work to be performed under R2-3G task orders “include[d] technology insertion, system integration/installation, fabrication/prototyping, testing/certification, studies/analyses, logistic support services, training and engineering support services, including re-engineering and reverse engineering, for a range of equipments and services.” Id.; see also AR at 124.
The offerors were directed to submit proposals consisting of five volumes. AR at 106. The first volume was for technical files, and was to contain the offerors’ responses to three sample tasks — including narratives of up to twenty pages in length, and schedules. AR at 107, 109. The second volume was a listing of relevant government contracts, with information organized into seven sections, for purposes of performance risk analysis. AR at 106-11. The third volume was the Small Business Participation Plan (“SBPP”), with information pertaining to seven specific elements. AR at 106, 108, 112-16. Volume IV was to contain price information, spreadsheets showing five years’ worth of labor and associated costs for a large number of labor categories, broken down and organized as Firm Fixed Price (“FFP”), Cost Reimbursement (“CR”), and Time and Materials (“T & M”) contracts. See AR at 6-8, 106, 108-09, 116-17. The fifth volume was for various representations and certifications. AR at 106, 109, 117.
The Solicitation announced that contracts would be awarded on a “best value” basis, with the agency intending to make awards after conducting discussions with offerors whose proposals were in the competitive range. AR at 4, 118. The stated evaluation factors, in order of importance, were: (1) Technical; (2) Performance Risk; (3) SBPP; and (4) Price. Id. The Technical Factor, composed of three Sample Task Sub-Factors of equal importance, was significantly more important than the Performance Risk Factor. Id. The Performance Risk Factor was slightly more important than the SBPP Factor which, in turn, was slightly more important than the Price Factor. Id. To be considered for a contract award, an offeror’s proposal needed to receive “a rating of no less than Acceptable” for the Technical Factor and Sample Task Sub-Factors, and for the SBPP Factor. Id. The Solicitation contained no definition for “Acceptable” or any other evaluation rating. See AR at 118-20.
The Solicitation explained that “[t]he Sample Tasks were designed to test the offerors[’] expertise and capabilities in performing task orders within the time constraints imposed by the R2-3G program.” AR at 119. The three sample tasks were released fifteen days before proposals were initially due to be submitted. See AR at 3621-22, 3713-37 (Amendment 0006, released July 22, 2008). Although discussions were contemplated, see AR at 4, offerors were cautioned that they “will not be given an opportunity to correct or revise a Sample Task response.” AR at 119. The Solicitation required that the technical evaluators consider three things in reviewing the sample task responses: (1) understanding of the problems; (2) feasibility of approach; and (3) realism. Id. The Solicitation defined these considerations as follows:
1. Understanding of the Problems. The proposal will be evaluated to determine the extent to which it demonstrates a clear understanding of all features involved in solving the problems and meeting the requirements, and the extent to which uncertainties are identified and resolutions proposed.
2. Feasibility of Approach. The proposal will be evaluated to determine the level of confidence provided the Government with respect to the offeror’s methods and approach in successfully meeting the requirements in a timely manner.
3. Realism. The evaluation will also consider the realism of the allocated hours, labor categories and price being proposed in the offeror’s response to the sample tasks.
Id.
The three-step process to be used by CECOM to select awardees was also described in the Solicitation. AR at 4, 118. First, all proposals were to be evaluated. Id. Next, up to six offerors in the competitive range would be selected for award, regardless of the size of the offeror. Id. Step three concerned the small business reservations. Only small businesses in the competitive range were further considered. If any small business offerors remained in the competitive range, reserve awards would be made to them in the following sequence: first, to the SDVOSB offeror with the best overall proposal; second, to the section 8(a) offeror with the best overall proposal; third, to the two offerors remaining in any small business category with the best overall proposals. Id. Small business offerors that qualified in more than one small business category would be considered for an award in each such category, if not already selected for award. Id. If no SDVOSB award was made, the government reserved the right to award an additional contract to a small business offeror in any category. Id.
B. The Source Selection Evaluation Plan
The Source Selection Evaluation Plan (“SSEP”) for the R2-3G procurement was dated May 9, 2008, and was approved by the Source Selection Authority (“SSA”) on July 7, 2008. The SSEP outlined the technical evaluation organization and responsibilities. AR at 7023. The Source Selection Evaluation Board (“SSEB”) was composed of a Chairperson, Factor Chairpersons and a team of evaluators. AR at 7024. It was charged with conducting “a comprehensive review and evaluation of proposals against the solicitation requirements and the approved evaluation criteria”; preparing and submitting its evaluation reports to the SSA and the Source Selection Advisory Council (“SSAC”); and briefing the SSA and SSAC, as requested. Id. The SSAC was to monitor and provide guidance to the SSEB, and also brief the SSA on the progress of the evaluation process. AR at 7023.
The Contracting Officer (“CO”) was to advise the SSEB, conduct “negotiations as necessary to clarify proposals received from offerors,” and make the competitive range determination “with the SSA’s approval.” AR at 7024. The SSA was “responsible for the proper conduct of the source selection process and for making the final source selection decision.” AR at 7023. He was to “ensure that ... [t]he SSEP and evaluation criteria are consistent with the requirements of the solicitation and applicable regulations.” Id. And, as already noted, he was to approve the CO’s “determination to exclude offerors from the competitive range.” Id.
The SSEP required that “[t]he non-price factors will be evaluated and rated based upon the general and specific instructions supplied in Section IV of this Plan,” and that its rating definitions will be used in the evaluations. AR at 7028. Under the referenced section of the SSEP, an offeror could receive one of four ratings for the Technical Factor: Outstanding, Good, Acceptable, and Unacceptable. AR at 7026. A proposal was “Acceptable” if it “at least meets all of the Government’s requirements, contains at least minimal detail, demonstrates at least a minimal understanding of the problems, and is at least minimally feasible (moderate to high risk).” Id. A proposal was “Unacceptable” if it “contain[ed] a major error(s), omission(s) or deficiency(ies) that indicates a lack of understanding of the problems or an approach that cannot be expected to meet requirements or involves a very high risk; and none of these conditions can be corrected without a major rewrite or revision of the proposal.” Id. A “deficiency” was defined as a “material failure of a proposal to meet a Government requirement or a combination of significant weaknesses in a proposal that increases the risk of unsuccessful contract performance to an unacceptable level.” AR at 7028.
The evaluation process was also detailed in the SSEP. After evaluators read their assigned section of a proposal to “determine if errors, omissions or deficiencies exist,” the Factor Chairperson was to “prepare a summary roll-up of his/her respective factor, setting forth his/her recommended factor and sub-factor ratings,” to be forwarded to the SSEB Chairperson. Id. The latter was then to review the summary and recommendations, assign the appropriate ratings and place them in an Initial Evaluation Report — which was to “be forwarded to the SSA/SSAC upon request.” AR at 7029. The CO, with SSA approval, was then to determine which proposals were not “among the most highly rated” and exclude these from the competitive range (and from further consideration). Id. Discussions, evaluations leading to an Interim Evaluation Report, an amended competitive range, final proposal revisions, and a Final Evaluation Report were to follow involving the remaining offerors, with the SSA making the final determination of offerors to be awarded contracts. Id.
C. Sample Task # 2
For the sample tasks, offerors were instructed to “address each of the three” and to “provide the schedule, labor mix, hours and proposed price required to perform each sample task.” AR at 109. As with any portion of the proposal, offerors were told that they “shall confine submissions to essential matters, sufficient to define the proposal and provide an adequate basis for evaluation,” and were “responsible for including sufficient details, in a concise manner, to permit a complete and accurate evaluation of each proposal.” AR at 107.
Three sample tasks, each with its own submission instructions, were issued to offerors. AR at 3713-37. Sample Task # 1 involved inspection and maintenance of aircraft engines, including engineering analysis and the redesign of parts, to be proposed on a T & M basis. AR at 3713-17, 3729-31. Sample Task # 3 concerned the development of an automated biometric identification system, which would be performed on a Cost-Plus-Fixed-Fee basis. AR at 3723-28, 3735-37. Sample Task # 2, the evaluation of which is at issue in this bid protest, presented a fictitious request to develop a Ground Perimeter Detection/Security System to guard the Gettysburg National Military Park from terrorism and related threats, to be performed on an FFP basis. AR at 3718-22, 3732-34.
This sample task described the four perimeters to be secured, which totaled fifteen to twenty miles in length. AR at 3719. Offerors were required to design system architectures specific to the varying terrains of the perimeters, including “combinations of the most efficient and effective sensor technologies available.” Id. The proposed solution was to address “reliability, availability, maintainability, survivability, security, interoperability, Command and Control (C2) design parameters”; to meet a “minimum System Operational Availability”; to employ sensors “automated to the maximum extent”; and to utilize a system that was “to be frequency spectrum jam resistant and shall not interfere with local frequency spectrum usage.” Id. Offerors were to first develop a mobile prototype and a system test plan for field testing the prototype. AR at 3720. The proposal next was to include a description of efforts to “plan, assemble and install a fixed system,” to draw up demonstration plan procedures and to conduct a user acceptance test for the proposed fixed system design. Id. Under Sample Task # 2, testing and installation would be addressed first for the eastern perimeter, then for the three remaining perimeters to create a fully integrated system. AR at 3720-21.
In addition to the design, development, testing and installation of the perimeter security system, Sample Task # 2 also included program support requirements. AR at 3721-22. Among these were development and implementation of a maintenance and supply concept, including logistics support of hardware and software; development of an “Operator and Technical/Maintenance” manual for the overall system for use as a reference and in training; and the provision of qualified technical maintenance support. Id. The bid protest revolves around one of these requirements, that an offeror:
[d]evelop the required operator and maintenance training plans and courses. The offeror is required to present the training classes at one location within Gettysburg National Military Park. For proposal purposes, assume two classes of 25 students each. One class is required at the time of Government acceptance of the eastern sector system. A second class is required at the time of Government acceptance of the fully integrated system protecting the complete perimeter.
Id.
For Sample Task #2, offerors were instructed to provide a “Narrative Section” limited to twenty pages, including a “description of the detailed efforts required to perform and accomplish (how you would execute) all requirements of the Sample Task.” AR at 3732. A “Schedule Section” was also required, which was to be “detailed,” to include one “Integrated Master Schedule,” and to “address start and completion times for all work breakdown efforts and deliverables outlined in the Sample Task.” Id.5
D. USfalcon’s Proposal
USfalcon submitted its original proposal on August 9, 2008. See AR at 7052. Its Sample Task # 2 response included a twenty-page long narrative, AR at 7144-63, and a five-page “Integrated Master Schedule.” AR at 7211-15. The narrative included four and one-half pages describing plaintiff and the team of businesses it would assemble to perform the task, AR at 7144-48; one page outlining the steps involved to meet the sample task’s requirements, AR at 7148-49; about three pages on the design of the system architecture, AR at 7149-52; more than two pages on developing and testing the mobile system, AR at 7152-54; a page on its initial approach to placement of sensor sites, AR at 7154-55; two pages on installing the fixed perimeter system, AR at 7155-57; approximately two pages describing program support, AR at 7157-59; and nearly four pages dedicated to systems, procedures, and performance metrics. AR at 7159-63.
Training was discussed in three portions of the narrative. A two-paragraph section on the topic was part of the program support subsection. AR at 7158-59. The first paragraph described the development of the training plan and course materials, AR at 7158, and the second paragraph read:
The first training course is scheduled to ensure that this class of 25 students will be fully trained and available to support the installation of the fixed system on the eastern perimeter, operate and maintain. We propose using the highly effective “train-the-trainer” approach, so we will recommend the inclusion of representatives from Adams County and [Gettysburg National Military Park] in this first training class. Training will include exercises using training aids, desktop computer simulation and hands-on with the mobile C3 unit and associated hardware/software. We anticipate that the trainers trained during Class 1 will be prepared to conduct Class 2 prior to installation of the southern perimeter. USfalcon personnel will monitor the classes to ensure all training objectives are accomplished.
AR at 7158-59. Two bullet points on training were also among the steps outlined earlier in the response. The first was to “[d]evelop and provide system training to designated Government operators,” and the second was to “[u]pdate the system training, and train additional Government personnel.” AR at 7149.
Initial briefings on the evaluation of proposals were presented to the SSAC and the SSA on November 24, 2008 and December 1, 2008, respectively. See AR at 10499, 10698. The identity of each offeror was kept from the SSA, with a letter of the alphabet substituted for each name for purposes of the review. AR at 11187; see also AR at 11302 (list of offerors by name and letter). USfalcon was offeror “W.” AR at 11302. In its initial evaluation, the SSEB gave USfalcon a “Good” rating for the Sample Task # 2 Sub-Factor, noting one significant strength, four strengths, and one weakness (that USfalcon “[a]ssumed grid power without supporting detail”) in the briefing to the SSAC. AR at 10580. One week later, in the briefing to the SSA, this rating was dropped to “Acceptable,” and the one significant strength was eliminated. AR at 10761. No concern was expressed relating to the training requirements, and one of USfalcon’s strengths was a “[d]etailed discussion in all areas of program support,” AR at 10761, the category containing these requirements. See AR at 3721.
USfalcon’s response to Sample Task # 1 was initially rated “Unacceptable” in the briefing to the SSAC, but was rated “Acceptable” in the briefing to the SSA after one weakness and two significant weaknesses were apparently reconsidered. See AR at 10579, 10732. Plaintiff received a “Good” rating for Sample Task # 3 when the SSAC was briefed, AR at 10581, and seemingly received an “Acceptable” rating when the SSA was briefed in December. See AR at 10790, 10795.
E. Evaluation After Amendment 0009
When proposals were initially evaluated, CECOM discovered that several offerors had submitted schedules with their sample task responses that were inconsistent with the accompanying narratives — including responses to Sample Task # 2. See, e.g., AR at 10514, 10520, 10529, 10532, 10571. The agency concluded that this was the result of an ambiguity in the sample tasks and instructions, and it issued Amendment 0009 to revise these portions of the Solicitation to specify that work should be expressed in calendar days. See AR at 6953-54 (Tab 6). Offerors were told that the sample task schedules “must accurately and completely depict the graphic representation of the time and task milestones and shall be measured in calendar days.” AR at 6954. Only schedule revisions were allowed, and the amendment emphasized: “No revisions to any other part of the R2-3G proposal are permitted and will not be evaluated.” Id. (underlining in original).
Sample Task # 2 was revised by adding a parenthetical specification that the twelve month period of performance was for “365 calendar days.” AR at 6954; compare AR at 3718 (period expressed only in months) with AR at 6987 (period also expressed in days). The agency also amended the submission instructions for Sample Task # 2. AR at 6976. The new portion of the instructions read: “All schedules must be prepared utilizing MS Project Software and submitted in a ‘pdf’ file. The Government shall evaluate the printed version. Time shall be in calendar days. The schedule shall accurately and completely depict the graphic representation of the integration of time and task milestones.” AR at 6976.
In response to Amendment 0009, USfalcon wrote a letter to CECOM explaining that its initial submission already complied with the amendment but that it “added some clarification to each of the sample task schedules to show that we do understand the requirements of the sample task and the revised instruction in” Amendment 0009. AR at 7052, 10897; see AR at 7216-20 (revised Integrated Master Schedule adding a reference to a 365-day project duration and adding the days of the week to each milestone date). When the SSEB subsequently reviewed USfalcon’s proposal, the Sample Task # 2 evaluators found that USfalcon “demonstrates an understanding of the problem.” AR at 10911. The evaluators, however, gave USfalcon a rating of “Unacceptable” for Sample Task # 2 because of a deficiency which affected the “feasibility of approach” consideration. AR at 10909-13. They found USfalcon’s “Training Methodology and Schedule failed to meet the requirement for an offeror presented fully integrated system operator/maintenance training class.” AR at 10910.
In the “Technical Subfactor Report” for Sample Task #2, signed by the two evaluators, the Technical Factor Chairperson and the SSEB Chairperson, this deficiency was explained:
The Government’s task explicitly requires the offeror to present two training classes. The first class is at the time of the Government acceptance of the Eastern Sector system and the second class is at the time of Government acceptance of the fully integrated system protecting the complete perimeter. [USfalcon’s] methodology proposed in the narrative of “training-the-trainer” does not meet the requirement that the offeror must present a training class at the time of Government acceptance of the fully integrated system protecting the complete perimeter. Moreover, the offeror scheduled the training class for the Eastern Sector system; but failed to schedule the required training class at the time of Government acceptance of the fully integrated system protecting the complete perimeter. Therefore, the offeror failed to meet all of the training requirements.
AR at 10912 (emphasis in original). The evaluators found that USfalcon’s response “presents an unfeasible approach (very high risk).” Id. The report concluded with a rationale for the “Unacceptable” rating, stating USfalcon’s was “[a] proposal that contains a deficiency that indicates an approach that cannot be expected to meet requirements and involves a very high risk; and none of these conditions can be corrected without a major rewrite or revision of the proposal.” AR at 10913.
The SSEB Chairperson adopted the “findings in the Sample Task Subfactor Reports in their entirety,” including those for Sample Task # 2. AR at 10898. An updated initial briefing was given to the SSA, in which USfalcon received an “Unacceptable” rating for Sample Task # 2 and, as a consequence, for the Technical Factor. AR at 11037, 11071. These ratings were employed by the CO in her Initial Competitive Range Determination. See AR at 11188-89, 11195. Because an “Unacceptable” rating was received for one of the Technical Sample Task Sub-Factors, and the Solicitation advised that those portions of the proposals could not be corrected or revised, the CO determined that USfalcon was ineligible for award and thus not among the fifteen offerors included within the competitive range. AR at 11197, 11199, 11201.
F. Contract Award and Debriefing
In a letter dated January 14, 2009, USfalcon was informed by the CO that it was determined to be “outside the competitive range,” because of the “Unacceptable” rating for Sample Task # 2 and the Technical Factor. AR at 11206-07. Nine offerors, including USfalcon, were excluded from the competitive range because of an “Unacceptable” rating for at least one sample task. AR at 11197-99 (offerors E, H, I, L, M, O, P, U and W). Three other offerors were excluded from the competitive range despite having received an “Acceptable” Technical Factor rating. AR at 11199-200 (offerors Z, C and G). From among the fifteen offerors in the competitive range, see AR at 11201 (offerors A, B, D, F, J, K, N, Q, R, S, T, V, X, Y and AA), after discussions were conducted, see AR at 11187, CECOM decided to award seven contracts on May 12, 2009. AR at 11214-15.
Five of the businesses awarded R2-3G contracts were large businesses: Raytheon Company, Lockheed Martin Integrated Systems Inc., Booz Allen Hamilton, Computer Sciences Corporation, and General Dynamics Global Force, LLC (offerors B, R, S, J and X, respectively). AR at 11172, 11214, 11252, 11302. Under the third step of the process, two small businesses (offerors A and AA) were awarded R2-3G contracts: Adams Communications & Engineering Technology, Inc. (“ACET”) and R4, Inc. (“R4”). AR at 11172, 11215, 11252, 11302.
On May 19, 2009, USfalcon received a debriefing on the procurement process. AR at 11254, 11257. The agency showed plaintiff a chart demonstrating the Sample Task # 2 evaluation that was identical to the one used in the updated SSA briefing — listing three strengths, one weakness, and the “Training Methodology and Schedule” deficiency, and assigning a rating of “Unacceptable.” Compare AR at 11037 (SSA briefing) with AR at 11246 (debriefing). USfalcon was permitted to submit follow-up questions after CECOM’s presentation concluded. AR at 11257. One of these questions asked: “Is it the government’s position that clarifying who would teach the 2nd training class would require a major rewrite or revision of the proposal?” AR at 11255. In response, CECOM pointed to the Solicitation’s warning that no corrections or revisions of sample task responses were allowed, explaining: “The sample tasks are designed to be a test; therefore, no clarifications are allowed.” Id. The agency stated that USfalcon’s Sample Task # 2 response contained “a deficiency that failed to meet a Government requirement,” and then quoted from the sample task: “The offeror is required to present the training classes at one location within the Gettysburg National Military Park.” Id. (underlining in original).
G. Protests of the R2-3G Procurement
A few days after receiving its debriefing, USfalcon submitted to the U.S. Army Materiel Command Headquarters a letter protesting its exclusion from the competitive range. AR at 11258-76. That appeal was dismissed due to the pendency of a protest concerning the R2-3G procurement that was before the Government Accountability Office (“GAO”). AR at 11277. USfalcon then filed a protest with the GAO. AR at 11278-11301. Plaintiff maintained that the decision to exclude it from the competitive range was arbitrary and unreasonable — arguing that there was no deficiency in its Sample Task # 2 response, and that CECOM had no basis for finding that a major rewrite or revision was needed to correct any perceived deficiency. AR at 11291-98. The GAO denied plaintiff’s protest on August 31, 2009. AR at 11345-49.
USfalcon then filed a complaint in our Court, alleging that CECOM acted arbitrarily and lacked a rational basis for excluding the offeror from the competitive range. See Compl. ¶¶ 26, 50-90. Plaintiff contends that its Sample Task # 2 response complied with the training requirements, id. ¶¶ 51-64, and that there was no basis for CECOM’s conclusion that any deficiency in this response could not be corrected without a major rewrite or revision. Id. ¶¶ 65-90. Plaintiff requests both declaratory and injunctive relief, id. ¶¶ 91-104, as well as bid preparation and proposal costs. Compl. at 23.
Subsequent to the filing of the complaint, in response to bid protests by other disappointed offerors, CECOM took corrective action by effectively increasing the size of the competitive range from seven to eighteen offerors. Pl.’s Resp. to Def.’s Cross-Mot. for J. on the Admin. R. (“Pl.’s Reply”) at 2-3; see also Tr. (Nov. 17, 2009) (“Tr.”) at 98-99. The eighteen offerors in this revised competitive range include the seven original contract awardees; eight offerors who were determined to be within the competitive range but were not initially awarded contracts; plus three offerors who were rated “Acceptable” but were eliminated from the competitive range based on high price or other factors. Pl.’s Reply at 3 n. 2.
USfalcon has moved for judgment on the administrative record and for a permanent injunction, solely on the ground that there was no basis for the agency’s conclusion that a major rewrite or revision was needed to correct its Sample Task # 2 response. See Pl.’s Br. at 16-23; Pl.’s Reply at 2. Plaintiff argues that the identified deficiency was a minor matter, Pl.’s Br. at 18-23; Pl.’s Reply at 11-16; that CECOM’s finding that USfalcon demonstrated an understanding of the problem is inconsistent with an “Unacceptable” rating for the sample task, Pl.’s Br. at 23-24; Pl.’s Reply at 16-19; and that if a deficiency requiring a major rewrite to correct were contained in the proposal, the evaluators would have noticed it during their initial review. Pl.’s Reply at 20-21. Defendant has cross-moved for judgment on the administrative record, arguing that the record supports the conclusion that USfalcon failed to meet the second training class requirement. Def.’s Cross-Mot. for J. upon the Admin. R. & Opp’n to Pl.’s Mot. (“Def.’s Br.”) at 13-22; Def.’s Reply in Supp. of its Cross-Mot. (“Def.’s Reply”) at 7-17. The government contends that this alone is enough to justify USfalcon’s exclusion, without regard to whether a major rewrite is needed, Def.’s Br. at 22-23; Def.’s Reply at 6-7, but adds that if the latter is reviewed, under the appropriate deference to evaluators’ technical judgments the decision should not be disturbed. Def.’s Br. at 24-25; Def.’s Reply at 2-4, 7-10. A hearing has been held on these motions, and this opinion issues after the Court has carefully considered the record and the arguments of counsel.
II. DISCUSSION
A. Standard of Review
Post-award bid protests are heard by this Court under the Tucker Act, as amended by the Administrative Dispute Resolution Act of 1996 (“ADRA”), Pub.L. No. 104-320, §§ 12(a)-(b), 110 Stat. 3870, 3874 (1996). 28 U.S.C. § 1491(b)(1) (2006). This provision requires our Court to follow Administrative Procedure Act (“APA”) standards of review. 28 U.S.C. § 1491(b)(4). Those standards, incorporated by reference, provide that a:
reviewing court shall ... (2) hold unlawful and set aside agency action, findings, and conclusions found to be — [¶] (A) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; [¶] (B) contrary to constitutional right, power, privilege, or immunity; [¶] (C) in excess of statutory jurisdiction, authority, or limitations, or short of statutory right; [¶] (D) without observance of procedure required by law; [¶] (E) unsupported by substantial evidence in a case subject to sections 556 and 557 of this title or otherwise reviewed on the record of an agency hearing provided by statute; or [¶] (F) unwarranted by the facts to the extent that the facts are subject to trial de novo by the reviewing court. In making the foregoing determinations, the court shall review the whole record or those parts of it cited by a party, and due account shall be taken of the rule of prejudicial error.
5 U.S.C. § 706 (2006).
Based on an apparent misreading of the legislative history, see Gulf Group, Inc. v. United States, 61 Fed.Cl. 338, 350 n. 25 (2004), the Supreme Court had determined, before the 1996 enactment of the ADRA, that the de novo review standard of 5 U.S.C. § 706(2)(F) does not usually apply in review of informal agency decisions — decisions, that is, such as procurement awards. See Citizens to Pres. Overton Park, Inc. v. Volpe, 401 U.S. 402, 415, 91 S.Ct. 814, 28 L.Ed.2d 136 (1971) (“Overton Park”). Instead, courts in those cases are supposed to apply the standard of 5 U.S.C. § 706(2)(A): whether the agency’s acts were “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” See Overton Park, 401 U.S. at 416, 91 S.Ct. 814 (citation omitted); see also Advanced Data Concepts, Inc. v. United States, 216 F.3d 1054, 1057 (Fed.Cir.2000) (applying 5 U.S.C. § 706(2)(A)); but see Impresa Construzioni Geom. Domenico Garufi v. United States, 238 F.3d 1324, 1332 n. 5 (Fed.Cir.2001) (“Domenico Garufi”) (also citing 5 U.S.C. § 706(2)(D) as applicable in bid protests). The “focal point for judicial review should be the administrative record already in existence, not some new record made initially in the reviewing court.” Camp v. Pitts, 411 U.S. 138, 142, 93 S.Ct. 1241, 36 L.Ed.2d 106 (1973). This applies even where, as here, the matter under review was not the product of a formal hearing. See Fla. Power & Light Co. v. Lorion, 470 U.S. 729, 744, 105 S.Ct. 1598, 84 L.Ed.2d 643 (1985).
A motion for judgment on the administrative record under Rule 52.1 of the Rules of the United States Court of Federal Claims (“RCFC”) differs from motions for summary judgment under RCFC 56, as the existence of genuine issues of material fact does not preclude judgment on the administrative record. See Bannum, Inc. v. United States, 404 F.3d 1346, 1355-57 (Fed.Cir.2005); Fort Carson Supp. Servs. v. United States, 71 Fed.Cl. 571, 585 (2006). Rather, a motion for judgment on the administrative record examines whether the administrative body, given all the disputed and undisputed facts appearing in the record, acted in a manner that complied with the legal standards governing the decision under review. See Fort Carson, 71 Fed.Cl. at 585; Greene v. United States, 65 Fed.Cl. 375, 382 (2005); Arch Chems., Inc. v. United States, 64 Fed.Cl. 380, 388 (2005). Factual findings are based on the evidence in the record, “as if [the Court] were conducting a trial on the record.” Bannum, 404 F.3d at 1357; see also Carahsoft Tech. Corp. v. United States, 86 Fed.Cl. 325, 337 (2009); Gulf Group, 61 Fed.Cl. at 350.
Under the “arbitrary and capricious” standard, the Court considers “whether the decision was based on a consideration of the relevant factors and whether there has been a clear error of judgment” by the agency. Overton Park, 401 U.S. at 416, 91 S.Ct. 814. Although “searching and careful, the ultimate standard of review is a narrow one. The court is not empowered to substitute its judgment for that of the agency.” Id. The Court will instead look to see if an agency has “examine[d] the relevant data and articulate[d] a satisfactory explanation for its action,” Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43, 103 S.Ct. 2856, 77 L.Ed.2d 443 (1983), and “may not supply a reasoned basis for the agency’s action that the agency itself has not given.” Bowman Transp., Inc. v. Ark.-Best Freight Sys., Inc., 419 U.S. 281, 285-86, 95 S.Ct. 438, 42 L.Ed.2d 447 (1974). The Court must determine whether “the procurement official’s decision lacked a rational basis.” Domenico Garufi, 238 F.3d at 1332 (adopting APA standards developed by the D.C. Circuit); see also Delta Data Sys. Corp. v. Webster, 744 F.2d 197, 204 (D.C.Cir.1984). The applicable test is “whether ‘the contracting agency provided a coherent and reasonable explanation of its exercise of discretion.’” Domenico Garufi, 238 F.3d at 1333 (quoting Latecoere Int’l, Inc. v. United States Dep’t of Navy, 19 F.3d 1342, 1356 (11th Cir.1994)).
Because of the deference courts give to discretionary procurement decisions, “the ‘disappointed bidder bears a heavy burden of showing that the award decision had no rational basis.’” Id. (quoting Saratoga Dev. Corp. v. United States, 21 F.3d 445, 446 (D.C.Cir.1994)). In particular, the evaluation of proposals for their technical excellence or quality is a process that often requires the special expertise of procurement officials, and thus reviewing courts give the greatest deference possible to these determinations. See E.W. Bliss Co. v. United States, 77 F.3d 445, 449 (Fed.Cir.1996); Arch Chems., 64 Fed.Cl. at 400; Gulf Group, 61 Fed.Cl. at 351; Overstreet Elec. Co. v. United States, 59 Fed.Cl. 99, 102, 108, 117 (2003). Challenges concerning “the minutiae of the procurement process in such matters as technical ratings ... involve discretionary determinations of procurement officials that a court will not second guess.” E.W. Bliss, 77 F.3d at 449. “[N]aked claims” of disagreement with evaluations, “no matter how vigorous, fall far short of meeting the heavy burden of demonstrating that the findings in question were the product of an irrational process and hence were arbitrary and capricious.” Banknote Corp. of Am. v. United States, 56 Fed.Cl. 377, 384 (2003), aff'd, 365 F.3d 1345 (Fed.Cir.2004).
The presence (by the government) or absence (by the protester) of any rational basis for the agency decision must be demonstrated by a preponderance of the evidence. See Gulf Group, 61 Fed.Cl. at 351; Overstreet, 59 Fed.Cl. at 117; Info. Tech. & Appl’ns Corp. v. United States, 51 Fed.Cl. 340, 346 (2001) (citing GraphicData, LLC v. United States, 37 Fed.Cl. 771, 779 (1997)), aff'd, 316 F.3d 1312 (Fed.Cir.2003). If arbitrary action is found as a matter of law, the Court will then decide the factual question of whether the action was prejudicial to the bid protester. See Bannum, 404 F.3d at 1351-54.
B. Was USfalcon Potentially Prejudiced by the Challenged Actions?
The issue of prejudice must first, however, be considered in the context of standing, before the Court may turn to the merits. See Info. Tech., 316 F.3d at 1319. In bid protests, prejudice “is a necessary element of standing,” and in all cases “standing is a threshold jurisdictional issue.” Myers Investigative & Sec. Servs., Inc. v. United States, 275 F.3d 1366, 1369-70 (Fed.Cir.2002); see also Labatt Food Serv., Inc. v. United States, 577 F.3d 1375, 1378-79 (Fed.Cir.2009). Under the ADRA, actions may be brought “by an interested party objecting ... to a proposed award or the award of a contract or any alleged violation of statute or regulation in connection with a procurement or a proposed procurement.” 28 U.S.C. § 1491(b)(1). The Federal Circuit has construed the term “interested party” to have the same definition as under the Competition in Contracting Act, encompassing “actual or prospective bidders or offerors whose direct economic interest would be affected by the award of the contract or by failure to award the contract.” Am. Fed’n of Gov’t Employees, AFL-CIO v. United States, 258 F.3d 1294, 1302 (Fed.Cir.2001); see 31 U.S.C. § 3551(2).
An offeror establishes an interest sufficient to support a post-award bid protest by showing that it would have had a substantial chance of receiving the contract award absent the alleged procurement errors. See Labatt, 577 F.3d at 1380; Info. Tech., 316 F.3d at 1319. This prejudice test for purposes of standing is the same as the test employed in the merits determination. See Labatt, 577 F.3d at 1378-80 (relying on, inter alia, the decisions on the merits in Bannum, 404 F.3d at 1354, 1358, and Statistica, Inc. v. Christopher, 102 F.3d 1577, 1581 (Fed.Cir.1996)); Info. Tech., 316 F.3d at 1319 (relying on, inter alia, the merits decision in Alfa Laval Separation, Inc. v. United States, 175 F.3d 1365, 1367 (Fed.Cir.1999)). But even though the question of prejudice involves the same test and a factual analysis in both contexts, the answers might differ due to the procedural posture.
Since the prejudice determination for purposes of standing necessarily occurs before the merits of a protest are reached, the Court must accept the well-pled allegations of agency error to be true. See, e.g., Info. Tech., 316 F.3d at 1319 (finding that standing was established assuming the protester succeeds on its argued grounds); Beta Analytics Int’l, Inc. v. United States, 67 Fed.Cl. 384, 396 (2005) (same). Normally, if the protester’s case rests on just one allegedly irrational action, or just one purported violation of a law or regulation, the finding of prejudice in the standing context will be replicated on the merits, once the asserted error is confirmed. But a different outcome is possible if more than one ground is raised, as multiple errors might cumulatively establish prejudice, but not a smaller combination of them. For instance, a proposal may have been found unacceptable for several independent reasons, each corresponding with a separate argument that a law or regulation was violated, and the protester will not be prejudiced unless it succeeds on all its arguments. Similarly, several allegedly arbitrary decisions with regard to evaluation factors or subfactors taken together might support the allegation that a contract award decision lacked a rational basis and was thus prejudicial, see Gulf Group, 61 Fed.Cl. at 353; Beta Analytics, 67 Fed.Cl. at 396, but each taken alone might not have had a material impact on the award decision.
Even though the government does not challenge USfalcon’s standing, the Court finds it advisable that this jurisdictional question be explicitly determined. Here, plaintiff contends that the evaluation of its Sample Task # 2 response was arbitrary, and should have yielded the determination that the response was acceptable. Compl. ¶¶ 66-89; Pl.’s Br. at 1-2, 15-26; Pl.’s Reply at 9-21. Assuming this is true for purposes of standing, USfalcon would have been prejudiced by this error. Had it received an acceptable rating for that Sample Task, it would have been eligible for the competitive range, as plaintiff’s proposal received an “Acceptable” rating for the other two Technical Sample Task Sub-Factors, and the SBPP Factor. See AR at 118, 11195. The Technical Factor was the most important factor, and of the six eligible offerors whose score was merely “Acceptable” for this factor, three were included in the competitive range, based on their “significantly lower” prices. See AR at 11199-11201. One of the latter group, offeror AA, was, like USfalcon, a Service Disabled Veteran Owned Small Business. See AR at 11172. Were USfalcon to have received an “Acceptable” rating for the Technical Factor, it would have had the same rating as AA for the two most important factors (Technical and Performance Risk), a better rating for the SBPP Factor (“Acceptable” as opposed to “Susceptible to Being Made Acceptable”), and a price that was lower by $[XXXXXXXXX], or [XX] percent. Prices ranging from about $[XX] billion to $[XX] billion were the reason why the other three offerors with an “Acceptable” Technical Factor rating were excluded from the competitive range. AR at 11200. Given its low price ($9,806,649,558) in comparison with that of AA, it is clear that plaintiff would have been in the competitive range, had it received an “Acceptable” rating in the Technical Factor.
Since the Solicitation reserved up to four contract awards for small businesses with acceptable proposals, see AR at 3-4, 118, and only two small businesses were included in the competitive range (and thus received awards), see id. at 11172, 11201, 11252, inclusion in the competitive range would almost certainly have resulted in a contract award to USfalcon, as a small business concern. A substantial chance of receiving a contract award has thus been established. Moreover, as a result of other bid protests, this post-award protest is in an unusual posture — as a corrective action has terminated the seven awarded contracts, and all eighteen offerors whose proposals were deemed technically acceptable are now being considered for up to eighteen contract awards, under Amendment 0012. See Tr. at 99, 104; Ex. 1 to Pl.’s Mot. to Supp. (Docket No. 37) at 9, 12; Ex. 1 to Def.’s Resp. to Pl.’s Mot. to Supp. (Docket No. 40) (explaining that Amendment 0012 applies to a “revised competitive range”). The revised or effective competitive range includes four small businesses (offerors A, C, Z and AA), see AR at 11172, 11199-201, competing for four contracts reserved for small businesses, and for fourteen contracts for businesses of any size. Ex. 1 to Pl.’s Mot. to Supp. (Docket No. 37) at 9, 12. Compared to the six offerors in the revised competitive range with “Acceptable” ratings for the Technical Factor (C, G, T, Y, Z and AA), USfalcon has the same Performance Risk rating as each of them, a better SBPP rating than five of them, and a lower price than four of them. See AR at 11173, 11189-91, 11194-97. Were USfalcon to have an “Acceptable” rating for the Technical Factor, it would obviously have a substantial chance to receive one of up to eighteen contracts to be awarded.
C. The Relevance of Source Selection Plans to Court Review
As the ratings used in the technical evaluation of each offeror’s proposal, and their definitions, were provided in the SSEP rather than the Solicitation, a consideration of the relevance of a source selection plan to our Court’s review is in order. As the government notes, see Def.’s Br. at 33-34, our Court has acknowledged the long-standing rule of the GAO that source selection “plans generally do not give outside parties any rights.” ManTech Telecomms. & Info. Sys. Corp. v. United States, 49 Fed.Cl. 57, 67 (2001). The rule traces back to a decision in which the GAO analogized such plans to directives and other internal agency regulations, and found that failure to comply with such an “internal instruction” was no “basis for questioning the validity of [an] award since an internal agency guideline does not have the force and effect of law.” Robert E. Derecktor of R.I., Inc., 84-1 CPD ¶ 40, 1984 WL 43785, at *4 (Comp.Gen. Feb. 2, 1984).
Thus, unlike the treatment of a statute, the prejudicial violation of a source selection plan is not, per se, a ground for a protest. Since the agency is free to change or waive its internal policies, under the GAO approach “agencies are permitted to deviate from their stated evaluation plans so long as the agency’s evaluation is reasonable.” Textron Marine Sys., 91-2 CPD ¶ 162, 1991 WL 165241, *3 n. 3, 1991 U.S. Comp. Gen. LEXIS 967, at *9 n. 3 (Comp.Gen. Aug. 19, 1991). On the other hand, adherence to a source selection plan may result in an unreasonable procurement decision and thus a protest that is sustained — for instance, when an evaluation methodology blurs all technical distinctions and effectively makes a lesser important price factor decisive. See Trijicon, Inc., 71 Comp. Gen. 41, 91-2 CPD ¶ 375, 1991 WL 237785, *4-5, 1991 U.S. Comp. Gen. LEXIS 1245, at *10-12 (1991); see also The MIL Corp., 2005 CPD ¶ 29, 2004 WL 3190217, *3-4, 2004 U.S. Comp. Gen. LEXIS 283, at *9-11 (Comp.Gen. Dec. 30, 2004) (finding evaluation unreasonable when an unfavorable rating was given to an offeror for a past performance subfactor concerning which the offeror had no relevant information). To the GAO, what is relevant is what an agency actually did in its evaluation, not what it may have intended to do — unless the intention was expressed in the Solicitation itself. See Frank E. Basil, Inc., 69 Comp. Gen. 472, 90-1 CPD ¶ 492, 1990 WL 269573, *2, 1990 U.S. Comp. Gen. LEXIS 518, at *5-6 (1990) (holding that “agencies do not have the discretion to announce in a solicitation that one evaluation plan will be used and then follow another in the actual evaluation,” unless offerors are informed and given the “opportunity to structure their proposals with the new evaluation scheme in mind”).
This Court has developed a refinement of the GAO approach, which recognizes that the act of choosing an evaluation methodology is itself a discretionary decision in the evaluation process, and which takes stock of the natural and logical consequences of this act. See, e.g., Fort Carson, 71 Fed.Cl. at 592-93; Beta Analytics, 67 Fed.Cl. at 399, 407. The FAR does not require that agencies use any particular rating method, see 48 C.F.R. § 15.305(a), or disclose rating methods in the solicitation, except for the “approach for evaluating past performance.” 48 C.F.R. §§ 15.304(d), 15.305(a)(2)(ii). No law or regulation appears to prevent an agency’s changing from one undisclosed rating method to another, and in that respect a source selection plan is similar to general guidelines or internal policies which are adopted by an agency. But source selection plans differ from these other internal policies in a significant respect — as, presumably, when an agency adopts a source selection plan for a particular procurement, its officials are employing their expertise to select a rating methodology they believe will best meet the agency’s needs that are the subject of the specific solicitation.
In a negotiated procurement under the FAR, the SSA, acting on behalf of the agency head, is ultimately responsible for the evaluation and the best value decision. 48 C.F.R. §§ 15.303(b), 15.308. He must assemble an “evaluation team” which is “tailored for the particular acquisition” and possesses the “expertise to ensure a comprehensive evaluation of offers.” 48 C.F.R. § 15.303(b)(1). He must ensure that the agency is consistent in the information requested of offerors, and that “proposals are evaluated based solely on the factors and subfactors contained in the solicitation.” 48 C.F.R. § 15.303(b)(3)-(4). And although “the SSA may use reports and analyses prepared by others, the source selection decision shall represent the SSA’s independent judgment.” 48 C.F.R. § 15.308. On this last point, nothing prevents the SSA from basing his judgment upon the evaluations and ratings of others, and indeed the provision cited expressly allows the SSA’s decision to be based on “business judgments and tradeoffs made or relied on by the SSA.” Id. (emphasis added).
Since the SSA is responsible for the source selection decision, and will want to efficiently make use of the talents and expertise of the evaluation team, the contents of a source selection plan guiding his subordinates would naturally be his concern. For Department of Defense procurements, including those of the military departments, the Defense Federal Acquisition Regulation Supplement (“DFARS”) requires that for “high-dollar value” and certain other acquisitions, the SSA “shall approve a source selection plan before the solicitation is issued.” 48 C.F.R. § 215.303(b)(2). A mandatory procedure, see 48 C.F.R. § 202.101 (definition for “Procedures, Guidance, and Information” (“PGI”)), requires that this source selection plan include “[a] description of the evaluation process, including specific procedures and techniques to be used in evaluating proposals.” PGI § 215.303(b)(2)(C)(3). Thus, at least where procedures similar to the Defense Department’s are followed, the source selection plan is drawn up before any proposals are seen by evaluators, and is expressly approved by the SSA. The evaluation procedures and techniques embodied in these plans may vary, of course, given the particular needs being addressed in a procurement, and could employ detailed and objective rules, on the one hand, or more subjective and deferential standards, on the other. But the important point for our purposes is that once the SSA approves a particular source selection plan, he should expect, unless informed to the contrary, that it has been followed. Thus, mandatory instructions that evaluators “must” and “shall” rate proposals a certain way are fixed among the foundational assumptions of the SSA.
Because the SSA relies on the evaluators working for him to follow source selection plan mandates, departures from the plan could undermine the rationality of the ultimate source selection decision. He might believe, for instance, that a certain rating was produced by one formula, when in fact another formula was used by evaluators, creating a disconnect between his decision and the underlying facts. Now, it could be the case that the formula used was better adapted to the needs of the agency than the original one selected — but it is also possible that the change in formula was an unintended error or, perhaps worse, reflected a (conscious or unconscious) reaction to the actual features and qualities presented by a particular proposal. Thus, the failure of evaluators to follow the specific procedures and techniques mandated by a source selection plan, to the prejudice of a protester, could be evidence of an erroneous or biased evaluation. See Fort Carson, 71 Fed.Cl. at 592-93. Any such suspicions could be dispelled by records showing that the departure from the plan was consciously chosen prior to the viewing of proposals, or articulating a rational reason for the departure. Although an agency may have wide discretion in selecting the procedures and techniques to be used in an evaluation, provided these have some possible relation to the relevant factors and sub-factors, when its evaluators choose to depart from these after being exposed to offerors’ proposals, the integrity of the process would demand that some reason be documented. And to ensure that there is no disconnect between the SSA’s decision and its underlying assumptions, the choice to depart must have been made known to him so that it may receive at least implied approval.
There are, of course, many good reasons why a source selection plan may be changed midstream, and documentation of these is but a minor burden. The record in this case provides one example of this, as the SSEP was changed to eliminate a consideration from the ratings definitions for the SBPP factor because this consideration was not contained in the Factor elements described in the Solicitation. AR at 10447-48. This change was memorialized in a contemporaneous memorandum from the CO, id., and resulted in a revised SSEP, AR at 10449-98, that was expressly approved by the SSA. See AR at 10450. In other circumstances, the evaluators may conclude that the plan’s definitions are too rigid or otherwise not well-suited for the factors they are judging. But if no reason is given for departing from a source selection plan (or the departure is not highlighted to allow the SSA to articulate a reason), a departure could be due to error and the resulting ratings may be different than the evaluators intended. Or, in extreme (and, the Court hopes, rare) cases, the departure could have been intended to benefit a particular offeror. Particularly when the SSA bases his ultimate decision not on the proposals themselves but on briefings in which the ratings are presented as the inputs for his calculus, an unjustified departure from a source selection plan may rob this ultimate decision of its rational basis.
Source selection plans could be relevant to court review for other reasons. The regularity presumed by an SSA may also be presumed by the court. Thus, if the source selection plan required a certain event to take place, such as a consensus meeting, the presumption of regularity will support that event’s occurrence, absent other record evidence. Cf. Beta Analytics, 67 Fed.Cl. at 397 (finding no presumption because there was no plan requirement) (citing Tecom, Inc. v. United States, 66 Fed.Cl. 736, 769-70 (2005)). Under this presumption, “predicate acts that were required of public officials could be presumed upon proof of their natural results.” Tecom, 66 Fed.Cl. at 769. The presumption can apply to substantive matters, such as the inference that a particular finding was made to support a rating, see Beta Analytics, 67 Fed.Cl. at 400, or to procedural matters — for instance when a plan articulates the circumstances under which fewer than all proposals would get a second review. Cf. id. at 407 (finding no plan provision justifying a reevaluation for just one offeror). In the latter case, a decision that at first glance might appear to treat offerors unequally would instead be seen as the fair application of a preexisting rule (under which not every offeror qualified for the particular treatment).
A source selection plan may also be the source of a restriction on evaluators, foreclosing certain actions. Thus, an action that might initially appear to be fair procedurally, such as a re-scoring of all proposals, could be the ground for a successful protest when such reevaluations are specifically forbidden by the source selection plan. But in the absence of such a restriction — even if the reevaluation resulted in a change in ordering or ranking of offerors — the action would not be objectionable, so long as the offerors were given equal treatment. See Fort Carson, 71 Fed.Cl. at 599.
And, of course, when a source selection plan is the source of the ratings definitions that are followed in the course of evaluating offerors, it figures prominently in court review of a procurement decision. This review “entails identifying the judgments made by the relevant officials and verifying that the relevant information was considered, the relevant factors were employed, and a satisfactory explanation was articulated.” Id. at 592 (citing Overton Park, 401 U.S. at 416, 91 S.Ct. 814 and Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856). A procurement decision’s explanation necessarily includes the ratings assigned to proposals. When a rating, given its definition, cannot be squared with an actual proposal, the decision to assign that rating is arbitrary. But since courts are not to second guess discretionary determinations that are technical and subjective in nature, see E.W. Bliss, 77 F.3d at 449, the less objective the rating criteria happen to be, the harder it is for a protester to establish that a rating decision was arbitrary. See Beta Analytics, 67 Fed.Cl. at 399.
With these various ways in which a source selection plan may be relevant to court review, it is little wonder that the source selection plan is identified in our rules as among the “relevant core documents” of the administrative record which may be produced early to expedite a case. RCFC App. C, ¶ 22(b). In this particular case, neither party contends that the government departed from the SSEP. See, e.g., Pl.’s Reply at 19; Tr. at 83, 86. The CO based her Initial Competitive Range Determination, with SSA approval, see AR at 11202, on the SSEB evaluation results — providing a synopsis of these results, AR at 11188-97, and explicitly basing her determination to exclude offerors from the competitive range “upon the evaluation results.” AR at 11197; see, e.g., AR at 11199 (CO determining that USfalcon was ineligible for contract award, based on the ratings and determinations of the SSEB); see also Tr. at 79-80. For plaintiff’s proposal, the SSEB Chairperson adopted the “findings in the Sample Task Subfactor Reports in their entirety.” AR at 10898. The briefing provided to the SSA repeated verbatim the Sample Task # 2 deficiency found by the evaluators, see AR at 10909-10, 11037, and the Technical Factor rating definitions from the SSEP. See AR at 7026, 10982. The evaluators applied the SSEP definition of “Unacceptable” to USfalcon’s Sample Task # 2 response. AR at 10913. Because of this rating, and the resulting Technical Factor rating of “Unacceptable,” plaintiff was excluded from the competitive range. AR at 11199. The question for the Court is whether the government was arbitrary in determining that USfalcon deserved a subfactor rating of less than “Acceptable” for its Sample Task # 2 response.
D. Did the Sample Task # 2 Rating Have a Rational Basis?
As the CO relied upon the SSEB evaluation results to exclude USfalcon from the competitive range, the agency’s decision ultimately traces back to the subfactor evaluation report for plaintiff’s Sample Task #2 response. See AR at 10898, 10906-13. The CO explained that the “Unacceptable” rating for Sample Task # 2 was given to USfalcon because “it was determined that [USfalcon’s] proposal contained an approach that failed to provide training as explicitly required in the Government task,” and plaintiff “failed to schedule the training required for the fully integrated system protecting the complete perimeter.” AR at 11199. She had earlier noted that receipt of “an ‘Unacceptable’ rating in at least one of the Technical Sample Task Sub-Factors” would result in exclusion from the competitive range, due to the Solicitation provision stating that “to receive consideration for award, a rating of no less than ‘Acceptable’ must be achieved” for the Technical Factor and its subfactors, and because Sample Task responses “cannot be corrected in accordance with the RFP.” AR at 11197.
The SSEB evaluators determined that plaintiff’s Sample Task # 2 response contained a deficiency, finding the “Training Methodology and Schedule failed to meet the requirement for an offeror presented fully integrated system operator/maintenance training class.” AR at 10909-10. This deficiency was addressed under the “Feasibility of Approach” consideration. The evaluators elaborated that USfalcon’s “methodology proposed in the narrative of ‘training-the-trainer’ does not meet the requirement that the offeror must present a training class at the time of Government acceptance of the fully integrated system protecting the complete perimeter.” AR at 10912 (emphasis in original). They added that USfalcon “failed to schedule the required training class” and thus “failed to meet all of the training requirements.” Id. For this consideration, the evaluators concluded that plaintiff “present[ed] an unfeasible approach (very high risk).” Id. As a consequence the rating of “Unacceptable” was given for the subfactor, under the rationale that USfalcon’s was “[a] proposal that contains a deficiency that indicates an approach that cannot be expected to meet requirements and involves a very high risk; and none of these conditions can be corrected without a major rewrite or revision of the proposal.” AR at 10913.
The parties urge upon the Court diametrical approaches to the review of this evaluation. According to the government, the evaluators merely viewed USfalcon’s proposal through the prism of the evaluation factors and subfactors, and determined that a Solicitation requirement was not met. See Tr. at 46-47; Def.’s Br. at 13-16, 37. Defendant brushes aside the SSEP definition of “Unacceptable” under the Technical Factor as only “internal guidance on how to interpret the failure to comply with Solicitation requirements.” Def.’s Reply at 7; see also Def.’s Br. at 13, 37. Whether the identified deficiency could “be corrected without a major rewrite or revision of the proposal,” AR at 10913, the government argues, is the sort of minutiae that courts are to avoid. See Def.’s Br. at 39-40. Instead, the government maintains that the Court should but confirm that the evaluators identified the Solicitation requirement that was not met, and based their decision on an enumerated evaluation factor or subfactor. See, e.g., Def.’s Br. at 13-22; Tr. at 46-47.
Plaintiff, on the other hand, focuses on the last clause of the finding that its Sample Task # 2 response was “Unacceptable,” arguing that “the basis for the agency’s conclusion” was that a “perceived deficiency could not be corrected without a major rewrite or revision of the proposal,” and thus “it is appropriate for this Court to review whether that basis is accurate and reasonable.” Pl.’s Br. at 19. USfalcon contends that the “perceived” deficiency concerning the second training session reflected at worst a “lack of clarity,” Pl.’s Reply at 6, and that it could not take a major rewrite or revision to clarify who would present the class and when this would occur. Pl.’s Br. at 19-20. Plaintiff relies on one early precedent from our court and some aged GAO opinions which rejected agency determinations that a major rewrite or revision was needed to correct a proposal, and contends that the same approach should be used in this proceeding — scrutinizing whether the matters to be corrected were relatively minor in magnitude. See id. at 20-22 (discussing Rockwell Int’l Corp. v. United States, 4 Cl.Ct. 1, 4-6 (1983); Loral EOS/STS, Inc., 88-1 CPD ¶ 467, 1988 WL 227133, 1988 U.S. Comp. Gen. LEXIS 495 (Comp. Gen. May 18, 1988); and Coastal Gov’t Servs., Inc., 93-1 CPD ¶ 167, 1993 WL 57697, 1993 U.S. Comp. Gen. LEXIS 201 (Comp. Gen. Feb. 23, 1993)).
1. Sample Task Requirements Are Not Per Se Material Terms and Conditions
After careful consideration, the Court concludes that under the circumstances presented, neither proffered approach is quite right. The government’s approach is premised upon the notion that every requirement to be addressed in sample task responses is a material term of a solicitation, merely because the sample tasks are evaluation subfactors. Defendant suggests that this notion is implicit in the FAR mandate that evaluations be based “solely on the factors and subfactors specified in the solicitation,” see Tr. at 65; Def.’s Br. at 19 (citing, inter alia, 48 C.F.R. §§ 15.303(b)(4), 15.305(a)), or is “simply understood.” Tr. at 70. No regulation or precedent has been identified to support this notion, which is inconsistent with the Federal Circuit’s borrowing of the FAR explanation of “immaterial” from the context of sealed bidding (in which sample tasks have no place) to define material terms for purposes of negotiated acquisitions. See Centech Group, Inc. v. United States, 554 F.3d 1029, 1038 (Fed.Cir.2009) (citing 48 C.F.R. § 14.405). If a variation from a term or condition is not material “when the effect on price, quantity, quality or delivery is negligible when contrasted with the total cost or scope of the supplies or services being acquired,” 48 C.F.R. § 14.405, then USfalcon’s failure to comply with a required element in a sample task would not seem to involve a material term or condition. These tests are not the “service[] being acquired,” and in this case the prices being compared were not those connected with the sample tasks. See AR at 116-17. Moreover, the services that were ultimately being acquired were to be obtained through a process in which proposals “that depart from stated requirements” could be the basis of awards. See AR at 133. Thus, the ability to avoid departures from task order requirements is not a necessary quality of an awardee’s task order proposals.
The Court concludes that the requirements of a sample task are not per se material terms and conditions of a solicitation.25 Unlike the situation presented here, when the sample tasks are based on the precise services being acquired or factor into the price comparison, they could be material terms. And a particular solicitation may effectively make them so, by defining an acceptable proposal as one that meets all sample task requirements or by adopting an evaluation methodology which mandates that all such requirements be met. See, e.g., Veterans Tech., LLC, 2008 CPD ¶ 31, 2008 U.S. Comp. Gen. LEXIS 21, at *2 (Comp.Gen. Jan. 7, 2008) (where sample task responses needed to show scope and requirements of task were understood, technical approach was sound, and “technical depth necessary to complete the task order” was possessed); Serco, Inc., 2006 CPD ¶ 120, 2006 WL 2482978, *2, 2006 U.S. Comp. Gen. LEXIS 131, at *6 (Comp.Gen. Aug. 9, 2006) (where sample task factor required that responses “shall demonstrate that offerors fully understand the specific and unique requirements”). The R2-3G Solicitation, however, did not define “Acceptable.” See AR at 118. And the evaluation considerations for the sample task subfactors were not expressed in terms of whether or not all requirements were understood and would be successfully met, but were rather phrased much less definitely — as “the extent to which” a proposal showed such an understanding, or “the level of confidence provided the Government” that the offeror could timely meet the requirements of the test. AR at 119. This methodology, as described in the Solicitation, did not foreclose the possibility of an “Acceptable” rating for a proposal which failed to meet all sample task requirements.
Accordingly, review of this procurement decision is not so simple as merely verifying that the government identified a sample task requirement that it reasonably determined was not met. Since the applicable laws, regulations, and Solicitation provisions do not definitively mandate that such a failure would result in a proposal that is “less than Acceptable,” at least one deeper level of analysis is called for in the course of court review. The nexus between the failure to meet a sample task requirement and exclusion from the competitive range must be found either in the SSEP or the agency’s explanation of the evaluation. But while this review is “searching and careful,” Overton Park, 401 U.S. at 416, 91 S.Ct. 814, the Court rejects the approach to review urged upon it by plaintiff, which would “substitute [the Court’s] judgment for that of the agency.” Id.
2. The Agency Did Not Adopt a Doubt-Favors-the-Offeror Approach
The precedents cited by USfalcon employ a manner of review that was tailored for a different approach to the competitive range than currently required by the FAR. The original FAR provided that the competitive range “shall include all proposals that have a reasonable chance of being selected for award,” and elaborated: “When there is doubt as to whether a proposal is in the competitive range, the proposal should be included.” 48 C.F.R. § 15.609(a) (1997); 48 Fed.Reg. 42,202 (Sept. 19, 1983). The Federal Circuit rightly recognized “a tension between the necessarily broad discretion of an agency, acting through the contracting officer, to determine what bids are realistically competitive,” and this initial FAR mandate that “when there is doubt, the questionable bid should be included.” Birch & Davis Int'l, Inc. v. Christopher, 4 F.3d 970, 973 (Fed.Cir.1993). By limiting exclusion to circumstances where the offeror’s likely rejection was beyond all doubt, this standard shifted the focus of review from the reasonableness of the CO’s decision to “whether the excluded competitors have a reasonable chance of being selected after opportunity for improvement.” Id. at 974 (emphasis in original). The question for a court was thus in a sense inverted — it was no longer whether a reasonable person could agree with the CO, but rather whether a reasonable person could have disagreed, and thought the offeror had a chance of success. The key consideration in such a review was the extent to which the proposal was deficient, as it could have been excluded only when “so technically inferior or out of line as to price, as to render discussions meaningless.” Id.
The GAO had, before the advent of the FAR, adopted the view “that the basic purpose of the negotiated procurement is to determine whether deficient proposals are reasonably subject to being made acceptable through discussions.” TM Sys., Inc., 56 Comp. Gen. 300, 1977 WL 10361, *5, 77-1 CPD ¶ 61, 1977 U.S. Comp. Gen. LEXIS 191, at *14 (1977) (citation omitted). This approach was carried over into protests of competitive range determinations under the doubt-favors-the-offeror standard of the original FAR, as proposals with a reasonable chance of selection included those “which are reasonably susceptible of being made acceptable through discussions.” Loral EOS/STS, Inc., 88-1 CPD ¶ 467, 1988 WL 227133, *2-3, 1988 U.S. Comp. Gen. LEXIS 495, at *6 (Comp.Gen. May 18, 1988). This susceptibility was measured in terms of “magnitude,” id., 1988 WL 227133, *1, *6, 1988 U.S. Comp. Gen. LEXIS 495, at *2, *17, with the GAO considering whether the identified deficiencies were of the type or number that were “minor” and “easily corrected,” Coastal Gov’t Servs., Inc., 93-1 CPD ¶ 167, 1993 WL 57697, *2-3, *4, 1993 U.S. Comp. Gen. LEXIS 201, at *6, *10 (Comp.Gen. Feb. 23, 1993), “rather simply remedied,” Loral, 1988 WL 227133, *5, 1988 U.S. Comp. Gen. LEXIS 495, at *12, or “clarified easily,” Dynalantic Corp., 97-1 CPD ¶ 101, 1997 WL 103418, *6-7, 1997 U.S. Comp. Gen. LEXIS 124, at *16 (Comp.Gen. Feb. 25, 1997); or instead required a “major” or “substantial” proposal rewrite, Loral, 1988 WL 227133, *5, *6, 1988 U.S. Comp. Gen. LEXIS 495, at *12, *17; Dynalantic Corp., 1997 WL 103418, *7, *7-8, *8-9, 1997 U.S. Comp. Gen. LEXIS 124, at *17, *19, *22-23, “major revisions,” Allenhurst Indus., Inc., 1994 U.S. Comp. Gen. LEXIS 586, at *4, *7 (Comp.Gen. July 8, 1994); The Hines-Ike Co., 96-1 CPD ¶ 158, 1996 WL 115360, *1-2, 1996 U.S. Comp. Gen. LEXIS 170, at *4 (Comp.Gen. Mar. 15, 1996), or the equivalent of a “new proposal.” Loral, 1988 WL 227133, *6, 1988 U.S. Comp. Gen. LEXIS 495, at *17. These are the GAO precedents upon which USfalcon relies. See Pl.’s Br. at 17-18, 21-22; Pl.’s Reply at 13-15.
In a similar vein was the one precedent from our court which considered whether proposal corrections entailed “minor revisions” or a “major rewrite.” See Rockwell Int’l Corp. v. United States, 4 Cl.Ct. 1, 5 (1983). That case concerned the exclusion of an offeror from the competitive range under a pre-FAR regulation which apparently required that the competitive range “ ‘include those proposals which through written or oral discussions with the offeror, have a reasonable chan[c]e of selection when all factors are considered, including cost,’ ” id., and pursuant to an evaluation plan which defined unacceptable proposals as those which “ ‘could not be made acceptable through minor written revisions and/or oral discussions.’ ” Id. In light of the “regulatory provision restricting the circumstances where an offeror can be preliminarily excluded from the competitive range,” the court determined the extent of the “curative measures to satisfy [the agency’s] concerns,” and found that only “relatively minor revisions” were needed. Id.26
The FAR standard for inclusion in the competitive range was changed in 1997 to be much narrower, and the range is now “comprised of [sic] all of the most highly rated proposals.” 48 C.F.R. § 15.306(c). As a consequence, the FAR itself is no longer cause to follow the approach to review taken in the old precedents, which were applying the now-obsolete “reasonable chance” or “doubt” standard. It is possible, though, that an agency could choose to self-impose the old standard, through the evaluation procedures and methodology incorporated in its source selection plan. Plaintiff seems to contend that CECOM in effect did just this in the procurement at issue, see Pl.’s Reply at 11-16, by adopting and following a definition for “Unacceptable” which was based on “major error(s), omission(s) or deficiency(ies) ... none of [which] ... can be corrected without a major rewrite or revision of the proposal.” AR at 7026; see also AR at 10913.
The Court finds that CECOM did not adopt in the SSEP an evaluation methodology calling for the type of review employed in the outdated line of GAO opinions. The reason that those opinions considered whether the Comptroller General found — or any reasonable person could find — that the identified deficiencies were not of the magnitude requiring a major rewrite or revision, was not because an agency determined a major rewrite was needed. Rather, this manner of review was undertaken because of the then-existing FAR mandate that all proposals with a reasonable chance of selection be included in the competitive range, and the instruction that this even included proposals for which there was doubt as to this chance. In the R2-3G procurement, CECOM did not profess to follow such a procedure. The Solicitation provided that “a rating of no less than Acceptable must be achieved for the Technical Factor, all Technical Sample Task Sub-Factors, and the SBPP Factor.” AR at 118. The Solicitation contemplated that proposals could be corrected or revised only regarding the last of these, as offerors were cautioned that they “will not be given an opportunity to correct or revise a Sample Task response.” AR at 119. This did not, however, rule out the possibility that “Acceptable” would be defined to include all proposals that would have a reasonable chance of selection if minor revisions were allowed — regardless of whether revisions were employed in the context of the sample task test.27
Such a definition was not, however, adopted by CECOM for this procurement. The matter is somewhat obscured by the “major rewrite or revision” language used in the definition of “Unacceptable” for purposes of the Technical Factor. See AR at 7026. It is, of course, not necessarily the case that CECOM viewed this clause as some legal term of art whose content is supplied by the outdated GAO opinions. But in any event, given an evaluation procedure in which no corrections or revisions were allowed for the Sample Task Sub-Factors, there would be no opportunity for proposals to be improved in this area, and it would be senseless to include in the competitive range proposals that did not qualify for award. The key issue, then, is not the definition of “Unacceptable,” but rather that of “Acceptable,” since “a rating of no less than Acceptable must be achieved.” AR at 118. The SSEP did not define “Acceptable” with reference to the magnitude of revisions or rewrites needed, but instead as: “A proposal that at least meets all of the Government’s requirements, contains at least minimal detail, demonstrates at least a minimal understanding of the problems, and is at least minimally feasible (moderate to high risk).” AR at 7026 (emphases added).
The definition of “Acceptable” makes no reference to the presence of major errors or omissions, and in light of the definition of “Unacceptable” it may be presumed that such may be tolerated, at least insofar as they may be “corrected without a major rewrite or revision of the proposal.” See id. The same consideration cannot be given a deficiency, however, which the SSEP defined as “[a] material failure of a proposal to meet a Government requirement or a combination of significant weaknesses in a proposal that increases the risk of unsuccessful contract performance to an unacceptable level.” AR at 7028.28 Since an acceptable proposal must be one which meets all the requirements and poses a feasible level of risk, a proposal with a deficiency necessarily fails this test. The definition of “Acceptable,” then — which must be met for a proposal to qualify for award and thus, under this evaluation procedure, for inclusion in the competitive range — does not allow the presence of any deficiencies, even if the deficiencies could be corrected with a minor rewrite or revision. Nothing approximating the “reasonable chance” or “doubt” standard was mandated by the SSEP.
The Technical Factor rating definitions employed by CECOM were, to be sure, unusual in that they seem to construct a sort of limbo — a proposal may fail to merit an “Acceptable” rating due to the presence of deficiencies, but be better than “Unacceptable” when the deficiencies may be corrected with a minor rewrite. Perhaps a “Susceptible to Being Made Acceptable” category, such as the one used for the SBPP evaluation, see AR at 10478, was initially envisioned.29 Or perhaps the agency viewed a “major rewrite or revision,” which was undefined in the SSEP, to mean any correction of a failure to meet a requirement.30 In any event, a proposal that failed to meet the definition of “Acceptable” is “less than Acceptable,” whether it resided in a particular rating category or in limbo. And while the use of the “major rewrite or revision” clause may have proven clumsy or confusing in this context, the Court concludes that an agency cannot be subject to the manner of review developed for the “reasonable chance” or “doubt” standard unless the agency quite clearly mandated in its source selection plan that the competitive range must include all proposals with a reasonable chance of selection or which can be made acceptable with minor revisions. This CECOM did not do.
Accordingly, the Court will approach the determination of whether the procurement decision under review was arbitrary and capricious in the usual way. The search for a “rational basis,” Domenico Garufi, 238 F.3d at 1332, involves the inquiry into whether the agency “examine[d] the relevant data and articulate[d] a satisfactory explanation for its action.” Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856. The rational basis that is required to uphold a procurement decision is, however, more than the “minimum rationality” employed in due process and equal protection cases. Id. at 43 n. 9, 103 S.Ct. 2856. Rational basis in the latter cases is satisfied by “any reasonably conceivable set of facts,” because of “a strong presumption of validity” which places the onus upon challengers “to negative every conceivable basis which might support” legislation — including reasons never considered by the relevant policymakers and “rational speculation unsupported by evidence or empirical data.” FCC v. Beach Commc’ns, Inc., 508 U.S. 307, 313-15, 113 S.Ct. 2096, 124 L.Ed.2d 211 (1993) (citations and internal quotation omitted); see also Behm v. United States, 68 Fed.Cl. 395, 401 (2005). In contrast, under APA-style review, the reasoning of the agency at the time of decision — including its connection with record evidence — is what matters, and a court “may not supply a reasoned basis for the agency’s action that the agency itself has not given.” Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856.
3. The Findings Are Neither Contradicted by Objective Information nor Inconsistent with Other Subjective Analysis
As discussed above, the SSEB evaluators determined that USfalcon’s Sample Task # 2 response failed to meet the requirement that a second training class be presented by the offeror in a particular time frame. See AR at 10909-10, 10912. The sample task expressly stated that “[t]he offeror is required to present the training classes at one location within the [park],” AR at 6990, with the first “required at the time of Government acceptance of the eastern sector system” and the second “required at the time of Government acceptance of the fully integrated system protecting the complete perimeter.” AR at 6991. The evaluators apparently interpreted USfalcon’s response as wanting in two respects. First, they seemed to think that under the “train-the-trainer” approach, USfalcon was shifting the responsibility for presenting the second training class to government personnel trained in the first class, rather than fulfilling this duty itself. See AR at 10912. Second, due to the absence of this second training class from the Integrated Master Schedule appended to the offeror’s response, USfalcon failed to provide that the class would be presented at the required time. See id. The evaluators accordingly concluded that USfalcon’s proposal contained a “deficiency that indicates an approach that cannot be expected to meet requirements and involves a very high risk,” and also found that “none of these conditions can be corrected without a major rewrite or revision of the proposal.” AR at 10913.
Plaintiff focuses its attention on the last part of this rationale, regarding the effort needed to correct the identified deficiency. See Pl.’s Br. at 18-23; Pl.’s Reply at 11-16, 19-20. But this is a subjective matter, and “the minutiae of the procurement process in such matters as technical ratings” are among the “discretionary determinations of procurement officials that a court will not second guess.” E.W. Bliss, 77 F.3d at 449. USfalcon faults the evaluators for not explaining why correcting the identified deficiency required a major rewrite or revision, rather than a minor one. But for our purposes, it is enough that the evaluators applied the proper definition of “Unacceptable” — and they did. See AR at 7026, 10913. With no definition of “major rewrite or revision” to constrain the evaluators and provide an objective standard for reviewing their decision, the Court cannot say that this determination is irrational. The “major rewrite or revision” concept is a creature of the agency’s own making, and in the absence of objective signposts the Court must defer to the agency’s judgment construing this concept. See Beta Analytics, 67 Fed.Cl. at 400-01. Had the evaluators made the opposite finding — that the deficiency could be corrected without a major rewrite or revision — in the course of concluding that the proposal was unacceptable, the evaluation would have been arbitrary for failure to follow the SSEP. But a finding consistent with the SSEP was made, and a reason relating to the proposal and the Solicitation was given. Perhaps, as the government seemed to concede, see Tr. at 72, certain things, such as a minor typographical error, could not possibly require a major rewrite or revision to correct — but the identified deficiency was far from a typographical error.
It does not matter whether the Court, applying its own definition of what constitutes a major rewrite or revision of a proposal, believes such an effort was needed to correct the identified deficiency. “If the court finds a reasonable basis for the agency’s action, the court should stay its hand even though it might, as an original proposition, have reached a different conclusion as to the proper administration and application of the procurement regulations.” Weeks Marine, Inc. v. United States, 575 F.3d 1352, 1371 (Fed.Cir.2009) (internal quotation marks and citations omitted). But while this matter involves a discretionary decision based on subjective values, the Court must take one additional step to confirm that the agency’s stated basis is reasonable. As the Federal Circuit recently reiterated, an agency’s procurement decision may be found arbitrary and capricious “when the agency ‘... offered an explanation for its decision that runs counter to the evidence before the agency, or [the decision] is so implausible that it could not be ascribed to a difference in view or the product of agency expertise.’” Ala. Aircraft Indus., Inc.-Birmingham v. United States, 586 F.3d 1372, 1375 (Fed.Cir.2009) (quoting Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43, 103 S.Ct. 2856).
This additional step does not seek to answer whether USfalcon’s Sample Task #2 response adequately addressed the second training class requirement, which would break the taboo against second-guessing technical judgments, see Fort Carson, 71 Fed.Cl. at 596, and in any event is not even sought by plaintiff. See Pl.’s Br. at 16; Pl.’s Reply at 2. The Court’s inquiry is not so much into the what that the agency concluded, but rather the why that was supplied by the agency. This involves verifying that objective elements contained in the agency’s analysis, such as the description of the offeror’s narrative, correspond to the evidence in the record, see Beta Analytics, 67 Fed.Cl. at 402; Fort Carson, 71 Fed.Cl. at 600-01; and checking to see if subjective judgments are reached elsewhere in the analysis that contradict the evaluators’ conclusions, see Beta Analytics, 67 Fed.Cl. at 399, making the decision too “implausible.”
Plaintiff makes an argument which touches on this second point, contending that the agency’s conclusion under the “Understanding of the Problems” consideration is inconsistent with the finding that the Sample Task # 2 response was unacceptable. See Pl.’s Br. at 23-24; Pl.’s Reply at 16-19. But even if this consideration was called the “primary purpose” of the sample task testing regime in various other procurements, in this particular one it was but one of three considerations. The “Understanding of the Problems” consideration, as stated in the Solicitation and repeated verbatim in the SSEP, entailed an evaluation of the proposal “to determine the extent to which it demonstrates a clear understanding of all features involved in solving the problems and meeting the requirements, and the extent to which uncertainties are identified and resolutions proposed.” AR at 119, 7003. While this consideration focuses on the offeror’s ability to discern the features needed to meet the hypothetical sample task requirements, leaving to the second consideration the feasibility of the proposed approach, it is, of course, possible that in addressing the former, the evaluators would mention the latter. Thus, CECOM’s evaluators could have stated in this portion of their report that USfalcon’s approach to the second training class met the Sample Task # 2 requirement for this class, fatally contradicting the conclusion that it did not.
But the evaluators did not mention that particular requirement at all, nor did they state in general terms that all requirements would be met. See AR at 10910-11. Rather, they discussed “the problem” which was understood in the singular, as “that of a perimeter security system,” and provided detail concerning the technical aspects associated with the system. Id. And even if implicit in the conclusion that USfalcon “demonstrate[d] an understanding of the problem” is a finding that USfalcon understood that two training classes were required, this is not the same thing as finding that the requirement would satisfactorily be met. Nothing in the discussion of the first consideration contradicts the conclusion that the response was unacceptable under the second consideration. In this regard, the Court notes that the evaluators were careful not to include the alternative ground for finding a proposal unacceptable — that the identified error, omission, or deficiency “indicates a lack of understanding of the problems,” AR at 7026 — in their rationale for giving USfalcon a rating of “Unacceptable.” See AR at 10913.
Turning to USfalcon’s Sample Task #2 response, the Court does not find any information of an objective nature which contradicts the evaluators’ assessment of the offeror’s approach to the second training class. As the evaluators noted, see AR at 10909, plaintiff proposed using a “ ‘train-the-trainer’ approach,” and accordingly “recommend[ed] the inclusion of representatives from Adams County and [Gettysburg National Military Park] in [the] first training class.” AR at 7158. USfalcon stated that it “anticipated that the trainers trained during Class 1 will be prepared to conduct Class 2 prior to installation of the southern perimeter,” and explained that its “personnel will monitor the classes to ensure all training objectives are accomplished.” AR at 7159; see also AR at 10909. Plaintiff’s narrative is thus consistent with the agency’s conclusion that government employees trained in the first class, not the offeror’s personnel, would be presenting the second class. See AR at 10912.
Concerning the failure to schedule the second training class, plaintiff concedes that it omitted this class from the Integrated Master Schedule. Pl.’s Reply at 2 n. 1. Moreover, nothing in the narrative states that the second training class was scheduled for a particular time. USfalcon stated that the first class “is scheduled to ensure that this class of 25 students will be fully trained and available to support the installation of the fixed system on the eastern perimeter, operate, and maintain,” AR at 7158, and apparently included it in the Integrated Master Schedule. See AR at 7214, 7219.31 Plaintiff’s narrative did not refer to the second class as being “scheduled,” but instead stated that the trainers would “be prepared to conduct Class 2 prior to installation” of the second of four perimeter detection systems, and that its “personnel will monitor the classes.” AR at 7159.
The Court notes that had USfalcon stated in its narrative that a second training class was scheduled for or would be taught at any particular time, then the absence of this class from the Integrated Master Schedule would reflect a typographical error — the correction of which could not reasonably be viewed as requiring a “major rewrite or revision.” But the narrative was ambiguous at best. In addition to the section of the response devoted to training, already discussed, USfalcon mentioned the second training class in a list of the “steps” composing the “concept” plaintiff “applied to the Gettysburg requirements.” AR at 7148-49. The provision of “system training” is listed immediately before items relating to the initial user acceptance test. AR at 7149. The entry to “[u]pdate the system training, and train additional Government personnel” falls immediately after the step in which the perimeter detection system is fully installed, but before the steps in which the user acceptance test is planned and government acceptance is obtained for the final three perimeters. Id. The list of steps is not strictly in chronological order, as some of the functions overlap — for instance, the Integrated Master Schedule shows that the user acceptance test report for the southern perimeter system would be delivered on July 24, 2009, while the western perimeter system was still being installed (through July 31, 2009) and installation of the northern perimeter system had yet to begin (scheduled for August 3 through September 14, 2009). See AR at 7211, 7214, 7216, 7219. Thus, whether USfalcon intended the second training class to be held after the fourth perimeter was installed (September 14, 2009), before the user acceptance test was conducted for the southern perimeter (roughly June 12, 2009),32 or any time in between cannot be discerned from this list of steps.
The issue of failing to schedule the second training class should be distinguished from the issue of whether USfalcon failed to understand when this class was required to be taught. While nothing in plaintiff’s proposal indicates that it intended that the class be at the time of acceptance of the fully integrated system, the contemporaneous documents from the agency do not ascribe any significance to this timing, either. Though the sample task stated that the “second class is required at the time of Government acceptance of the fully integrated system protecting the complete perimeter,” AR at 3721-22, 6991, it also “required” the first class “at the time of Government acceptance of the eastern sector system.” AR at 3721, 6991. Yet the agency evaluators did not find it problematic that USfalcon scheduled the first class to occur earlier than required, see AR at 7214, 7219, noting approvingly that plaintiff “scheduled the training class for the Eastern Sector system.” AR at 10912.
In describing the deficiency regarding the second training class, the agency did not mention a problem with timing (or any link to the act of government acceptance) in either the evaluation report, AR at 10910, the briefing to the SSA, AR at 11037, the Competitive Range Determination, AR at 11199, the debriefing provided to USfalcon, AR at 11246, or the post-debriefing answers to USfalcon’s questions. AR at 11255. This issue first appears in the record five months after the evaluation, in a declaration from one of the evaluators that was filed in the GAO proceeding. See AR at 11321a.4-6. Courts must review such non-contemporaneous documents with care, to make sure that the statements are a reflection of the views held by the evaluators when the evaluation occurred rather than “ ‘post hoc’ rationalizations.” See Overton Park, 401 U.S. at 419-20, 91 S.Ct. 814. The evaluator’s statements describing how the proposal failed to satisfy the requirement that the offeror present both training classes clearly place the views in the appropriate time frame — by claiming that USfalcon’s “concept was understood by the evaluators,” AR at 11321a.4; noting what “[t]he real issue for the evaluators was,” AR at 11321a.5; and relating that the “proposal left the evaluators with [a] distinct and logical conclusion.” AR at 11321a.6. The statements regarding the significance of the timing of the second class, however, fail to provide the Court with an adequate basis to conclude that they reflect views held at the time of evaluation, and as a consequence these statements will not be considered in this proceeding. See Fort Carson, 71 Fed.Cl. at 591-92. Thus, the Court concludes that the notion that USfalcon proposed holding the second training class at a time that was too early to be effective had no bearing on the evaluation of USfalcon’s Sample Task # 2 response.
In sum, the Court finds there was a rational basis for CECOM’s decision to exclude USfalcon from the R2-3G procurement’s competitive range. The evaluators followed the rating definitions from the SSEP in finding the Sample Task # 2 response to be unacceptable, and identified the deficiency upon which this finding rests. Whether the Court believes the deficiency was a matter that could be easily corrected is not a proper consideration in bid protests — unless the agency adopted an evaluation methodology which expanded the competitive range to include all offerors with a reasonable chance of receiving an award, or which employed an objective standard in a definition of “major rewrite or revision.” In the R2-3G procurement, CECOM did neither. The Court’s review of the administrative record does not reveal any objective information which contradicts the agency’s findings, nor any subjective analysis which is inconsistent with the agency’s conclusion. The Court finds that CECOM did not act arbitrarily in determining that USfalcon’s approach to the second training class materially failed to meet a Sample Task # 2 requirement, and in determining that a major rewrite or revision would be needed to correct this deficiency. Accordingly, plaintiff’s motion for judgment on the administrative record is DENIED and defendant’s cross-motion for judgment on the administrative record is GRANTED. Having failed to prevail on the merits, USfalcon’s motion for permanent injunctive relief must necessarily also fail, and is thus DENIED. See Gulf Group, 61 Fed.Cl. at 364; Computer Scis. Corp. v. United States, 51 Fed.Cl. 297, 323 (2002).
III. CONCLUSION
For the foregoing reasons, the Court DENIES plaintiff’s motions for judgment on the administrative record and for injunctive relief, and correspondingly GRANTS defendant’s cross-motion for judgment on the administrative record. The Clerk of the Court is directed to enter judgment accordingly.
IT IS SO ORDERED.
2. Proposals were initially due August 6, 2008. AR at 1. This deadline was changed to August 12, 2008. See AR at 5348-49. The Performance Work Statement for the R2-3G program indicated that awardees would normally have seven work days to submit plans to address potential task orders. AR at 131.
3. A revised SSEP, containing changes that are not relevant to this case, was issued on October 6, 2008, and approved by the SSA on December 1, 2008. See AR at 10447-50.
4. The definitions of "Outstanding" and "Good" have no bearing on this case.
5. A "Price Estimate Section," not relevant to these proceedings, was also required. AR at 3732-34.
6. The "train[ing of] Government personnel for the User Acceptance Test" was also described in the subsection on installing the fixed perimeter system. AR at 7156.
7. To minimize the need for redactions, the letter designations will be used in this opinion for offerors other than plaintiff and the initial awardees.
8. Most offerors initially got credit for the significant strength of "us[ing] documented ISO 9001 certified systems and processes," see, e.g., AR at 10517, which was dropped from the evaluation after the first briefing. For five offerors (H, N, P, S and V) in addition to USfalcon, this resulted in a rating downgrade when the SSA was briefed in December. Compare AR at 10535, 10553, 10559, 10568 and 10577 with AR at 10746, 10752, 10754, 10757 and 10760 (ratings dropping from "Good" to "Acceptable" and from "Outstanding" to "Good," apparently due to the elimination of the ISO 9001 significant strength).
9. There is some confusion on this point because a summary table later in the briefing identifies the rating as "Good." See AR at 10896.
10. The Administrative Record has two sets of pages numbered 6953 through 6964, as Amendment 0008, behind Tab 5, runs through page 6964, while Amendment 0009, behind Tab 6, begins with page 6953. The pages behind Tab 6 are the ones referenced in this opinion.
11. The agency rated USfalcon's Sample Task # 1 and # 3 responses as "Acceptable." AR at 11008, 11066, 11071, 11195. USfalcon received the best possible rating of "Low Risk" for the Performance Risk Factor, see AR at 7027, and was given a rating of "Acceptable" for the SBPP Factor. AR at 11129, 11166, 11173, 11195. USfalcon's proposed price of $9,806,649,558 was the second lowest among the ten small business offerors. See AR at 11172, 11195, 11203.
12. Curiously, although this letter was dated one day after the CO signed the Competitive Range Determination, see AR at 11202, the determination was dated one day after the letter, coinciding with the date of SSA approval. See AR at 11187, 11202.
13. The GAO had been, prior to July 7, 2004, known as the "General Accounting Office." See 31 U.S.C. § 702 note (setting out Pub.L. No. 108-271, § 8, 118 Stat. 811, 814 (2004)).
14. A second ground for setting aside a contract award, not relevant here, is when the protester can show that "the procurement procedure involved a violation of regulation or procedure." Domenico Garufi, 238 F.3d at 1332. This showing must be of a "clear and prejudicial violation of applicable statutes or regulations." Id. at 1333 (quoting Kentron Haw., Ltd. v. Warner, 480 F.2d 1166, 1169 (D.C.Cir.1973)).
15. The use of the same proceeding to consider jurisdictional facts, such as those relating to standing, as well as facts relevant to the merits, is not peculiar to bid protests. Such a procedure may be followed whenever "jurisdictional facts are inextricably intertwined with the merits." Forest Glen Props., LLC v. United States, 79 Fed.Cl. 669, 678 (2007) (internal quotation and citations omitted).
16. This might explain the result in Bannum, where a merits determination of insufficient prejudice was upheld. See Bannum, 404 F.3d at 1358. The protester in that case did not prevail on one of two arguments that the FAR was violated. Id. at 1353. But since standing was not even mentioned in the opinion, perhaps no conclusion as to jurisdiction may be inferred, following the Supreme Court's admonition that such "drive-by jurisdictional rulings" should not be taken as precedential. See Steel Co. v. Citizens for a Better Env't, 523 U.S. 83, 91, 118 S.Ct. 1003, 140 L.Ed.2d 210 (1998).
17. In cases in which standing has been satisfied, when the SSA eschews a rigid, numerical evaluation approach, any arbitrary decision that may have influenced a best value decision is likely prejudicial, see Fort Carson, 71 Fed.Cl. at 593 — unless it concerns a factor or subfactor that is "pass/fail" in nature and the protester has inferior ratings to the winning offeror in the other comparative categories.
18. One argument of defendant may be construed as implicating standing — the contention that USfalcon could not be awarded a contract because its Sample Task #2 response failed to comply with Solicitation requirements. See Def.'s Br. at 19-20; Def.'s Reply at 7-9. But no precedent has been identified to support the notion that a required element of a Sample Task — a mere exercise and not the good or service actually being procured — can be a material term and condition of a Solicitation. Nor is this a situation where an offeror can obtain a pricing advantage by cutting corners on Sample Task requirements, as these prices were only evaluated for realism, and did not factor into the award determination. See AR at 119. And although offerors were cautioned that no Sample Task response revisions would be allowed, id., in fact some revisions were allowed, under Amendment 0009. AR at 6953-97 (Tab 6). Moreover, the Solicitation provides that awardees, when competing for task orders, might be allowed to submit proposals "that depart from stated requirements." AR at 133. Thus, failure to conform to Sample Task requirements — which was denied by plaintiff, see Compl. ¶¶ 25-26, 34, 52, 57-59, 61, 63-64, in allegations which are taken as true for standing purposes and would settle the matter in any event — would not make an offeror ineligible for one of multiple IDIQ contracts under these circumstances.
19. Amendment 0012 and related documents were added to the record before the Court, but not to the Administrative Record, for purposes of determining prejudice and other issues pertinent to the question of relief. See Order (Nov. 18, 2009); Tr. at 6-7; see also AshBritt, Inc. v. United States, 87 Fed.Cl. 344, 366-67 (2009).
20. Since "jurisdiction is determined at the time the suit is filed," F. Alderete Gen. Contractors, Inc. v. United States, 715 F.2d 1476, 1480 (Fed.Cir.1983), the Court has analyzed standing under the post-award standard. If this matter were to be considered a pre-award protest, due to the subsequent corrective action which withdrew the initial awards, see Tr. at 99, prejudice would still be present under the more lenient pre-award standard — as the arbitrary evaluation of a proposal would inflict "a non-trivial competitive injury which can be redressed by judicial relief." Weeks Marine, Inc. v. United States, 575 F.3d 1352, 1363 (Fed.Cir.2009).
21. A partial exception is the rating "Acceptable," which is mentioned but not defined in the Solicitation. See AR at 4, 118 (stating "a rating of no less than Acceptable must be achieved").
22. The SSEP for the R2-3G was in final form as of May 9, 2008, see AR at 6998, a week in advance of the issuance of the Solicitation. See AR at 1. Although the SSA's approval was dated two months later, AR at 7049, this deviation from the DFARS (assuming a $16.4 billion procurement counts as "high-dollar value") was immaterial, as the approval was secured prior to the submission of proposals.
23. For its first thirteen and one-half years, the FAR contained a similar provision governing "formal" source selection processes, used in high-dollar value acquisitions. See 48 C.F.R. § 15.612 (1997); 48 Fed.Reg. 42,102, 42,203 (Sept. 19, 1983). This provision required that the SSA approve a source selection plan before a solicitation issued, and that the plan include "[a] description of the evaluation process, methodology, and techniques to be used." 48 C.F.R. § 15.612(b)(3), (c)(5) (1997).
24. This is not to suggest that it would be improper for evaluators, recognizing that superior qualities exhibited in an offer were not adequately measured by the adopted evaluation criteria, to ever change the criteria in response. But suspicions of favoritism in such circumstances could be dispelled by an open articulation of the reason for changing the criteria, explaining why the new criteria will better achieve the agency's goals.
25. Support for this conclusion may be gleaned from JWK International Corp. v. United States, 49 Fed.Cl. 371 (2001), in which an offeror's proposal was considered for award despite failing to meet a sample task requirement. See id. at 383-84, 396-97.
26. The court also employed "close scrutiny," because the competitive range consisted of just one offeror. Rockwell Int'l, 4 Cl.Ct. at 5; see also Birch & Davis Int'l, 4 F.3d at 974; Bean Stuyvesant, L.L.C. v. United States, 48 Fed.Cl. 303, 339-41 (2000). Two of the GAO opinions cited by plaintiff also employed this scrutiny. See Dynalantic Corp., 1997 WL 103418, *2-3, *9, 1997 U.S. Comp. Gen. LEXIS 124, at *6-7, *23-24; Coastal Gov't Servs., 1993 WL 57697, *2-3, 1993 U.S. Comp. Gen. LEXIS 201, at *6. This close scrutiny does not apply to the case at hand, which involves an initial competitive range of fifteen and a revised range of eighteen.
27. This definition by itself would not have been enough to justify applying the approach taken in the old line of GAO opinions, absent an SSEP mandate that the competitive range include all "Acceptable" proposals.
28. The SSEP borrowed the FAR definition of a deficiency. See 48 C.F.R. § 15.001.
29. Even though no revisions to Sample Task responses were contemplated, see AR at 119, 7003, a "Susceptible to Being Made Acceptable" rating could have served a purpose — to accommodate the possibility that the agency would change its mind and allow revisions in this area, as the Solicitation could have been amended. See EP Prods., Inc. v. United States, 63 Fed.Cl. 220, 224-25 (2005); 48 C.F.R. § 15.206.
30. One of the Sample Task evaluators suggested this, in a declaration submitted in the GAO proceeding. See AR at 11321a.8. This portion of the declaration does not make clear, however, that it represents an explanation of the evaluators' views at the time the evaluation was performed, as opposed to "'post hoc' rationalizations." See Overton Park, 401 U.S. at 419-20, 91 S.Ct. 814. Accordingly, the Court gives it no weight in this proceeding. See Fort Carson, 71 Fed.Cl. at 591-92.
31. The training included in the schedule was to run from March 16 through March 30, 2009. AR at 7214, 7219. The Court notes that this was scheduled to occur two weeks before the user acceptance test for the first installed perimeter was to begin, see id., consistent with the narrative but inconsistent with the sample task requirement, if "at the time of Government acceptance of the eastern sector system" is construed to mean the precise time of this training. See AR at 3721.
32. The schedule shows the user acceptance test for the eastern perimeter as starting the day after installation is completed, and running from April 15 to April 29, 2009, with the report delivered May 20, 2009. See AR at 7211, 7214, 7216, 7219. The schedule includes only the delivery dates for the reports from the user acceptance tests of the other three perimeters, but omits the user acceptance test periods. See AR at 7211-20. The installation of the southern perimeter system was to be complete on June 12, 2009. AR at 7214, 7219.