OPINION
Before RABINOWITZ, C.J., and BURKE, MATTHEWS, COMPTON and MOORE, JJ.

MOORE, Justice.

In February 1984, John L. McKay, Jr. failed the Alaska bar examination by less than one point. The Committee of Law Examiners of the Alaska Bar Association (Committee) denied him certification for admission to practice law. McKay appealed to the Alaska Bar Association’s Board of Governors (Board), alleging numerous improprieties in the administration and grading of the bar exam. The Board denied McKay a hearing, McKay appealed, and this court remanded for a hearing on McKay’s allegations regarding the scoring of certain essay answers. Application of McKay, 706 P.2d 684 (Alaska 1985).
On remand, the Board appointed a master who conducted a hearing. The master concluded that the Committee had not abused its discretion in grading the essay answers, and recommended that McKay’s appeal be dismissed. The Board adopted the master’s proposed decision and dismissed the appeal. McKay again appealed to this court pursuant to Alaska Bar Rule 8 § 2. Because we agree that the Committee did not abuse its discretion, we affirm.
Consideration of McKay’s appeal requires an understanding of the process by which the Committee grades bar examination essay answers. In grading the essay answers, the Committee relies primarily on a set of “benchmark answers.” The benchmarks are actual applicant answers chosen through a “calibration” process in which a team of graders reads at least twenty answers to select five that represent the range of quality among all of the answers received for each question. The benchmarks are scored on a five-point scale, with one point signifying the lowest quality answer and five points signifying the highest.
The calibration team refers to a model answer drafted by the authors of each question when it selects the benchmark answers, but the model answer does not control. The team may amend the model answer during the calibration process to add issues suggested by applicants’ answers or to alter the relative weights assigned issues. In selecting and ranking the benchmark answers, the team is concerned not only with identification of the issues contained in the model answer, but also with the quality of the applicants’ analysis of those issues.
After the benchmark answers have been selected, two graders independently read and score each applicant’s answer. The graders compare the answers not to the model answer but to the benchmark answers, and assign a score on the five-point scale. The score is based on analytical quality as well as issue identification. If the two graders’ scores differ by more than one point, the graders reread and regrade the answer. Each applicant’s final score is the average of the scores assigned by the two graders.
We held in Application of Obermeyer, 717 P.2d 382, 387 (Alaska 1986), that the benchmark grading system is reasonable and within the Committee’s discretion. There is no evidence that the Committee deviated from this system in grading McKay’s exam.
In his current appeal, McKay challenges only the grading of his answer to essay question #3. That question describes a hypothetical statute requiring Medicaid beneficiaries to donate body organs upon their deaths, and asks applicants to “[d]iscuss the constitutional challenges that may be made to the new statute by a group of Medicaid recipients and their families.” The two graders who read McKay’s answer to question #3 assigned scores of 2.0 and 3.0; the Committee averaged these scores to award a final score of 2.5.
At the hearing, Stewart Jay, a professor of constitutional law at the University of Washington School of Law, testified on McKay’s behalf. Professor Jay testified that, in his opinion, considerations of standing and procedural due process were relevant to question #3. In addition, he testified that there is room for disagreement as to the likelihood of success of the various challenges to the hypothetical statute, as well as to the level of scrutiny that courts would apply when considering an equal protection challenge. Because the model answer did not address these points, McKay argues that it was defective and that his own answer was graded arbitrarily and to his detriment.1
Assuming without deciding that question #3 did raise issues of standing and procedural due process and that substantive issues might have been resolved in more than one way, we are nevertheless unpersuaded that the model answer was defective. A model answer is defective if it is likely to result in unfair grading. Whether a defect exists depends largely on how the model answer is used in the grading process. Under the Committee’s benchmark grading system, in which the function of the model answer is to enable the graders to choose a set of benchmark answers, a model answer would be defective if it resulted in an inappropriate selection of benchmark answers.
McKay does not contend that the model answer misstates the law in any fundamental sense or that it fails to discuss the central issues raised by the question. Nor does he suggest that the defects he identified in the model answer are reflected in the selection of benchmark answers. Indeed, we observe that all of the benchmark answers for question #3 address standing, procedural due process, or both.
We have reviewed the model answer, the benchmark answers, and McKay’s answer to question #3. Our review convinces us that the Committee did not abuse its discretion in employing this model answer, in selecting these benchmark answers, or in grading McKay’s answer as it did.
AFFIRMED.
1. McKay’s answer briefly addressed issues of standing and procedural due process.