Before becoming an architect, all candidates must take and pass the Architect Registration Examination® (ARE®)—a multi-part exam developed with the help of hundreds of volunteer architects, psychometricians, and other professionals.
Interested in learning more about how the exam is put together? NCARB is committed to being transparent about how the ARE is developed and administered, so candidates, licensing board members, and the public can trust the validity of ARE results. In part one of this blog series, we explored the individuals involved in developing the exam. In part two, we dove deeper into the process that exam questions go through before they become scored items. In part three, we explained how NCARB assembles exam forms. In this installment, we’ll dig into the topic of exam scoring.
How is the ARE scored?
Each operational question is worth one point if answered correctly. Incorrect answers and questions left unanswered receive no points, and no exam questions receive partial credit. To pass a division, a candidate must score at or above the cut score established for the exam form that the candidate has taken.
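The scoring rule above can be sketched in a few lines of code. This is an illustrative sketch only, not NCARB's actual scoring system; the function name and data shapes are assumptions for the example.

```python
# Illustrative sketch of the scoring rule: one point per correct operational
# question, no partial credit, pass if the raw score meets or exceeds the
# cut score for the exam form taken.

def score_division(responses, answer_key, cut_score):
    """Return (raw_score, passed) for one exam form.

    responses:  dict of question id -> candidate's answer (missing = unanswered)
    answer_key: dict of question id -> correct answer
    cut_score:  minimum raw score required to pass this form
    """
    raw_score = sum(
        1 for qid, correct in answer_key.items()
        if responses.get(qid) == correct  # wrong or unanswered -> 0 points
    )
    return raw_score, raw_score >= cut_score
```

For example, a candidate who answers two of three questions correctly on a form with a cut score of 2 would pass with a raw score of 2.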
How is the passing standard (or cut score) established for each ARE division?
NCARB’s psychometric consultants facilitated the use of the Modified Angoff method to establish the cut scores for each division of ARE 5.0, with the help of volunteer architects from around the country. The Modified Angoff method is used across the testing industry to establish passing standards.
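In broad strokes, an Angoff-style standard setting asks each panelist to estimate, for every item, the probability that a minimally competent candidate would answer it correctly; averaging those ratings per item and summing across items yields a recommended raw cut score. The sketch below shows only that core arithmetic, not NCARB's actual procedure, which involves additional rounds of discussion, data review, and adjustment.

```python
# Illustrative sketch of the core Angoff-style arithmetic (not NCARB's
# actual implementation). Each inner list holds one panelist's per-item
# probability estimates for a minimally competent candidate.

def angoff_cut_score(ratings):
    """ratings: list of per-panelist lists, one probability per item."""
    n_panelists = len(ratings)
    n_items = len(ratings[0])
    # Average the panelists' ratings for each item...
    item_means = [
        sum(panelist[i] for panelist in ratings) / n_panelists
        for i in range(n_items)
    ]
    # ...then sum across items to get a recommended raw cut score.
    return round(sum(item_means))
```

With two panelists rating two items as `[[0.8, 0.6], [0.6, 0.4]]`, the item means are 0.7 and 0.5, giving a recommended cut score of about 1 point.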
NCARB has published detailed blogs on our website that cover exam scoring and the establishment of cut scores. Along with scoring information, NCARB regularly releases key ARE data in NCARB By the Numbers (NBTN). This resource includes data such as ARE pass rates, administration and performance information, and the effectiveness of NCARB’s divisional practice exams.
What is a scaled score, and how do I read it?
On your score report, you’ll receive a scaled score that provides big-picture insight into your overall exam performance. Scaled scores fall between 100 and 800; the higher the scaled score, the better your performance on the exam.
NCARB does not provide a table to candidates that translates their scaled score to their raw score (total questions answered correctly on an exam) because the scaled score is a mathematical measurement based on varying item statistics that make up each specific exam form.
The intent of the scaled score is to provide candidates with a common metric for interpreting scores across different administrations and divisions. The closer a candidate gets to 550, the closer they are to passing that division. As an example, if a candidate fails a division two times with a scaled score of 415 followed by a 525, they know their performance improved on their second exam attempt, and they are much closer to passing. Candidates receiving a scaled score of 500 or more are likely within a few questions of passing. Candidates scoring in the mid-400s are likely six to eight questions away from passing.
Candidates interested in understanding more about their exam performance on a failed attempt should reference page two of their score report. The table provides candidates with the percentage of scored items they answered correctly for each section of the exam compared to the average passing candidate. To learn more about ARE score reports, check out this blog.
What percentage of examinees fail ARE divisions by one, two, three, or four questions?
To answer this question, we looked at examination performance data from the previous two years of exam delivery.
Findings reveal that about 5% of candidates fail by just one point, another 5% by two points, and another 5% by three points. It wasn’t until we looked at candidates four points below the cut score that the percentage dropped to 4%.
Similarly, 6% of candidates pass by scoring exactly at the cut score for the division. Looking at how many candidates passed just above the cut score, we found a pattern similar to those just below it: 6% of candidates passed one point above the cut score, another 6% two points above, and another 6% three points above.
Combined over the two years, the data show that almost 50% of all test administrations score on or within four points of the cut score for each division.
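The tally described above is a simple count: given the raw scores from a set of administrations and the form's cut score, find the fraction that landed on or within four points of the cut. A minimal sketch, with hypothetical names and data:

```python
# Illustrative sketch of the near-cut-score tally. Scores and names are
# hypothetical; this is not NCARB's analysis code.

def near_cut_fraction(raw_scores, cut_score, window=4):
    """Fraction of administrations scoring within `window` points of the cut."""
    near = sum(1 for s in raw_scores if abs(s - cut_score) <= window)
    return near / len(raw_scores)
```

For instance, with raw scores `[60, 58, 70, 50]` and a cut score of 60, two of the four administrations (60 and 58) fall within four points, giving a fraction of 0.5.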
How does NCARB protect the exam from racial and gender bias?
To minimize bias that could occur on the ARE, we assemble a diverse group of architect volunteers for our committees that develop and review all exam content. The diversity of NCARB’s volunteer item writers, who come from various regions across the U.S. and work on many different project types at firms of all sizes, helps protect the exam from unintentional bias and assumptions. All volunteers are trained in how to avoid bias in item construction, and we use specific checklists to verify all items as they go through initial construction.
NCARB’s item writing standards prohibit the use of gendered pronouns, and our test development facilitators work closely with item writers to keep jargon and region-specific references out of exam content. For example, an item would reference a “cold, snowy climate” or a “hot, humid climate,” rather than a specific location.
After items are authored and released for pretesting, NCARB performs an initial psychometric analysis to identify and address potential racial and gender bias before the items become scored. Every item going through the pretesting process and every item that remains operational on the exam is evaluated for bias regularly.
In addition, to address language challenges that some candidates who are non-native English readers/speakers may face, NCARB implemented English as a Second Language (ESL) accommodations, which are accepted by all 55 NCARB jurisdictions.
Why don’t candidates receive more details about their passing score?
The purpose of the ARE is to assess whether candidates possess the amount of knowledge necessary to protect the health, safety, and welfare of the public.
Unlike entrance exams that attempt to determine levels of good, better, or best among a group of candidates using a norm-referencing approach, the ARE is a criterion-referenced exam that defines required architectural knowledge and skill. It’s similar to earning a driver’s license—what matters is that you have the skills to operate a vehicle safely, not how well you drive compared to others. Reporting quantitative or qualitative results beyond a passing score for the current ARE would be inappropriate, fraught with error, and likely cause gross misrepresentations by people using such information.
The ARE assesses candidates for minimal competence, and because any passing score indicates minimal competence, NCARB and its constituent jurisdictional licensing boards do not distinguish between one passing score and another. For instance, there is no difference between a “high” passing score and a “low” passing score when it comes to licensing an individual to practice architecture. That’s why ARE candidates are not provided with detailed quantitative information on their passing exam score reports. The goal for a candidate, then, is simply to pass the exam.
It’s with this goal in mind that NCARB will provide detailed quantitative information for failing scores but not for passing scores. The additional information on failing exam score reports may give candidates an idea of the knowledge, skills, and abilities they might need to develop more to meet the goal of taking an ARE division: to earn a passing score.
Why can't we see the test items after the test?
NCARB routinely reuses well-performing exam items and therefore does not release exam questions after they appear on a form. Reusing these proven operational items lets NCARB score exams quickly and provide candidates with their results.
Candidates can get exposure to sample ARE questions through the free demonstration exam and practice exams available through their NCARB Record. These items are developed through the same process we described earlier in this blog series, so they’re an accurate example of what candidates can expect on their actual exam.