Constructed-Response Automated Scoring Engine
Immediate Scoring. Valuable Time and Cost Savings.
CRASE® is a technology-based, scientifically engineered solution that immediately scores students’ submissions to constructed-response test items. From short-answer items and longer essays to technology-enhanced items, CRASE is proven to deliver accurate, reliable scores and to achieve critical cost and time savings compared with hand scoring.
High Scores in the Automated Student Assessment Prize (ASAP) Competition
Pacific Metrics participated in the vendor demonstration of the Automated Student Assessment Prize (ASAP) competition. This national study examined the performance of nine automated essay scoring engines across eight writing prompts. CRASE scored as well as, or better than, human scorers in the study.
CRASE consists of six modules which can be used in any combination to build scoring models:
| Module | Description |
| --- | --- |
| Non-Attempt Scorer | Catches non-attempts such as gibberish or off-topic responses |
| Constructed Response Scorer | Provides feedback on two- to four-sentence short-answer items |
| Math Scorer | Supports TEI graphing items and mathematical solutions, as well as equations/expressions |
| Short CR Scorer | Scores one- or two-sentence short-answer responses, searching for the presence or absence of key phrases, then applies rubric-based rules to assign a score or provide feedback |
| Cloze Scorer | Supports cloze items or one-word/phrase responses |
| Essay Scorer | Evaluates longer-form essay responses |
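To illustrate the kind of check a non-attempt detector performs, here is a minimal, hypothetical heuristic (not Pacific Metrics’ actual implementation): it flags responses that are very short or mostly non-alphabetic, such as keyboard mashing.

```python
import re

def looks_like_non_attempt(response: str,
                           min_words: int = 3,
                           min_alpha_ratio: float = 0.6) -> bool:
    """Flag probable non-attempts: empty, too short, or mostly
    non-alphabetic responses. Thresholds are illustrative."""
    text = response.strip()
    if not text:
        return True
    # Count word-like tokens; too few suggests a non-attempt.
    words = re.findall(r"[A-Za-z']+", text)
    if len(words) < min_words:
        return True
    # Mostly punctuation/symbols also suggests gibberish.
    alpha = sum(c.isalpha() for c in text)
    visible = sum(not c.isspace() for c in text)
    return visible == 0 or alpha / visible < min_alpha_ratio

print(looks_like_non_attempt("asdf;lkj !!!"))                            # gibberish
print(looks_like_non_attempt("The cell membrane regulates transport."))  # real attempt
```

A real engine would add topic-relevance checks (e.g. vocabulary overlap with the prompt) to catch fluent but off-topic responses.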
Scores Delivered in Seconds. Cost and Time Savings Delivered Consistently.
- CRASE helps you reduce the personnel, facilities, and training costs associated with hand scoring. Including or retaining constructed-response items in your assessments is now more affordable.
- Use CRASE to reduce teachers’ grading burden.
- CRASE also supports a drafting-revision model to help students prepare essays for teacher scoring.
- A new direction in scoring, CRASE saves time and money. In high-stakes testing programs where open-ended items typically receive two human ratings, CRASE can replace one of those ratings.
- For practice tests or classroom-based testing, CRASE can deliver the first rating, with an optional read-behind by the teacher or a handscoring vendor.
- CRASE also scores numeric-based items using graph lines or points, questions requiring some computation, and combinations of numeric-based and short answer items.
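One common pattern in scoring numeric-based items is comparing a parsed response to a key within a tolerance. The sketch below is an assumption about how such a check might look, not CRASE’s actual logic:

```python
def score_numeric(student_answer: str, key: float, tol: float = 1e-3) -> int:
    """Award 1 point if the response parses as a number within
    `tol` of the key, else 0. Hypothetical single-point rubric."""
    try:
        value = float(student_answer.strip())
    except ValueError:
        return 0  # non-numeric responses earn no credit here
    return int(abs(value - key) <= tol)

print(score_numeric("3.14", 3.1416, tol=0.01))  # close enough: 1
print(score_numeric("pi", 3.1416))              # not a number: 0
```

Graphing items would extend this idea to coordinates and line parameters, and combination items would pair it with text-based scoring of the short-answer portion.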
How does CRASE work?
The CRASE engine analyzes a sample of human-scored student responses to produce a model that emulates human scoring behavior. Responses scored by CRASE flow through three stages:
- Preprocessing standardizes responses to prepare them for the later scoring stages.
- Feature extraction analyzes the response using Natural Language Processing tools to produce a set of numeric features that represents key elements of the rubric.
- Score prediction applies a statistical or machine learning model to the feature values to produce a rubric-based score.
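The three stages above can be sketched as a simple pipeline. Everything here is illustrative, not the production engine: the feature set stands in for real NLP features, and the linear weights stand in for a model fit to human-scored responses.

```python
import re

def preprocess(response: str) -> str:
    # Stage 1: standardize the response (case, whitespace).
    return re.sub(r"\s+", " ", response.strip().lower())

def extract_features(text: str) -> list[float]:
    # Stage 2: derive numeric features (placeholders for richer
    # NLP features keyed to the rubric).
    words = text.split()
    n = len(words)
    unique_ratio = len(set(words)) / n if n else 0.0
    avg_word_len = sum(map(len, words)) / n if n else 0.0
    return [float(n), unique_ratio, avg_word_len]

# Stage 3: a linear model whose weights would normally be fit to a
# sample of human-scored responses (these values are made up).
WEIGHTS = [0.02, 1.5, 0.3]
INTERCEPT = 0.5

def predict_score(features: list[float], low: int = 0, high: int = 4) -> int:
    raw = INTERCEPT + sum(w * f for w, f in zip(WEIGHTS, features))
    return max(low, min(high, round(raw)))  # clamp to the rubric range

essay = "Plants convert sunlight into energy through photosynthesis."
score = predict_score(extract_features(preprocess(essay)))
print(score)
```

In practice the score-prediction stage could be any statistical or machine learning model; the essential property is that it is trained on human-scored responses so its outputs emulate human rating behavior.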
CRASE Model Options
CRASE services include custom model development or use of existing scoring models at specific grade levels.
- We offer custom development with unique models built for each constructed-response item.
- We offer generalized scoring models for high school, grades 6–8, and grades 4–5 that can be used with any essay topic. Generalized models are appropriate for low-stakes uses and programs that do not have access to data needed to build a custom model.