Research: Contributing to the field of assessment and measurement

Publications

The members of our Research and Psychometrics team actively publish their work in professional and association journals and present at conferences and meetings, sharing their results and insights with the education, measurement, and research communities. The Research Library below is a sample of that work and will be updated as additional publications and presentations become available.

Cut Score Estimation

Sukin, T., Segall, D., & Nicewander, A.

(April 2015) Cut score estimation: Comparing Bayesian and frequentist approaches. Paper presented at the meeting of the National Council on Measurement in Education, Chicago, IL.
Download

Standard Setting

Lewis, D. M., Mitzel, H. C., Mercado, R. L., & Schulz, E. M.

(October 2013) The bookmark standard setting procedure. In G. J. Cizek (Ed.), Setting Performance Standards (2nd ed.), 225–253. New York: Routledge.

Assessment Validity

Sukin, T. M., & Dunn, J.

(April 2013) Presenting a validity argument for a statewide alternative assessment: Addressing inferences with novel approaches. Paper presented at the meeting of the National Council on Measurement in Education, Vancouver, Canada.
Download

Item Equating

Nicewander, W. A.

(May 2010) IRT equating using parameterized test characteristic curves. Paper presented at the meeting of the National Council on Measurement in Education, Denver, CO.
Download

Item Equating

Sukin, T., Dunn, J., Kim, W., & Keller, R.

(May 2010) A balancing act: Common items nonequivalent groups (CINEG) equating item selection. Paper presented at the meeting of the National Council on Measurement in Education, Denver, CO.
Download

Computing Latent Trait Estimates

Nicewander, W. A., & Schulz, E. M.

(2010) A comparison of two methods for computing latent trait estimates from the number-correct score. Paper presented at the 2010 National Conference on Student Assessment, Detroit, MI.
Download

Item Response Theory

Duong, M. Q., Subedi, D. R., & Lee, J.

(2008) Parameter estimation in multidimensional IRT models: A comparison of maximum likelihood and Bayesian approaches. Paper presented at the 2008 annual meeting of the American Educational Research Association, New York.
Download

Vertical Scaling

Lewis, D. M. & Haug, C.

(2005) Aligning policy and methodology to achieve consistent across-grade performance standards. Paper published in a special issue of Applied Measurement in Education on the topic of Vertically Moderated Standards.

Standard Setting

Schulz, E. M., & Mitzel, H.

(2009) A Mapmark method of standard setting as implemented for the National Assessment Governing Board. In E. V. Smith, Jr., & G. E. Stone (Eds.), Applications of Rasch Measurement in Criterion-Referenced Testing: Practice Analysis to Score Reporting, 194–235. Maple Grove, MN: JAM Press.

Automated Scoring

Lottridge, S. M., Schulz, E. M., & Mitzel, H. C.

(2013) Using automated scoring to monitor reader performance and detect reader drift in essay scoring. In The Handbook of Automated Essay Evaluation: Current Applications and New Directions, 233–250. New York: Routledge Academic.

Item Field Testing

Sukin, T., & Nicewander, A.

(April 2013) More for your buck: Enhancing field-test data efficiency with the use of item-specific priors. Paper presented at the meeting of the National Council on Measurement in Education, San Francisco, CA.
Download

Automated Scoring

Burkhardt, A., & Lottridge, S.

(October 2013) Examining the impact of training samples on identifying off-topic responses in automated essay scoring. Paper presented at the annual meeting of Northern Rocky Mountain Educational Research Association, Jackson, WY.
Download

Constructed Response Items

Winter, P.C., Wood, S.W., Lottridge, S.M., Hughes, T.B., & Walker, T.E.

(September 2013) The utility of online mathematics constructed-response items: Maintaining important mathematics in state assessments and providing appropriate access to students (Final Research Report). Monterey, CA: Pacific Metrics Corporation.

Automated Scoring

Lottridge, S., Winter, P., & Mugan, L.

(2013) The AS decision matrix: Using program stakes and item type to make informed decisions about automated scoring implementations. White paper published by the Pacific Metrics Corporation Research and Psychometrics department, Monterey, CA.
Download

Item Parameter Drift

Sukin, T. M., & Keller, L. A.

(April 2011) Item parameter drift as an indication of differential opportunity to learn: An exploration of item flagging methods and accurate classification of examinees. Paper presented at the meeting of the National Council on Measurement in Education, New Orleans, LA.
Download

Item Flagging Criteria

Sukin, T. M., & Keller, L. A.

(May 2011) The effect of deleting anchor items on the classification of examinees: An exploration of item flagging criteria. Paper presented at the meeting of the American Educational Research Association, Denver, CO.

Item Equating

Duong, M. Q., & von Davier, A. A.

(2008) Kernel equating with observed mixture distributions in a single-group design. Paper presented at the 2008 annual meeting of the National Council on Measurement in Education, New York.
Download

Connect With Us

Our researchers and program specialists attend and present at many conferences and professional meetings throughout the year; check our conference schedule for upcoming events. To talk with us about research currently underway or new projects, call us at 831-646-6400.