The National Conference on Student Assessment (NCSA) is the premier forum for assessment practitioners to discuss what is happening in the real world of educational assessment – what’s new, what’s going on at the state and federal levels, what works and what does not. The annual conference draws large audiences of administrators and educators from the national, state, and district levels who have a strong interest in student assessment and curriculum at the local level, as well as in large-scale assessment. The goal of the 2017 conference is to give states a forum to share and discuss the best practices, strategies, research studies, data, and resources that support states in implementing high-quality assessments and solid accountability systems – systems that improve academic achievement for all students and help close achievement gaps.
This year’s NCSA conference is scheduled for June 28-30 in Austin, TX. Pacific Metrics plans to make a big splash by participating on multiple levels: as sponsors, exhibitors, and presenters!
Visit our booth to speak with the research scientists who lead the design & development of our assessment technologies. Or plan to attend one of our presentations.
- ELPA21: Standard Setting and Score Interpretation (90 min symposium) by Dr. Daniel Lewis
- Assessment Analytics–Opportunities for Achieving Perishable Insights (45 min roundtable) by Dr. Michelle Barrett
For more information, please visit: http://www.ccsso.org/ncsa.html
110 East 2nd Street, Austin, TX 78701, United States
The International Society for Technology in Education (ISTE) serves educators and education leaders committed to empowering connected learners in a connected world. Through their active membership, online communities, corporate partners and affiliate network, ISTE reaches more than 100,000 educators around the world.
The annual conference is on June 25-28 in San Antonio, TX. The ISTE Conference & Expo is the gathering place for more than 15,000 passionate educators from the education community. Each year, attendees and industry professionals engage with ed tech’s hottest topics in a dynamic learning and networking environment. We’re proud of our affiliation as Corporate Members of ISTE and we’ll be sponsoring the event too.
This year, we’re teaming up with our partners at OpenEd–a nationally recognized leader in open curriculum resources.
Visit us in booth #1762 to speak with our experts or learn about our products & services.
For more information, please visit: http://conference.iste.org/2017/
900 E Market St, San Antonio, TX 78205, United States
IMS Global’s annual Learning Impact Leadership Institute is scheduled for May 16-19. The Learning Impact Leadership Institute takes the best thinking in education technology and puts it into practice. Pacific Metrics is proud to join other forward-thinking EdTech leaders from around the globe to reimagine technology in teaching and learning and to have a voice in setting a roadmap for the IMS community aligned to the needs of the education community across K-20. The power of Learning Impact comes from our community, a true collaboration of institutions and suppliers, envisioning and building a plug-and-play educational environment that enables institutions to innovate and provide better learning experiences.
At Learning Impact, meaningful progress to accelerate the deployment of an interoperable ecosystem is made together through a commitment to open standards. It’s a revolution to create the future of EdTech together! Our test delivery platform, Unity, has benefited greatly from IMS’ mission. Unity is among the first educational platforms to achieve OneRoster compliance and certification for Caliper Analytics!
Pacific Metrics is committed to interoperable technology. We’re proud to be a Contributing Member of IMS Global and we look forward to the annual Learning Impact Leadership Institute every year. This year, we are sponsoring the event, hosting an exhibit booth, and giving a handful of presentations!
Here’s a sneak peek at some of our presentations:
- Cohesive Formative and Summative e-Assessment using LTI, OneRoster, QTI, and Caliper Analytics by Dr. Michelle Barrett and Bill Mistretta
- Content Development and QTI by Tone Turner and Chris Jezek
- Integrating Educational Resources with e-Assessment via Multiple Standards by Adam Blum and Adrian Hutchinson
For more information, please visit: https://www.imsglobal.org/lili2017
Sheraton Denver Downtown Hotel, 1550 Court Place, Denver, Colorado 80202, United States
The annual meeting of the National Council on Measurement in Education (NCME) is scheduled for April 26-30 in San Antonio, TX. Pacific Metrics has received confirmation that a handful of our proposals were accepted! Our contributions will range from paper presentations to a training session to a symposium.
The theme of the 2017 Annual Meeting is “Advancing Large Scale and Classroom Assessment through Research and Practice.” This theme acknowledges the need to foster and strengthen connections between large-scale assessment practice and classroom use of assessment. In an era where data are critical for decision making for both accountability and internal improvements, it is extremely important to facilitate the integration of the two types of assessment so that information gathered from different sources can be streamlined and utilized to inform policy as well as improve practice. Furthermore, with the new advancements in innovative technology and measurement techniques, bridging the gap between summative and formative assessment has never been more viable through joint efforts among stakeholders in policy and education.
Here’s a preliminary list of our various presentations.
- ELPA21: Standard Setting and Score Interpretation by Dr. Daniel Lewis
- Training session on Shadow-Test Approach to Adaptive Testing by Dr. Wim van der Linden and Dr. Michelle Barrett
- Presentation on Item Response Theory (IRT) Parameter Linking by Dr. Wim van der Linden and Dr. Michelle Barrett
- Computer Adaptive Testing (CAT) symposium (collection of 4 papers) by Dr. Wim van der Linden, Dr. Qi Dao, Dr. Seung Choi, David King, and Bingnan Jiang
- CAT whitepaper by Dr. Tia Sukin
For more information, please visit: http://www.ncme.org/ncme/ncme/Annual_Meeting/Next_Meeting.aspx/
101 Bowie St, San Antonio, TX 78205, United States
The annual Association of Test Publishers (ATP) Conference, Innovations in Testing, is scheduled for March 5-8, 2017.
ATP is widely recognized as the top venue to exchange ideas, best practices, research, and applications for the testing industry. Pacific Metrics will be hosting an exhibitor booth and sponsoring the event. We’re proud of our association with ATP and look forward to another exciting event! In addition, our esteemed colleagues will present on two topics: Fairness in Automated Scoring and Universal Test Assembly with CAT.
If you’d like to visit us or view a product demo, we’ll be in Booths #68 and #69, right next to ACT. You’re invited to visit our booth to learn more about our products & services. This is an excellent opportunity to speak with the research scientists who lead the design & development of our assessment technologies! Some of the topics we’ll be discussing include:
- Automated Scoring
- Test Engineering & Automatic Item Generation
- Adaptive Testing Technology
- Computer Adaptive Testing
- Testing Platform Interoperability
The full program is available on the ATP conference website.
The 2016 Conference on Test Security will be held on October 18-20 in Cedar Rapids. Our Director of Research Innovation, Dr. Wim J. van der Linden, will give a presentation on the topic of “Bayesian Detection of Cheating on Tests.”
What is Bayesian Detection and what are the advantages?
A Bayesian approach to the detection of cheating on tests has several advantages over classical statistical hypothesis testing. First, it is based on the correct probability distribution of the number of items on which the test taker has cheated given the observed number of aberrant responses. Second, whereas classical hypothesis testing only allows us to control its Type I error, the Bayesian approach allows us to directly account for the incidence of cheating in the population of test takers. Third, the approach resolves the problem of whether or not to condition on the responses by the source in the detection of answer copying, which has plagued the literature since Frary et al. (1977). Fourth, it automatically accounts for the presence of estimation error in any of the parameters of the psychometric model (e.g., ability parameters).
A natural Bayesian way of presenting evidence of cheating is to report its posterior odds given the responses observed for the test taker. In this presentation we will show the odds for four different types of cheating: item pre-knowledge, item harvesting, answer copying, and fraudulent erasures on answer sheets. For each of these types of cheating, the odds can be calculated using a simple, extremely fast algorithm known in test theory as the Lord-Wingersky algorithm; the only difference lies in the parameters fed into the algorithm.
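The Lord-Wingersky algorithm mentioned above is a simple recursion over items that builds up the distribution of a count (here, the number of aberrant responses) from per-item probabilities. The sketch below is illustrative only: the item probabilities and prior odds are hypothetical placeholders, and the actual parameters depend on the psychometric model and the type of cheating, as the abstract notes.

```python
def lord_wingersky(probs):
    """Lord-Wingersky recursion: distribution of the number of successes
    over independent items with success probabilities `probs`."""
    dist = [1.0]  # probability of 0 successes over 0 items
    for p in probs:
        nxt = [0.0] * (len(dist) + 1)
        for count, mass in enumerate(dist):
            nxt[count] += mass * (1.0 - p)  # item not aberrant
            nxt[count + 1] += mass * p      # item aberrant
        dist = nxt
    return dist  # dist[k] = P(exactly k aberrant responses)

def posterior_odds(prior_odds, probs_cheating, probs_honest, observed_count):
    """Posterior odds of cheating = prior odds times the likelihood ratio of
    the observed aberrant-response count under the two hypotheses.
    All inputs here are hypothetical illustration values."""
    like_cheating = lord_wingersky(probs_cheating)[observed_count]
    like_honest = lord_wingersky(probs_honest)[observed_count]
    return prior_odds * like_cheating / like_honest
```

A single pass over the items costs O(n^2) additions in total, which is why the recursion is so fast in practice: the per-type differences reduce to which probabilities are fed in.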
For more information about this presentation, please refer to the description of Presentation 23 in the event program. https://cete.ku.edu/2016-conference-test-security
1200 Collins Road Northeast, Cedar Rapids, IA 52402, United States
The theme of this year’s E-ATP Conference is “Gaining Advantage Through Assessment.”
Our very own Dr. Michelle Barrett and Dr. Wim van der Linden will be presenting on the topic of “A Universal Test Assembler for the Automated Production of Fixed-Form Tests, Adaptive Tests, and Any Mixture of the Two.”
Interest is growing in testing formats that combine aspects of fixed-form and adaptive testing to best suit the practical context of educational, psychological, or licensure/certification testing.
The purpose of the session is to introduce a universal automated test assembler that guarantees real-time generation of test forms meeting the same test specifications in any desired format. The session discusses how to configure the assembler to meet both functional and non-functional requirements, including content blueprints, the presence of set-based items, adaptation to test-taker proficiencies within a common passage or stimulus, control of item-exposure rates, inclusion of field-test items while preserving the same “look and feel” of the test, options for test takers to navigate among items, and accommodation of large numbers of concurrent users.
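The assembler itself is not described in detail here, but the core idea behind automated test assembly — selecting items that maximize a statistical objective subject to blueprint constraints — can be sketched in miniature. Everything below (the item pool, item names, and information values) is hypothetical, and the brute-force search stands in for the mixed-integer programming solvers used in real assemblers.

```python
from itertools import combinations

# Hypothetical mini item pool: (item_id, content_area, information at target ability)
POOL = [
    ("i1", "algebra",  0.40), ("i2", "algebra",  0.55),
    ("i3", "geometry", 0.35), ("i4", "geometry", 0.60),
    ("i5", "algebra",  0.25), ("i6", "geometry", 0.30),
]

def assemble(pool, length, blueprint):
    """Toy 0-1 test assembly: choose `length` items maximizing total
    information subject to exact per-content-area counts in `blueprint`.
    Production assemblers solve this as a mixed-integer program instead
    of enumerating every candidate form."""
    best_form, best_info = None, -1.0
    for form in combinations(pool, length):
        counts = {}
        for _item_id, area, _info in form:
            counts[area] = counts.get(area, 0) + 1
        if counts != blueprint:
            continue  # form violates the content blueprint
        info = sum(i for _item_id, _area, i in form)
        if info > best_info:
            best_form, best_info = form, info
    return [item_id for item_id, _area, _info in best_form], best_info

form, info = assemble(POOL, 4, {"algebra": 2, "geometry": 2})
```

In the same framework, adaptive and fixed-form delivery differ only in when the selection is re-run: once up front for a fixed form, or after each response with updated ability estimates for an adaptive test, which is what lets a single assembler serve both formats.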
The hour-long session is scheduled for Friday, September 30 at 10:30AM, and will include two 20-minute presentations followed by audience discussion.
We hope to see you there!