ITEMS Portal
ACCESS IS FREE!
Quickly create a new user account for the Elevate learning management system (this website) to access any of the modules.
You do NOT have to be an NCME member to do this!
Assessment Development

All-access Pass
This pass provides immediate access to ALL print and digital modules in the portal by "registering" you for each module and displaying them as a single collection.

All-access Pass (PRINT ONLY)
This provides access to a ZIP folder with all 45 previously published print modules.

Digital Module 01: Reliability in Classical Test Theory
In this digital ITEMS module, Dr. Charlie Lewis and Dr. Michael Chajewski provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT).
Keywords: classical test theory, CTT, congeneric, KR-20, KR-21, Cronbach’s alpha, Pearson correlation, reliability, Spearman-Brown formula, parallel, tau-equivalent, test-retest, validity
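
As a quick point of reference (a sketch added here for orientation, not part of the module itself), two classical results behind these keywords are Cronbach’s alpha and the Spearman-Brown prophecy formula, where k is the number of items, \sigma^2_i the variance of item i, \sigma^2_X the variance of the total score, \rho the reliability of the original test, and m the factor by which test length is changed:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right), \qquad \rho_{\text{new}} = \frac{m\rho}{1 + (m-1)\rho}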

Digital Module 02: Scale Reliability in Structural Equation Modeling
In this digital ITEMS module, Dr. Greg Hancock and Dr. Ji An provide an overview of scale reliability from the perspective of structural equation modeling (SEM) and address some of the limitations of Cronbach’s α.
Keywords: congeneric, Cronbach’s alpha, reliability, scale reliability, SEM, structural equation modeling, McDonald’s omega, model fit, parallel, tau-equivalent
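
For orientation (a sketch added here, not drawn from the module), McDonald’s omega for a congeneric one-factor model with unit factor variance can be written in terms of the item loadings \lambda_i and unique variances \theta_{ii}:

\omega = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_{ii}}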

Digital Module 07: Subscore Evaluation & Reporting
In this digital ITEMS module, Dr. Sandip Sinharay reviews the current state of subscore reporting, including how subscores are used in operational reporting, what professional standards they need to meet, and how their psychometric properties can be evaluated.
Keywords: Diagnostic scores, disattenuation, DETECT, DIMTEST, factor analysis, multidimensional item response theory (MIRT), proportional reduction in mean squared error (PRMSE), reliability, subscores
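
For orientation (a general sketch, not taken from the module), the proportional reduction in mean squared error for a subscore estimate \hat{\tau}_s of the true subscore \tau_s can be written as

\text{PRMSE} = 1 - \frac{E\left[(\tau_s - \hat{\tau}_s)^2\right]}{\operatorname{Var}(\tau_s)},

and a subscore is typically judged to add value only when the PRMSE of an estimate based on the observed subscore exceeds that of an estimate based on the total score.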

Digital Module 08: Foundations of Operational Item Analysis
In this digital ITEMS module, Dr. Hanwook Yoo and Dr. Ronald K. Hambleton provide an accessible overview of operational item analysis approaches for dichotomously scored items within the frameworks of classical test theory and item response theory.
Keywords: Classical test theory, CTT, corrections, difficulty, discrimination, distractors, item analysis, item response theory, operations, R Shiny, TAP, test development
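
As a point of reference (a sketch added here, not part of the module), two classical item statistics behind these keywords are the item difficulty (proportion correct) and a corrected item-total correlation used as a discrimination index, where x_{ij} \in \{0, 1\} is examinee i’s score on item j, N the number of examinees, and X the total score:

p_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \qquad r_j = \operatorname{corr}\!\left(x_j,\; X - x_j\right)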

Digital Module 09: Sociocognitive Assessment for Diverse Populations
In this digital ITEMS module, Dr. Robert Mislevy and Dr. Maria Elena Oliveri introduce and illustrate a sociocognitive perspective on educational measurement, which focuses on a variety of design and implementation considerations for creating fair and valid assessments for learners from diverse populations with diverse sociocultural experiences.
Keywords: assessment design, Bayesian statistics, cross-cultural assessment, diverse populations, educational measurement, evidence-centered design, fairness, international assessments, prototype, reliability, sociocognitive assessment, validity

Digital Module 12: Think-aloud Interviews and Cognitive Labs
In this digital ITEMS module, Dr. Jacqueline Leighton and Dr. Blair Lehman review differences between think-aloud interviews to measure problem-solving processes and cognitive labs to measure comprehension processes and illustrate both traditional and modern data-collection methods.
Keywords: ABC tool, cognitive laboratory, cog lab, cognition, cognitive model, interrater agreement, kappa, probe, rubric, thematic analysis, think-aloud interview, verbal report

Digital Module 14: Planning and Conducting Standard Setting
In this digital ITEMS module, Dr. Michael B. Bunch provides an in-depth, step-by-step look at how standard setting is done. It does not focus on any specific procedure or methodology (e.g., modified Angoff, bookmark, body of work) but on the practical tasks that must be completed for any standard-setting activity.
Keywords: achievement level descriptor, certification and licensure, cut score, feedback, interquartile range, performance level descriptor, score reporting, standard setting, panelist, vertical articulation

Digital Module 15: Accessibility of Educational Assessments
In this digital ITEMS module, Dr. Ketterlin Geller and her colleagues provide an introduction to the accessibility of educational assessments. They discuss the legal basis for accessibility in K-12 and higher education organizations and describe how test and item design features, as well as examinee characteristics, affect the role that accessibility plays in evaluating test validity during test development and operational deployment.
Keywords: Accessibility, accommodations, examinee characteristics, fairness, higher education, K-12 education, item design, legal guidelines, test development, universal design

Digital Module 17: Data Visualizations
In this digital ITEMS module, Nikole Gregg and Dr. Brian Leventhal discuss strategies to ensure that data visualizations achieve graphical excellence. The instructors review key literature, discuss strategies for enhancing graphical presentation, and provide an introduction to the Graph Template Language (GTL) in SAS to illustrate how elementary components can be used to make efficient, effective, and accurate graphics for a variety of audiences.
Keywords: data visualization, graphical excellence, Graph Template Language, SAS

Digital Module 21: Results Reporting for Large-scale Assessments
In this digital ITEMS module, Dr. Francis O’Donnell and Dr. April Zenisky provide a firm grounding in the conceptual and operational considerations around results reporting for summative large-scale assessments. Throughout the module, they highlight research-grounded good practices and conclude with principles and ideas for conducting reporting research.
Keywords: data, large-scale assessment, results, score reporting, validity, visualization

Digital Module 22: Supporting Decisions with Assessment
In this digital ITEMS module, Dr. Chad Gotch walks through different forms of assessment, from nearly invisible everyday actions to high-profile, annual, large-scale tests, with an eye toward educational decision-making.
Keywords: assessment literacy, classroom assessment, decision-making, formative assessment, in-the-moment assessment, interim assessment, large-scale assessment, major milepost, periodic check-in, unit test

Digital Module 24: Assessment Literacy
In this digital ITEMS module, Dr. Jade Caines Lee provides an opportunity for learners to gain introductory-level knowledge of educational assessment. The module’s framework will allow K-12 teachers, school building leaders, and district-level administrators to build “literacy” in three key assessment areas: measurement, testing, and data.
Keywords: assessment literacy, classroom assessment, data, educational measurement, formative assessment, K-12 education, public schooling, reliability, summative assessment, validity

Digital Module 26: Content Alignment in Standards-based Educational Assessment
In this digital ITEMS module, Dr. Katherine Reynolds and Dr. Sebastian Moncaleano discuss content alignment, its role in standards-based educational assessment, and popular methods for conducting alignment studies.
Keywords: Achieve methodology, content alignment, content area standards, content validity, standards-based assessment, Surveys of Enacted Curriculum, Webb methodology

Digital Module 28: Unusual Things that Usually Occur in a Credentialing Testing Program
In this digital ITEMS module, Drs. Richard Feinberg, Carol Morrison, and Mark R. Raymond provide an overview of how credentialing testing programs operate and discuss special considerations that need to be made when unusual things occur.
Keywords: credentialing and licensure testing, assessment design, assessment challenges, threats to score validity, operational psychometrics

Module 17: Item Bank Development
In this print module, Dr. Annie W. Ward and Dr. Mildred Murray-Ward help those who develop assessments of any kind to understand the process of item banking, to analyze their needs, and to find or develop programs and materials that meet those needs.
Keywords: assessment development, item bank, item retrieval, item quality, parameter, standardized testing, test development

Module 18: Setting Passing Scores
In this print module, Dr. Gregory J. Cizek describes standard setting for achievement measures used in education, licensure, and certification.
Keywords: cut score, Bookmark method, compromise method, Ebel method, examinee-based method, proficiency classification, standard setting, validity

Module 22: Standard Setting: Contemporary Methods
In this print module, Dr. Gregory J. Cizek, Dr. Michael B. Bunch, and Dr. Heather Koons describe some common standard-setting procedures used to derive performance levels for achievement tests in education, licensure, and certification.
Keywords: achievement classification, certification, cut scores, licensure, performance standards, proficiency classification, standard setting

Module 23: Practice Analysis Questionnaires: Design and Administration
In this print module, Dr. Mark R. Raymond describes procedures for developing practice analysis surveys with emphasis on task inventory questionnaires for credentialing examinations.
Keywords: certification, credentialing, job analysis, licensure, practice analysis, questionnaire, rating scales

Module 24: Quality Control for Scoring, Equating, and Reporting
In this print module, Dr. Avi Allalouf describes quality control (QC) as a formal systematic process designed to ensure that expected quality standards are achieved during scoring, equating, and reporting of test scores.
Keywords: scoring, equating, errors, mistakes, score reporting, operational practice, quality control, reporting, standards

Module 25: Multistage Testing
In this print module, Dr. Amy Hendrickson describes multistage tests (MSTs), including two-stage and testlet-based tests, and discusses the relative advantages and disadvantages of multistage testing as well as considerations and steps in creating such tests.
Keywords: adaptive testing, multistage testing, principled assessment design, scoring, testlet, two-stage tests

Module 30: Booklet Designs in Large-Scale Assessments
In this print module, Dr. Andreas Frey, Dr. Johannes Hartig, and Dr. Andre A. Rupp describe the construction of complex booklet designs as the task of allocating items to booklets under context-specific constraints for large-scale standardized assessments and educational surveys.
Keywords: booklet design, educational survey, experimental design, item response theory, IRT, large-scale assessments, measurement

Module 34: Automated Item Generation
In this print module, Dr. Mark J. Gierl and Dr. Hollis Lai describe and illustrate a template-based method for automatically generating test items for large-scale production enterprises.
Keywords: automatic item generation, AIG, item model, item development, test development, technology and testing

Module 44: Quality-control for Continuous Mode Tests
In this print module, Dr. Avi Allalouf, Dr. Tony Gutentag, and Dr. Michal Baumer discuss errors that might occur at the different stages of the continuous mode tests (CMT) process as well as the recommended quality-control (QC) procedure to reduce the incidence of each error.
Keywords: automated review, computer-based testing, CBT, continuous mode tests, CMT, human review, quality control, QC, scoring, test administration, test analysis, test scoring