Module 45: Mokken-scale Analysis
This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore measurement quality, with an emphasis on its application in the context of educational assessment.
Module 44: Quality-control for Continuous Mode Tests
In this ITEMS module, we discuss errors that might occur at the different stages of the continuous mode testing (CMT) process, as well as recommended quality-control (QC) procedures to reduce the incidence of each error.
Module 43: Data Mining for Classification and Regression
This ITEMS module first provides a review of data mining techniques for classification and regression, which should be accessible to a wide audience in educational measurement.
Module 42: Simulation Studies in Psychometrics
This ITEMS module provides a comprehensive introduction to the topic of simulation that can be easily understood by measurement specialists at all levels of training and experience.
Module 41: Latent DIF Analysis Using Mixture Item Response Models
This ITEMS module provides an introduction to differential item functioning (DIF) analysis using mixture item response models, which involves comparing item profiles across latent groups instead of manifest groups.
Module 40: Item Fit Statistics for Item Response Theory Models
This ITEMS module provides an overview of methods used for evaluating the fit of item response theory (IRT) models.
Module 39: Polytomous Item Response Theory Models: Problems with the Step Metaphor
This ITEMS module discusses problems with the step metaphor as applied to polytomous IRT models for ordinal assessments.
Module 38: A Simple Equation to Predict a Subscore’s Value
This ITEMS module shows how to determine, through the use of a simple linear equation, whether a particular subscore adds enough value to be worth reporting.
Module 37: Improving Subscore Value through Item Removal
This ITEMS module shows, for a broad range of conditions of item overlap on subscores, that the value of a subscore is always improved by removing the overlapping items.
Module 36: Quantifying Error and Uncertainty Reductions in Scaling Functions
This ITEMS module describes and extends X-to-Y regression measures that have been proposed for use in the assessment of X-to-Y scaling and equating results.
Module 35: Polytomous Item Response Theory Models
This ITEMS module provides an accessible overview of polytomous IRT models.
Module 34: Automated Item Generation
This ITEMS module describes and illustrates a template-based method for generating test items.
Module 33: Population Invariance in Linking and Equating
This ITEMS module provides a comprehensive overview of population invariance in linking and equating and the relevant methodology developed for evaluating violations of invariance.
Module 32: Subscores
This ITEMS module provides an introduction to subscores.
Module 31: Scaling
This ITEMS module describes different types of raw scores and scale scores, illustrates how to incorporate various sources of information into a score scale, and introduces vertical scaling and its related designs and methodologies as a special type of scaling.
Module 30: Booklet Designs in Large-Scale Assessments
This ITEMS module describes the construction of booklet designs as the task of allocating items to booklets under context-specific constraints.
Module 29: Differential Step Functioning for Polytomous Items
This ITEMS module presents a didactic overview of the DSF framework and provides specific guidance and recommendations on how DSF can be used to enhance the examination of DIF in polytomous items.
Module 28: Raju’s Differential Functioning of Items and Tests
This ITEMS module explains DFIT and shows how this methodology can be utilized in a variety of DIF applications.
Module 27: Markov Chain Monte Carlo Methods for Item Response Theory Models
This ITEMS module provides an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models.
Module 26: Structural Equation Modeling
This ITEMS module focuses on foundational issues to inform readers of both the potential and the limitations of structural equation modeling (SEM).
Module 25: Multistage Testing
This ITEMS module describes multistage tests, including two-stage and testlet-based tests, and discusses the relative advantages and disadvantages of multistage testing as well as considerations and steps in creating such tests.
Module 24: Quality Control for Scoring, Equating, and Reporting
This ITEMS module describes quality control as a formal systematic process designed to ensure that expected quality standards are achieved during scoring, equating, and reporting of test scores.
Module 23: Practice Analysis Questionnaires: Design and Administration
This ITEMS module describes procedures for developing practice analysis surveys, with an emphasis on task inventory questionnaires.
Module 22: Standard Setting: Contemporary Methods
This ITEMS module describes some common standard-setting procedures used to derive performance levels for achievement tests in education, licensure, and certification.
Module 21: Multidimensional Item Response Theory
This ITEMS module illustrates how test practitioners and researchers can apply multidimensional item response theory (MIRT) to better understand what their tests are measuring, how accurately the different composites of ability are being assessed, and how this information can be cycled back into the test development process.
Module 20: Rule-space Methodology
This ITEMS module examines the logic of Tatsuoka's rule-space model as it applies to test development and analysis.
Module 19: Differential Item Functioning
This ITEMS module prepares the reader to use statistical procedures to detect differentially functioning test items.
Module 18: Setting Passing Scores
This ITEMS module describes standard setting for achievement measures used in education, licensure, and certification.
Module 17: Item Bank Development
This ITEMS module is designed to help those who develop assessments of any kind to understand the process of item banking, to analyze their needs, and to find or develop programs and materials that meet those needs.
Module 16: Comparison of Classical Test Theory and Item Response Theory
This ITEMS module provides a nontechnical comparison of classical test theory and item response theory.
Module 15: Assessing Student Achievement with Term Papers and Written Reports
This ITEMS module is intended to help teachers apply the development strategies and rules of evidence for performance assessment to term papers and written reports.
Module 14: Generalizability Theory
This ITEMS module introduces the framework and procedures of generalizability theory using a hypothetical scenario involving writing proficiency.
Module 13: Developing a Personal Grading Plan
This ITEMS module assists teachers in developing defensible grading practices that effectively and fairly communicate students' achievement status to their parents.
Module 12: High Quality Classroom Assessment
This ITEMS module promotes an understanding of the differences between sound and unsound assessments.
Module 11: Portfolio Assessment and Instruction
This ITEMS module clarifies the notion of portfolio assessment and helps users design such assessments in a thoughtful manner.
Module 10: Equating Methods in Item Response Theory
This ITEMS module provides the basis for understanding the process of score equating through the use of item response theory (IRT).
Module 09: Standard Error of Measurement
This ITEMS module describes the standard error of measurement (SEM), an important concept in classical test theory applications.
Module 08: Reliability in Classical Test Theory
This ITEMS module illustrates the idea of consistency with reference to two sets of test scores.
Module 07: Comparison of 1-, 2-, and 3-Parameter IRT Models
This ITEMS module discusses the 1-, 2-, and 3-parameter logistic item response theory models.
Module 06: Equating Methods in Classical Test Theory
This ITEMS module promotes a conceptual understanding of test form equating using traditional methods.
Module 05: Criterion-referenced Tests in Program Curriculum Evaluation
This ITEMS module describes how criterion-referenced tests (CRTs) can be used in program and curriculum evaluations for developing information to form judgments about educational programs and curricula.
Module 04: Formula Scoring of Multiple-Choice Tests
This ITEMS module discusses the formula scoring of multiple-choice tests.
Module 03: Reliability of Scores From Teacher-Made Tests
This ITEMS module discusses the reliability of scores from teacher-made tests.
Module 02: Obtaining Intended Weights When Combining Students' Scores
This ITEMS module describes how scores can be adjusted so that the intended weights are obtained.
Module 01: Performance Assessments: Design and Development
This ITEMS module presents and illustrates specific rules of test design in the form of a step-by-step strategy.
Digital Module 02: Scale Reliability in Structural Equation Modeling
In this digital ITEMS module, Dr. Greg Hancock and Dr. Ji An provide an overview of scale reliability from the perspective of structural equation modeling (SEM) and address some of the limitations of Cronbach’s α.
Digital Module 01: Reliability in Classical Test Theory
In this digital ITEMS module, Dr. Charlie Lewis and Dr. Michael Chajewski provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT).
André A. Rupp (Editor, 2016-2022)
Phone: (609) 252-8545