Digital Module 01: Reliability in Classical Test Theory


Recorded On: 07/26/2020

In this digital ITEMS module we provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT). In the first part, which is directed primarily at technical beginners, we review and build on the content presented in the original didactic ITEMS article by Traub & Rowley (1991). Specifically, we discuss the notion of reliability as an intuitive everyday concept to lay the foundation for its formalization as a reliability coefficient via the basic CTT model. We then walk through the step-by-step computation of key reliability indices and discuss the data-collection conditions under which each is most suitable. In the second part, which is directed primarily at intermediate learners, we present a distribution-centered perspective on the same content. We discuss the associated assumptions of various CTT models ranging from parallel to congeneric, and review how these affect the choice of reliability statistics. Throughout the module, we use a customized Excel workbook with sample data and basic data manipulation functionalities to illustrate the computation of individual statistics and to allow for structured independent exploration. In addition, we provide quiz questions with diagnostic feedback as well as short videos that walk through sample solutions.
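As a taste of the kind of computation the module walks through in Excel, here is a minimal Python sketch of Cronbach's alpha, one of the reliability coefficients listed in the keywords. The sample item scores below are invented for illustration only and are not the module's data set.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)
# A plain-Python sketch; the module itself performs this computation in Excel.

def cronbach_alpha(scores):
    """scores: list of examinee rows, each a list of k item scores."""
    k = len(scores[0])

    def variance(xs):
        # Sample variance (n - 1 in the denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five examinees, four dichotomously scored items (hypothetical data):
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 3))  # -> 0.8
```

For dichotomous items like these, the same formula reduces to KR-20, which the module also covers.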

Keywords:  classical test theory, CTT, congeneric, KR-20, KR-21, Cronbach’s alpha, Pearson correlation, reliability, Spearman-Brown formula, parallel, tau-equivalent, test-retest, validity 

Digital Module
Full digital module with all resources and activities.
DM01 VIDEO (Introduction, Version 1.5)
Open to view video. Video version of the introduction section of the module. [3 minutes]
DM01 VIDEO (Section 1, Version 1.5)
Open to view video. Video version of the first content section of the module. [8 minutes]
DM01 VIDEO (Section 2, Version 1.5)
Open to view video. Video version of the second content section of the module. [10 minutes]
DM01 VIDEO (Section 3, Version 1.5)
Open to view video. Video version of the third content section. [26 minutes]
DM01 VIDEO (Section 4, Version 1.5)
Open to view video. Video version of the fourth content section. [20 minutes]
DM01 VIDEO (Section 5, Version 1.5)
Open to view video. Video version of the fifth content section. [10 minutes]
Companion Article
Open to download resource.
Data Files
Open to download resource. The data file and the Excel workbook for the worked examples and data activities.

Charlie Lewis

Distinguished Presidential Appointee

Charlie Lewis is a Distinguished Presidential Appointee at Educational Testing Service and Professor Emeritus of Psychology and Psychometrics at Fordham University. He also taught psychology and psychometrics at Dartmouth College, the University of Illinois, and the University of Groningen. His research interests include fairness and validity in educational testing; mental test theory, including item response theory and computerized adaptive testing; Bayesian inference; generalized linear models; and behavioral decision making. He was recently co-editor and co-author of Computerized Multistage Testing: Theory and Applications (2014).

Contact Charlie via clewis@ets.org 

Michael Chajewski

Principal Psychometrician, Learning Science

Michael Chajewski received his undergraduate degree in experimental psychology from the University of South Carolina and a master's degree in forensic psychology from John Jay College of Criminal Justice, The City University of New York. He received his doctoral degree in Psychometrics and Quantitative Psychology from Fordham University. As a psychometrician, Michael worked for eight years at the College Board, supporting operational testing programs such as the PSAT/NMSQT and AP, and assisted in the redesign of the SAT. His contributions and research spanned a variety of technical work, including equating, test security, and system development. Since 2017, Michael has led the psychometrics team at Kaplan Test Prep, spearheading measurement model development for formative assessment and innovating assessment operating procedures. As an educator, Michael has taught both undergraduate and graduate courses within the CUNY system as well as at Fordham University. His research interests include configuring adaptive assessments, large-data model fit evaluation, missing-data impact, scaling, norming, statistical software development, and Bayesian statistics.

Contact Michael via michael.chajewski@kaplan.com