Catalog Advanced Search

54 Results

  • Module 05: Criterion-referenced Tests in Program Curriculum Evaluation

    Contains 1 Component(s)

    This ITEMS module describes how criterion-referenced tests (CRTs) can be used in program and curriculum evaluations for developing information to form judgments about educational programs and curricula.

    This monograph describes how criterion-referenced tests (CRTs) can be used in program and curriculum evaluations for developing information to form judgments about educational programs and curricula. The material is organized as follows: a brief introduction to the monograph, its purpose and goals, a discussion of the relationship between evaluation and CRTs, pertinent information about program and curriculum evaluation, relevant facts about CRTs, and a summation. A principal goal of the essay is to describe concepts and procedures in terms that are instructionally illuminating. The reader is guided to identify and examine particular points at which decisions must be made about how, when, and why CRTs may aid the evaluation process. These steps are each identified as "An Instructional Step" and presented in separate boxes with pertinent guiding questions. Conciseness is a further aim of this monograph, one consequence of which is that several important concepts are only cursorily described or alluded to. Annotated references are included. Accompanying this instructional monograph is a "Student's Self-Test." An "Instructor's Guide" with expanded references and materials for photocopying or preparing transparencies is available by mail order (see "Teaching Aids" ordering information).

    Keywords: criterion-referenced test, CRT, curriculum evaluation, program evaluation

  • Module 04: Formula Scoring of Multiple-Choice Tests

    Contains 1 Component(s)

    This ITEMS module discusses the formula scoring of multiple-choice tests.

    Formula scoring is a procedure designed to reduce multiple-choice test score irregularities due to guessing. Typically, a formula score is obtained by subtracting a proportion of the number of wrong responses from the number correct. Examinees are instructed to omit items when their answers would be sheer guesses among all choices but otherwise to guess when unsure of an answer. Thus, formula scoring is not intended to discourage guessing when an examinee can rule out one or more of the options within a multiple-choice item. Examinees who, contrary to the instructions, do guess blindly among all choices are not penalized by formula scoring on the average; depending on luck, they may obtain better or worse scores than if they had refrained from this guessing. In contrast, examinees with partial information who refrain from answering tend to obtain lower formula scores than if they had guessed among the remaining choices. (Examinees with misinformation may be exceptions.) Formula scoring is viewed as inappropriate for most classroom testing but may be desirable for speeded tests and for difficult tests with low passing scores. Formula scores do not approximate scores from comparable fill-in-the-blank tests, nor can formula scoring preclude unrealistically high scores for examinees who are very lucky.

    Keywords: formula scoring, multiple-choice test score, guessing 
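
    The scoring rule described in the abstract is easy to make concrete. Below is a minimal R sketch (not part of the module) of the usual correction for guessing, assuming every item has the same number of options; the counts in the example are hypothetical.

    ```r
    # Formula score = number right minus a fraction of the number wrong;
    # omitted items simply do not enter the score.
    formula_score <- function(n_right, n_wrong, k) {
      n_right - n_wrong / (k - 1)   # k = number of options per item
    }

    # Hypothetical examinee: 60 right, 20 wrong, 20 omitted on a 100-item,
    # four-option test.
    formula_score(n_right = 60, n_wrong = 20, k = 4)   # 60 - 20/3 = 53.33
    ```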

  • Module 03: Reliability of Scores From Teacher-Made Tests

    Contains 1 Component(s)

    This ITEMS module discusses the reliability of scores from teacher-made tests.

    Reliability is the property of a set of test scores that indicates the amount of measurement error associated with the scores. Teachers need to know about reliability so that they can use test scores to make appropriate decisions about their students. The level of consistency of a set of scores can be estimated by using the methods of internal analysis to compute a reliability coefficient. This coefficient, which can range between 0.0 and +1.0, usually has values around 0.50 for teacher-made tests and around 0.90 for commercially prepared standardized tests. Its magnitude can be affected by such factors as test length, test-item difficulty and discrimination, time limits, and certain characteristics of the examinee group, such as the extent of their testwiseness, their level of motivation, and their homogeneity in the ability measured by the test.

    Keywords: reliability, test scores, reliability coefficient, internal analysis
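
    As a concrete companion to the abstract, here is a minimal R sketch (not part of the module) of one internal-analysis estimate, coefficient alpha (equivalent to KR-20 for dichotomously scored items); the simulated data are purely illustrative.

    ```r
    # Coefficient alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
    cronbach_alpha <- function(resp) {
      k <- ncol(resp)
      item_var  <- apply(resp, 2, var)     # variance of each item's scores
      total_var <- var(rowSums(resp))      # variance of the total scores
      (k / (k - 1)) * (1 - sum(item_var) / total_var)
    }

    # Simulated 300-examinee, 20-item test in which items share a common ability
    set.seed(3)
    ability <- rnorm(300)
    resp <- sapply(1:20, function(j) as.numeric(ability + rnorm(300) > 0))
    cronbach_alpha(resp)
    ```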

  • Module 02: Obtaining Intended Weights When Combining Students' Scores

    Contains 1 Component(s)

    This ITEMS module describes how scores can be adjusted so that the intended weights are obtained.

    An instructor typically combines students' scores from several measures such as assignments and exams when assigning course grades. The relative weights intended for these scores are at least inferred and often stated explicitly by the instructor. This module describes how scores can be adjusted so that the intended weights are obtained. Techniques are discussed for two grading criteria: (a) grading students through comparison to others in the class and (b) grading students through comparison to predetermined levels of performance.

    Keywords: grading criteria, intended weights, combine scores, course grade
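
    A minimal R sketch (not part of the module) of the core idea behind the first grading criterion: a component's effective influence on a raw composite depends on its score spread as well as its nominal weight, so scores are standardized before the intended weights are applied. The scores and weights below are hypothetical.

    ```r
    z <- function(v) (v - mean(v)) / sd(v)   # standardize to mean 0, SD 1

    exam     <- c(78, 85, 62, 90, 70)        # large spread (SD about 11)
    homework <- c(9, 10, 8, 10, 9)           # small spread (SD under 1)

    # Intended weights: 60% exam, 40% homework.
    # A component's effective influence on a raw composite is weight x spread,
    # so the exam swamps the homework here despite the 60/40 intent:
    c(exam = 0.6 * sd(exam), homework = 0.4 * sd(homework))

    # Standardizing first equalizes the spreads, so the weights behave as intended:
    composite <- 0.6 * z(exam) + 0.4 * z(homework)
    round(composite, 2)
    ```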

  • Module 01: Performance Assessments: Design and Development

    Contains 1 Component(s)

    This ITEMS module presents and illustrates specific rules of test design in the form of a step-by-step strategy.

    Achievement can be, and often is, measured by means of observation and professional judgment. This form of measurement is called performance assessment. Developers of large-scale assessments of communication skills often rely on performance assessments in which carefully devised exercises elicit performance that is observed and judged by trained raters. Teachers also rely heavily on day-to-day observation and judgment. Like other tests, quality performance assessment must be carefully planned and developed to conform to specific rules of test design. This module presents and illustrates those rules in the form of a step-by-step strategy for designing such assessments, through the specification of (a) reason(s) for assessment, (b) type of performance to be evaluated, (c) exercises that will elicit performance, and (d) systematic rating procedures. General guidelines are presented for maximizing the reliability, validity, and economy of performance assessments.

    Keywords: performance assessment, reliability, validity

  • Digital Module SP1: Sociocognitive Assessment (SNEAK PEEK)

    Contains 1 Component(s)

    In this digital ITEMS module, Dr. Bob Mislevy conceptually introduces a sociocognitive perspective on educational measurement, which focuses on a variety of design and implementation considerations for creating fair and valid assessments for learners from diverse populations with diverse sociocultural experiences.

    In this digital ITEMS module, Dr. Bob Mislevy conceptually introduces a sociocognitive perspective on educational measurement, which focuses on a variety of design and implementation considerations for creating fair and valid assessments for learners from diverse populations with diverse sociocultural experiences. The module contains a general overview section, a description of the sociocognitive framing of assessment issues, and a section on implications for assessment around key concepts such as reliability, validity, and fairness. The module is conceptual and non-statistical in nature and is currently a "sneak peek" preview of a more comprehensive module to be released later this spring, which will include practical illustrations with digital assessment prototypes.

    Keywords: sociocognitive; educational measurement; assessment design; evidence-centered design; reliability; validity; fairness; prototype; Bayesian statistics; international assessments; cross-cultural assessment

    Bob Mislevy

    Lord Chair in Measurement and Statistics

    Dr. Mislevy is the Frederic M. Lord Chair in Measurement and Statistics at Educational Testing Service as well as Professor Emeritus of Measurement, Statistics, and Evaluation at the University of Maryland, with affiliations in Second Language Acquisition and Survey Methods. Dr. Mislevy's research applies developments in statistics, technology, and cognitive science to practical problems in educational assessment. His work includes a multiple-imputation approach to integrate sampling and psychometric models in the National Assessment of Educational Progress (NAEP), an evidence-centered framework for assessment design, and simulation- and game-based assessment with the Cisco Networking Academy. Among his many awards are AERA’s Raymond B. Cattell Early Career Award for Programmatic Research, NCME’s Triennial Award for Technical Contributions to Educational Measurement (three times), NCME’s Award for Career Contributions, AERA’s E.F. Lindquist Award for contributions to educational assessment, the International Language Testing Association's Messick Lecture Award, and AERA Division D’s inaugural Robert L. Linn Distinguished Address Award. He is a member of the National Academy of Education and a past president of the Psychometric Society. He has served on projects for the National Research Council, the Spencer Foundation, and the MacArthur Foundation concerning assessment, learning, and cognitive psychology, and on the Gordon Commission on the Future of Educational Assessment. His most recent book is "Sociocognitive Foundations of Educational Assessment," for which he received the 2019 NCME Annual Award and on which this ITEMS module is based.

    Contact Bob via rmislevy@ets.org

  • Digital Module 08: Foundations of Operational Item Analysis

    Contains 6 Component(s)

    In this digital ITEMS module, Dr. Hanwook Yoo and Dr. Ronald K. Hambleton provide an accessible overview of operational item analysis approaches for dichotomously scored items within the frameworks of classical test theory and item response theory.

    Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Dr. Hanwook Yoo and Dr. Ronald K. Hambleton provide an accessible overview of operational item analysis approaches for dichotomously scored items within these frameworks. They review the different stages of test development and the associated item analyses used to identify poorly performing items and to select effective items. Moreover, they walk through the computational and interpretational steps for CTT- and IRT-based evaluation statistics using simulated data examples and review various graphical displays such as distractor response curves, item characteristic curves, and item information curves. The digital module contains sample data, Excel sheets with various templates and examples, diagnostic quiz questions, data-based activities, curated resources, and a glossary.

    Keywords: Classical test theory, corrections, difficulty, discrimination, distractors, item analysis, item response theory, R Shiny, TAP, test development
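
    As a quick illustration of two of the CTT statistics discussed in the abstract, here is a minimal R sketch (not part of the module, whose worked examples use Excel templates) computing item difficulty (p-values) and corrected item-total discrimination from simulated dichotomous responses.

    ```r
    # Minimal CTT item analysis for 0/1 item scores (simulated, hypothetical data)
    set.seed(2)
    theta <- rnorm(500)                                   # examinee ability
    cut   <- seq(-1, 1, length.out = 10)                  # item difficulty thresholds
    resp  <- sapply(cut, function(b) as.numeric(theta + rnorm(500) > b))

    difficulty <- colMeans(resp)                          # p-values: proportion correct

    # Corrected item-total discrimination: correlate each item with the total
    # score computed from the remaining items (the "rest" score).
    discrimination <- sapply(seq_len(ncol(resp)), function(j)
      cor(resp[, j], rowSums(resp[, -j])))

    round(cbind(difficulty, discrimination), 2)           # low values flag weak items
    ```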

  • Digital Module 07: Subscores - Evaluation & Reporting

    Contains 2 Component(s) Recorded On: 07/12/2019

    In this digital ITEMS module, Dr. Sandip Sinharay reviews the status quo on the reporting of subscores, which includes how they are used in operational reporting, what kinds of professional standards they need to meet, and how their psychometric properties can be evaluated.

    In this digital ITEMS module, Dr. Sandip Sinharay reviews the status quo on the reporting of subscores. Specifically, he first provides examples of operationally reported subscores, discusses why subscores are in high demand, and reviews the professional quality standards that subscores have to satisfy. He then describes various statistical methods that can be used to evaluate whether subscores satisfy professional standards, including descriptive statistics, DIMTEST/DETECT, factor analysis, multidimensional item response theory, and the Haberman method. He provides guidance for how to implement these methods on real data using the R package ‘subscores’.

    Keywords: Diagnostic scores, disattenuation, DETECT, DIMTEST, factor analysis, multidimensional item response theory (MIRT), proportional reduction in mean squared error (PRMSE), reliability, subscores  
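
    For orientation, here is a rough base-R sketch of Haberman's PRMSE comparison for a single subscore. It assumes observed subscores, total scores, and a subscore reliability estimate are already available, and it is not the interface of the R package named in the abstract; the usage line is hypothetical.

    ```r
    # Does a subscore have added value over the total score?  Compare the
    # proportional reduction in mean squared error (PRMSE) when the subscore's
    # true score is estimated (a) from the subscore and (b) from the total score.
    prmse_check <- function(sub, total, rel_sub) {
      var_sub_true <- rel_sub * var(sub)          # true-score variance of subscore
      err_var_sub  <- (1 - rel_sub) * var(sub)    # error variance of subscore

      # The subscore's error is part of the total score's error, so remove it
      # before treating the observed covariance as a true-score covariance.
      cov_true <- cov(sub, total) - err_var_sub

      prmse_sub   <- rel_sub                                    # subscore as estimator
      prmse_total <- cov_true^2 / (var_sub_true * var(total))   # total as estimator

      c(PRMSE_subscore = prmse_sub,
        PRMSE_total    = prmse_total,
        added_value    = prmse_sub > prmse_total)
    }

    # Usage (hypothetical score vectors):
    # prmse_check(sub = algebra_sub, total = total_score, rel_sub = 0.75)
    ```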


  • Digital Module 06: Posterior Predictive Model Checking

    Contains 13 Component(s) Recorded On: 04/24/2019

    In this digital ITEMS module, Dr. Allison Ames and Aaron Myers discuss the most common Bayesian approach to model-data fit evaluation, which is called Posterior Predictive Model Checking (PPMC), for simple linear regression and item response theory models.

    In this digital ITEMS module, Dr. Allison Ames and Aaron Myers discuss the most common Bayesian approach to model-data fit evaluation, called Posterior Predictive Model Checking (PPMC). Specifically, drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model, and violations of model-data fit have numerous adverse consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding the Bayesian approaches for evaluating model-data fit is critical. The instructors review the conceptual foundation of Bayesian inference as well as PPMC and walk through the computational steps of PPMC using real-life data examples from simple linear regression and item response theory (IRT) analysis. They provide guidance for how to interpret PPMC results and discuss how to implement PPMC for other models and data. The digital module contains sample data, SAS code, diagnostic quiz questions, data-based activities, curated resources, and a glossary.

    Keywords: Bayesian inference; simple linear regression; item response theory (IRT); model-data fit; posterior predictive model checking (PPMC); Bayes’ theorem; Yen’s Q3; item fit
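
    The module's worked examples use SAS; as a language-agnostic illustration, here is a minimal R sketch of PPMC for a simple linear regression under a flat prior, using the sample skewness of the outcome as the discrepancy measure. All data and settings below are hypothetical.

    ```r
    # Posterior predictive model checking (PPMC) for y = b0 + b1*x + error
    set.seed(1)
    n <- 100
    x <- rnorm(n)
    y <- 1 + 0.5 * x + rnorm(n)                      # "observed" data

    X <- cbind(1, x)
    bhat    <- drop(solve(t(X) %*% X, t(X) %*% y))   # OLS estimates
    s2      <- sum((y - X %*% bhat)^2) / (n - 2)     # residual variance estimate
    XtX_inv <- solve(t(X) %*% X)

    skew <- function(v) mean((v - mean(v))^3) / sd(v)^3   # discrepancy measure

    R <- 2000
    T_obs <- skew(y)
    T_rep <- numeric(R)
    for (r in 1:R) {
      # Posterior draws under a noninformative (flat) prior
      sigma2 <- (n - 2) * s2 / rchisq(1, df = n - 2)
      beta   <- MASS::mvrnorm(1, mu = bhat, Sigma = sigma2 * XtX_inv)
      y_rep  <- drop(X %*% beta) + rnorm(n, sd = sqrt(sigma2))  # replicated data
      T_rep[r] <- skew(y_rep)
    }
    mean(T_rep >= T_obs)   # posterior predictive p-value; values near 0 or 1 flag misfit
    ```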

    Allison J. Ames

    Assistant Professor

    Allison is an assistant professor in the Educational Statistics and Research Methods program in the Department of Rehabilitation, Human Resources and Communication Disorders, Research Methodology, and Counseling at the University of Arkansas. There, she teaches courses in educational statistics, including a course on Bayesian inference. Allison received her Ph.D. from the University of North Carolina at Greensboro. Her research interests include Bayesian item response theory, with an emphasis on prior specification; model-data fit; and models for response processes. Her research has been published in prominent peer-reviewed journals. She enjoyed collaborating on this project with a graduate student, senior faculty member, and the Instructional Design Team.
    Contact Allison via boykin@uark.edu

    Aaron Myers

    Graduate Assistant / Doctoral Student

    Aaron is a doctoral student in the Educational Statistics and Research Methods program at the University of Arkansas. His research interests include Bayesian inference, data mining, multidimensional item response theory, and multilevel modeling. Aaron previously received his M.A. in Quantitative Psychology from James Madison University. He currently serves as a graduate assistant where he teaches introductory statistics and works in a statistical consulting lab. 

    Contact Aaron via ajm045@uark.edu

  • Digital Module 05: The G-DINA Framework

    Contains 3 Component(s) Recorded On: 04/10/2019

    In this digital ITEMS module, Dr. Wenchao Ma and Dr. Jimmy de la Torre introduce the G-DINA model, which is a general framework for specifying, estimating, and evaluating a wide variety of cognitive diagnosis models for the purpose of diagnostic measurement.

    In this digital ITEMS module, Dr. Wenchao Ma and Dr. Jimmy de la Torre introduce the generalized deterministic inputs, noisy “and” gate (G-DINA) model, which is a general framework for specifying, estimating, and evaluating a wide variety of cognitive diagnosis models (CDMs). The module contains a non-technical introduction to diagnostic measurement, an introductory overview of the G-DINA model as well as common special cases, and a review of model-data fit evaluation practices within this framework. They use the flexible GDINA R package, which is available for free within the R environment and provides a user-friendly graphical interface in addition to the code-driven layer. The digital module also contains videos of worked examples, solutions to data activity questions, curated resources, a glossary, and quizzes with diagnostic feedback. 

    Keywords: diagnostic measurement; cognitive diagnosis models (CDMs); diagnostic classification models (DCMs); G-DINA framework; GDINA package; model fit; model comparison; Q-matrix; validation
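
    As a small numerical companion (not taken from the module), the sketch below evaluates G-DINA success probabilities for a single two-attribute item under the identity link, using hypothetical item parameters; in practice these parameters would be estimated from response data, for example with the GDINA package described above.

    ```r
    # G-DINA (identity link) for one item measuring two attributes:
    # P(correct | a1, a2) = d0 + d1*a1 + d2*a2 + d12*a1*a2
    delta <- c(d0 = 0.10,   # baseline (no attributes mastered)
               d1 = 0.25,   # main effect of attribute 1
               d2 = 0.30,   # main effect of attribute 2
               d12 = 0.25)  # interaction of attributes 1 and 2

    profiles <- expand.grid(a1 = 0:1, a2 = 0:1)   # the four latent classes

    p_correct <- with(profiles,
                      delta["d0"] + delta["d1"] * a1 +
                      delta["d2"] * a2 + delta["d12"] * a1 * a2)
    cbind(profiles, p_correct)

    # Constraints on the deltas recover common special cases: setting the main
    # effects to zero gives DINA, dropping the interaction gives the additive
    # CDM (A-CDM), and so on.
    ```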

    Wenchao Ma

    Assistant Professor

    Dr. Wenchao Ma is an assistant professor in the Educational Research program in the Department of Educational Studies in Psychology, Research Methodology, and Counseling at the University of Alabama. He received his Ph.D. from Rutgers, The State University of New Jersey. His research interests lie in educational and psychological measurement in general, and item response theory and cognitive diagnosis modeling in particular. Wenchao was a recipient of the 2017 Bradley Hanson Award for Contributions to Educational Measurement given by the National Council on Measurement in Education as well as the 2018 Outstanding Dissertation Award given by the American Educational Research Association. 

    Contact Wenchao via wenchao.ma@ua.edu

    Jimmy de la Torre

    Professor

    Dr. Jimmy de la Torre is a Professor at the Faculty of Education at The University of Hong Kong. His research interests include latent variable models for educational and psychological measurement and how to use assessment to improve classroom instruction and learning. His recent work includes the development of various cognitive diagnosis models, implementation of estimation codes for cognitive diagnosis models, and development of the G-DINA framework for model estimation, test comparison, and Q-matrix validation, which is the focus of this module. He is an ardent advocate of CDM, and, to date, has conducted more than a dozen national and international CDM workshops. Jimmy was a recipient of the 2008 Presidential Early Career Award for Scientists and Engineers given by the White House, the 2009 Jason Millman Promising Measurement Scholar Award, and the 2017 Bradley Hanson Award for Contributions to Educational Measurement awarded by the National Council on Measurement in Education (NCME).

    Contact Jimmy via j.delatorre@hku.hk