Catalog Advanced Search

Search by Categories
Search by Format
Search by Date Range (Start / End)
Search by Keyword
Sort By

Products are filtered by different dates, depending on the combination of live and on-demand components they contain and on whether any live components have already taken place.

54 Results

  • Digital Module 04: Diagnostic Measurement Checklists

    Contains 3 Component(s) | Recorded On: 03/26/2019

    In this digital ITEMS module, Dr. Natacha Carragher, Dr. Jonathan Templin, and colleagues provide a didactic overview of the specification, estimation, evaluation, and interpretation steps for diagnostic measurement / classification models (DCMs) centered around checklists for practitioners. A library of macros and supporting files for Excel, SAS, and Mplus is provided along with video tutorials for key practices.

    In this digital ITEMS module, Dr. Natacha Carragher, Dr. Jonathan Templin, and colleagues provide a didactic overview of the specification, estimation, evaluation, and interpretation steps for diagnostic measurement / classification models (DCMs), which are a promising psychometric modeling approach. These models can provide detailed skill- or attribute-specific feedback to respondents along multiple latent dimensions and hold theoretical and practical appeal for a variety of fields. They use a current unified modeling framework, the log-linear cognitive diagnosis model (LCDM), as well as a series of quality-control checklists for data analysts and scientific users to review the foundational concepts, practical steps, and interpretational principles for these models. They demonstrate how the models and checklists can be applied in real-life data-analysis contexts. A library of macros and supporting files for Excel, SAS, and Mplus is provided along with video tutorials for key practices. (A brief illustrative sketch of the LCDM item response function follows this entry.)

    Keywords: diagnostic measurement; diagnostic classification models (DCMs); log-linear cognitive diagnosis modeling (LCDM) framework; checklists; attributes; Q-matrix; model fit; Excel; Mplus; SAS

    Natacha Carragher

    Senior Statistician at the University of New South Wales

    Natacha is a Senior Statistician at the University of New South Wales (UNSW) as well as a consultant for the Department of Mental Health and Substance Abuse at the World Health Organization Headquarters in Geneva, Switzerland. She has 10 years of experience in the mental health field and, more recently, three years of experience in higher education assessment and behavioral addictions. Her research interests include the classification and structure of psychopathology, assessment and measurement, comorbidity, and the application of latent variable modelling techniques to public health and educational data. Her work has been published in a range of prestigious peer-reviewed journals, and she co-wrote a book chapter on self-report assessment for specific mental disorders in the Cambridge Handbook of Clinical Assessment and Diagnosis. In 2014, she received the Epidemiology and Public Health Section Young Epidemiologist Prize from the UK Royal Society of Medicine. As an educator, Natacha has provided statistical advice and expertise to postgraduate students and staff at UNSW and colleagues at other universities.

    Contact Natacha via n.carragher@unsw.edu.au

    Jonathan Templin

    Professor and E. F. Lindquist Chair of Educational Measurement and Statistics at the University of Iowa

    Jonathan is Professor and E. F. Lindquist Chair of Educational Measurement and Statistics at the University of Iowa. His research interests are focused on the development of psychometric and general quantitative methods, as applied in the psychological, educational, and social sciences. He teaches courses on advanced quantitative methodology with an emphasis on statistical modeling, model comparisons, and the integration and generalities of popular statistical and psychometric techniques. He is a co-author of the book Diagnostic Measurement: Theory, Methods, and Applications, which won the 2012 American Educational Research Association Division D Award for Significant Contribution to Educational Measurement and Research Methodology. He is the winner of the 2015 AERA Cognition and Assessment SIG Award for Outstanding Contribution to Research in Cognition and Assessment and the inaugural 2017 Robert Linn Lecture Award.

    Contact Jonathan via jonathan-templin@uiowa.edu

    Philip Jones

    Professor Emeritus at the University of New South Wales

    Philip is Professor Emeritus at the University of New South Wales (UNSW) where he was the Associate Dean in Education for UNSW Medicine for 10 years until his retirement in 2016. He was a senior staff specialist in the Department of Infectious Diseases at the Prince of Wales Hospital and held a conjoint appointment to the Prince of Wales Clinical School. He was involved with the development of the Medicine program from the inception of its planning in 1998. In 2010, he received the Vice-Chancellor’s Award for Teaching Excellence at UNSW.

    Contact Philip via philip.jones@unsw.edu.au

    Boaz Shulruf

    Associate Professor at the University of New South Wales

    Boaz is an Associate Professor in the Office of Medical Education at the University of New South Wales (UNSW) and an Honorary Associate Professor at the University of Auckland. His main research interests are in the area of psycho-educational assessment in higher education, particularly within the context of Medical and Health Sciences Education. He has expertise in quantitative research methodologies, educational assessment, and psychometrics. He supervises Independent Learning Project students in medical education and educational measurement.

    Contact Boaz via b.shulruf@unsw.edu.au

    Gary Velan

    Professor and Associate Dean at the University of New South Wales

    Gary is a Professor, Associate Dean, Head of the Department of Pathology, Director of Learning and Teaching Development, and Head of the Educational Research and Development Group in the School of Medical Sciences at the University of New South Wales (UNSW). His research focuses on educational innovations, including web-based assessments, virtual microscopy adaptive tutorials, and concept and knowledge maps, and their effect on learning outcomes in medical education.

    Contact Gary via g.velan@unsw.edu.au
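
    A brief, hedged sketch to complement the Module 04 abstract above (not part of the module's own materials): under the LCDM, the response probability for a hypothetical item i whose Q-matrix row specifies two binary attributes, α1 and α2, can be written as

        \operatorname{logit} P(X_i = 1 \mid \boldsymbol{\alpha}) = \lambda_{i,0} + \lambda_{i,1(1)}\,\alpha_1 + \lambda_{i,1(2)}\,\alpha_2 + \lambda_{i,2(1,2)}\,\alpha_1\alpha_2

    Here λ_{i,0} is the intercept (the log-odds of success for respondents who have mastered neither attribute), the λ_{i,1(·)} terms are attribute main effects, and λ_{i,2(1,2)} is the two-way interaction; the effects are typically constrained so that mastering additional attributes never lowers the probability of success. Constraining or removing terms in this general form yields more restrictive DCMs, which is why the LCDM is described as a unified framework.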

  • Digital Module 03: Nonparametric Item Response Theory

    Contains 5 Component(s)

    In this digital ITEMS module, Dr. Stefanie Wind introduces the framework of nonparametric item response theory (IRT), in particular Mokken scaling, which can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models.

    In this digital ITEMS module, Dr. Stefanie Wind introduces the framework of nonparametric item response theory (IRT), in particular Mokken scaling, which can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. She walks through the key distinction between parametric and nonparametric models, introduces the two key nonparametric models under Mokken scaling (the monotone homogeneity model and the double monotonicity model), and discusses modern extensions of the basic models. She also describes how researchers and practitioners can use key nonparametric statistics and graphical visualization tools to evaluate the fundamental measurement properties of an assessment from a nonparametric perspective. Finally, Dr. Wind illustrates the key reasoning steps and associated best practices using video-based worked examples completed with the mokken package in R. (An illustrative mokken sketch follows this entry.)

    Keywords: nonparametric IRT; Mokken scaling; monotone homogeneity model; double monotonicity model; rater effects; multilevel modeling; mokken package; R

    Stefanie A. Wind

    Assistant Professor, Department of Educational Research, University of Alabama, Tuscaloosa, AL

    Dr. Wind conducts methodological and applied research on educational assessments with an emphasis on issues related to raters, rating scales, Rasch models, nonparametric IRT, and parametric IRT. 

    Contact Stefanie via stefanie.wind@ua.edu
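
    As a small illustration of the kinds of nonparametric checks described in the Module 03 abstract above, the following sketch uses the mokken package in R, which the module itself relies on; the bundled acl (Adjective Checklist) data and the ten-item subset are chosen here purely for illustration and are not the module's own worked example.

        # Illustrative sketch only; the data and item subset are not from the module.
        library(mokken)

        data(acl)                            # Adjective Checklist example data bundled with mokken
        items <- acl[, 1:10]                 # an arbitrary ten-item subset for illustration

        coefH(items)                         # scalability coefficients H_ij, H_i, and H

        mono <- check.monotonicity(items)    # manifest monotonicity (monotone homogeneity model)
        summary(mono)
        plot(mono)                           # item (step) response functions with violations flagged

        rest <- check.restscore(items)       # rest-score checks of nonintersection (double monotonicity)
        summary(rest)

    The same summary/plot pattern applies to the package's other diagnostic functions, such as check.pmatrix and check.iio.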

  • Digital Module 02: Scale Reliability in Structural Equation Modeling

    Contains 5 Component(s) | Recorded On: 06/14/2019

    In this digital ITEMS module, Dr. Greg Hancock and Dr. Ji An provide an overview of scale reliability from the perspective of structural equation modeling (SEM) and address some of the limitations of Cronbach’s α.

    In this digital ITEMS module, we frame the topic of scale reliability within a confirmatory factor analysis and structural equation modeling (SEM) context and address some of the limitations of Cronbach’s α. This modeling approach has two major advantages: (1) it allows researchers to make explicit the relation between their items and the latent variables representing the constructs those items are intended to measure, and (2) it facilitates a more principled and formal practice of scale reliability evaluation. Specifically, we begin the module by discussing key conceptual and statistical foundations of the classical test theory (CTT) model and then framing it within an SEM context; we do so first with a single item and then expand this approach to a multi-item scale. This allows us to set the stage for presenting different measurement structures that might underlie a scale and, more importantly, for assessing and comparing those structures formally within the SEM context. We then make explicit the connection between measurement model parameters and different measures of reliability, emphasizing the challenges and benefits of key measures while ultimately endorsing the flexible McDonald’s ω over Cronbach’s α. We then demonstrate how to estimate key measures in a commercial software program (Mplus) and in three packages within an open-source environment (R). In closing, we make recommendations for practitioners about best practices in reliability estimation based on the ideas presented in the module. (An illustrative ω computation in R follows this entry.)

    Keywords:  scale reliability; structural equation modeling; Cronbach’s α; McDonald’s ω

    Gregory R. Hancock

    Professor

    Gregory R. Hancock is Professor, Distinguished Scholar-Teacher, and Director of the Measurement, Statistics and Evaluation program in the Department of Human Development and Quantitative Methodology at the University of Maryland, College Park, and Director of the Center for Integrated Latent Variable Research (CILVR). His research interests include structural equation modeling and latent growth models, and the use of latent variables in (quasi)experimental design. His research has appeared in a wide variety of prestigious peer-reviewed journals. He also co-edited the volumes Structural Equation Modeling: A Second Course (2006; 2013), The Reviewer's Guide to Quantitative Methods in the Social Sciences (2010), and Advances in Longitudinal Methods in the Social and Behavioral Sciences (2012). He has taught dozens of methodological workshops in the United States, Canada, and beyond, and he received the 2011 Jacob Cohen Award for Distinguished Contributions to Teaching and Mentoring from the APA.

    Contact Greg via ghancock@umd.edu

    Ji An

    Ph.D. Candidate

    Ji An is a Ph.D. candidate in the Measurement, Statistics and Evaluation program in the Department of Human Development and Quantitative Methodology at the University of Maryland, College Park. Her research interests include propensity score methods, analysis of survey data with complex sampling designs, structural equation modeling, and multilevel modeling. Before her time at College Park, she earned an M.A. in Teaching and Curriculum from Michigan State University. She has since worked as an instructor for introductory courses in educational statistics and as a graduate assistant for various courses at the undergraduate and graduate levels.

    Contact Ji via anji.nihao@gmail.com
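
    To complement the Module 02 abstract above: the module demonstrates estimation in Mplus and in three R packages that are not named in this listing, so the sketch below is only one possible open-source route. It assumes the lavaan package, a hypothetical data frame mydata with items y1 through y6, and the standard single-factor formula ω = (Σλ)² / ((Σλ)² + Σθ).

        # Illustrative sketch only; 'mydata' and the item names y1-y6 are placeholders.
        library(lavaan)

        model <- 'f =~ y1 + y2 + y3 + y4 + y5 + y6'
        fit   <- cfa(model, data = mydata, std.lv = TRUE)   # factor variance fixed at 1

        pe     <- parameterEstimates(fit)
        lambda <- pe$est[pe$op == "=~"]                                    # factor loadings
        theta  <- pe$est[pe$op == "~~" & pe$lhs == pe$rhs & pe$lhs != "f"] # item error variances

        omega <- sum(lambda)^2 / (sum(lambda)^2 + sum(theta))              # McDonald's omega
        omega

    Helper functions in packages such as semTools can report ω directly from a fitted lavaan model, but the hand computation above makes the link between the measurement model parameters and the reliability coefficient explicit.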

  • Digital Module 01: Reliability in Classical Test Theory

    Contains 2 Component(s)

    In this digital ITEMS module, Dr. Charlie Lewis and Dr. Michael Chajewski provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT).

    In this digital ITEMS module we provide a two-part introduction to the topic of reliability from the perspective of classical test theory (CTT). In the first part, which is directed primarily at technical beginners, we review and build on the content presented in the original didactic ITEMS article by Traub & Rowley (1991). Specifically, we discuss the notion of reliability as an intuitive everyday concept to lay the foundation for its formalization as a reliability coefficient via the basic CTT model. We then walk through the step-by-step computation of key reliability indices and discuss the data-collection conditions under which each is most suitable. In the second part, which is directed primarily at intermediate learners, we present a distribution-centered perspective on the same content. We discuss the associated assumptions of various CTT models ranging from parallel to congeneric, and review how these affect the choice of reliability statistics. Throughout the module, we use a customized Excel workbook with sample data and basic data manipulation functionalities to illustrate the computation of individual statistics and to allow for structured independent exploration. In addition, we provide quiz questions with diagnostic feedback as well as short videos that walk through sample solutions. (A small supplementary R sketch follows this entry.)

    Keywords: reliability; classical test theory; KR-20; KR-21; Cronbach’s α; Pearson correlation; Spearman-Brown formula; parallel model; tau-equivalent model; congeneric model

    Charlie Lewis

    Distinguished Presidential Appointee

    Charlie Lewis is a Distinguished Presidential Appointee at Educational Testing Service and Professor Emeritus of Psychology and Psychometrics at Fordham University. He also taught psychology and psychometrics at Dartmouth College, the University of Illinois, and the University of Groningen. His research interests include fairness and validity in educational testing; mental test theory, including item response theory and computerized adaptive testing; Bayesian inference; generalized linear models; and behavioral decision making. He was recently co-editor and co-author of Computerized Multistage Testing: Theory and Applications (2014).

    Contact Charlie via clewis@ets.org 

    Michael Chajewski

    Principal Psychometrician, Learning Science

    Michael Chajewski received his undergraduate degree in experimental psychology from the University of South Carolina and a master's degree in forensic psychology from John Jay College of Criminal Justice, The City University of New York. He received his doctoral degree in Psychometrics and Quantitative Psychology from Fordham University. As a psychometrician, Michael worked for eight years at the College Board, supporting operational testing programs such as the PSAT/NMSQT and AP and assisting in the redesign of the SAT. His contributions and research spanned a variety of technical work, including equating, test security, and system development. Since 2017, Michael has led the psychometrics team at Kaplan Test Prep, spearheading measurement model development for formative assessment and innovating assessment operating procedures. As an educator, Michael has taught both undergraduate and graduate courses within the CUNY system as well as at Fordham University. His research interests include configuring adaptive assessments, large-data model fit evaluations, missing data impact, scaling, norming, statistical software development, and Bayesian statistics.

    Contact Michael via michael.chajewski@kaplan.com
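
    The Module 01 entry above walks through these computations in a customized Excel workbook; purely as a supplementary sketch with made-up data (not the module's sample data), the R code below computes KR-20 for a small set of dichotomous items and applies the Spearman-Brown formula for a doubled test length.

        # Supplementary sketch with made-up data; the module's own worked
        # examples use an Excel workbook rather than R.
        X <- matrix(c(1,1,0,1,
                      1,0,0,1,
                      1,1,1,1,
                      0,0,0,1,
                      1,1,0,0,
                      0,1,1,1), nrow = 6, byrow = TRUE)   # 6 examinees x 4 dichotomous items

        k  <- ncol(X)
        p  <- colMeans(X)         # proportion correct per item
        q  <- 1 - p
        s2 <- var(rowSums(X))     # total-score variance (unbiased estimate; some texts divide by N)

        kr20 <- (k / (k - 1)) * (1 - sum(p * q) / s2)   # KR-20

        # Spearman-Brown: projected reliability if the test length were doubled.
        sb_doubled <- 2 * kr20 / (1 + kr20)

        c(KR20 = kr20, SpearmanBrown_doubled = sb_doubled)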