Module 15: Assessing Student Achievement with Term Papers and Written Reports
This ITEM module is intended to help teachers apply the development strategies and rules of evidence for performance assessment to term papers and written reports.
This module is written for teachers and is intended to help them apply the development strategies and rules of evidence for performance assessment to term papers and written reports. These traditional classroom assignments can be designed and used to stimulate student performance that requires higher-order thinking and student self-investment in a topic. The issues of assessment quality presented in this module will help teachers derive dependable information about student performance from term papers and written reports to use in decisions about instruction, grading, and other aspects of teaching.
Keywords: performance assessment, term papers, written reports
Module 14: Generalizability Theory
This ITEM module introduces the framework and procedures of generalizability theory using a hypothetical scenario involving writing proficiency.
Generalizability theory consists of a conceptual framework and a methodology that enable an investigator to disentangle multiple sources of error in a measurement procedure. The roots of generalizability theory can be found in classical test theory and analysis of variance (ANOVA), but generalizability theory is not simply the conjunction of classical theory and ANOVA. In particular, the conceptual framework in generalizability theory is unique. This framework and the procedures of generalizability theory are introduced and illustrated in this instructional module using a hypothetical scenario involving writing proficiency.
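The one-facet persons-by-raters design the abstract alludes to can be sketched numerically. The following is an illustrative example only, not code from the module: variance components for persons, raters, and the residual are estimated from ANOVA mean squares, and a generalizability coefficient is formed from them. The data and the function name `g_study` are hypothetical.

```python
def g_study(scores):
    """Estimate variance components for a fully crossed persons x raters
    design (one-facet G study); scores[p][r] is person p's score from rater r."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))

    var_p = (ms_p - ms_res) / n_r   # universe-score (person) variance
    var_r = (ms_r - ms_res) / n_p   # rater main effect
    var_pr = ms_res                 # person-rater interaction + residual
    return var_p, var_r, var_pr

# Hypothetical ratings: 4 essays, each scored by the same 2 raters.
var_p, var_r, var_pr = g_study([[7, 8], [5, 6], [9, 9], [4, 5]])
# Generalizability coefficient for a D study that also uses 2 raters:
g_coef = var_p / (var_p + var_pr / 2)
print(var_p, var_r, var_pr, round(g_coef, 3))
```

Separating the rater and interaction components in this way is exactly what classical reliability alone cannot do, which is the point the abstract makes about the framework being more than classical theory plus ANOVA.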
Keywords: generalizability theory, measurement procedure, classical test theory, analysis of variance
Module 13: Developing a Personal Grading Plan
This ITEM module assists teachers in developing defensible grading practices that effectively and fairly communicate students' achievement status to their parents.
The purpose of this instructional module is to assist teachers in developing defensible grading practices that effectively and fairly communicate students' achievement status to their parents. In formulating such practices, it is essential that teachers first consider their personal grading philosophy and then create a compatible personal grading plan. The module delineates key philosophical issues that should be addressed and then outlines the procedural steps essential to establishing a grading plan. Finally, the features of several common methods of absolute and relative grading are compared.
Keywords: personal grading plan, grading philosophy, students' achievement status
Module 12: High Quality Classroom Assessment
This module promotes the understanding of differences between sound and unsound assessments.
Teachers who gather accurate information about student achievement through the use of sound classroom assessment contribute to effective teaching and learning. On the other hand, those who fail to understand and apply the rules of evidence for sound assessment risk doing great harm to students. Thus, all teachers must understand the differences between sound and unsound assessments. This module is designed to promote that understanding. It examines the many users and uses of classroom assessment, the wide range of achievement targets to be assessed, the array of assessment methods teachers use, and the importance of marrying targets and methods in ways that promote sound assessment. Four key attributes of sound assessment are presented for the teachers to apply in their own classroom assessment environments.
Keywords: sound assessment, assessment methods, classroom assessment, student achievement
Module 11: Portfolio Assessment and Instruction
This ITEM module clarifies the notion of portfolio assessment and helps users design such assessments in a thoughtful manner.
The term portfolio has become a popular buzz word. Unfortunately, it is not always clear exactly what is meant or implied by the term, especially when used in the context of portfolio assessment. This training module is intended to clarify the notion of portfolio assessment and help users design such assessments in a thoughtful manner. We begin with a discussion of the rationale for assessment alternatives and then discuss portfolio definitions, characteristics, pitfalls, and design considerations.
Keywords: portfolio assessment, portfolio design, assessment methods
Module 10: Equating Methods in Item Response Theory
This ITEM module provides the basis for understanding the process of score equating through the use of item response theory (IRT).
The purpose of this instructional module is to provide the basis for understanding the process of score equating through the use of item response theory (IRT). A context is provided for addressing the merits of IRT equating methods. The mechanics of IRT equating and the need to place parameter estimates from separate calibration runs on the same scale are discussed. Some procedures for placing parameter estimates on a common scale are presented. In addition, IRT true-score equating is discussed in some detail. A discussion of the practical advantages derived from IRT equating is offered at the end of the module.
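One common procedure for placing parameter estimates from separate calibrations on a common scale is the mean/sigma method, which the following sketch illustrates. This is an assumption-laden example, not code from the module: the item parameter values are invented, and the function names are hypothetical.

```python
from statistics import mean, pstdev

def mean_sigma_constants(b_new, b_base):
    """Mean/sigma scaling constants from the difficulty estimates of the
    common items under the new and base calibrations."""
    A = pstdev(b_base) / pstdev(b_new)
    B = mean(b_base) - A * mean(b_new)
    return A, B

def rescale(a, b, A, B):
    """Place an item's (a, b) estimates from the new calibration on the
    base scale: b* = A*b + B, a* = a/A."""
    return a / A, A * b + B

# Hypothetical difficulty estimates of three common items under each run.
b_new = [-1.0, 0.0, 1.0]
b_base = [-1.5, 0.5, 2.5]

A, B = mean_sigma_constants(b_new, b_base)
print(A, B)                       # A is approximately 2.0, B approximately 0.5
print(rescale(1.2, 1.0, A, B))    # a* near 0.6, b* near 2.5
```

Once all item parameters (and ability estimates, via theta* = A*theta + B) are on the base scale, true-score equating can proceed by matching expected scores across forms.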
Keywords: score equating, item response theory, IRT, equating method
Module 09: Standard Error of Measurement
This ITEM module describes the standard error of measurement (SEM), an important concept in classical test theory applications.
The standard error of measurement (SEM) is the standard deviation of errors of measurement that are associated with test scores from a particular group of examinees. When used to calculate confidence bands around obtained test scores, it can be helpful in expressing the unreliability of individual test scores in an understandable way. Score bands can also be used to interpret intraindividual and interindividual score differences. Interpreters should be wary of over-interpretation when using approximations for correctly calculated score bands. It is recommended that SEMs at various score levels be used in calculating score bands rather than a single SEM value.
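As a quick numerical illustration of the relationship described above (not taken from the module itself): under classical test theory the SEM can be computed from a group's score standard deviation and a reliability estimate, and then used to form a score band around an obtained score. The numbers below are invented.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def confidence_band(score, sd, reliability, z=1.0):
    """Band of +/- z standard errors around an obtained score
    (z = 1 gives a roughly 68% band)."""
    e = z * sem(sd, reliability)
    return (score - e, score + e)

# Test scores with SD = 10 and reliability 0.91 give SEM = 3.0, so an
# obtained score of 75 carries a ~68% band of about (72, 78).
print(confidence_band(75, 10, 0.91))
```

Note that this uses a single group-level SEM; the abstract's recommendation is stronger, namely to use SEMs computed at various score levels when building such bands.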
Keywords: standard error of measurement, SEM, score bands, confidence bands, unreliability of individual test scores
Module 08: Reliability in Classical Test Theory
This ITEM module illustrates the idea of consistency with reference to two sets of test scores.
The topic of test reliability is about the relative consistency of test scores and other educational and psychological measurements. In this module, the idea of consistency is illustrated with reference to two sets of test scores. A mathematical model is developed to explain both relative consistency and relative inconsistency of measurements. A means of indexing reliability is derived using the model. Practical methods of estimating reliability indices are considered, together with factors that influence the reliability index of a set of measurements and the interpretation that can be made of that index.
Keywords: reliability, reliability index, precision, measurement error, parallel forms design, test-retest design
Module 07: Comparison of 1-, 2-, and 3-Parameter IRT Models
This ITEM module discusses the 1-, 2-, and 3-parameter logistic item response theory models.
This module discusses the 1-, 2-, and 3-parameter logistic item response theory models. Mathematical formulas are given for each model, and comparisons among the three models are made. Figures are included to illustrate the effects of changing the a, b, or c parameter, and a single data set is used to illustrate the effects of using parameter estimates (as opposed to the true parameter values) and to compare parameter estimates achieved through applying the different models. The estimation procedure itself is discussed briefly. Discussions of model assumptions, such as dimensionality and local independence, can be found in many of the annotated references (e.g., Hambleton, 1988).
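For readers browsing the catalog, the relationship among the three models can be shown compactly. The sketch below states the standard 3PL item response function; it is a generic illustration with invented parameter values, not a formula reproduced from the module.

```python
import math

def p_3pl(theta, a=1.0, b=0.0, c=0.0, D=1.7):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b))).
    Setting c = 0 gives the 2PL; additionally fixing a gives the 1PL."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# At theta = b the probability is midway between the guessing floor c
# and 1, i.e. (1 + c) / 2; with c = 0.2 that is 0.6.
print(p_3pl(0.0, a=1.2, b=0.0, c=0.2))
```

The three parameters play the roles the module's figures illustrate: b shifts the curve along the ability scale, a controls its steepness, and c sets the lower asymptote.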
Keywords: item response theory, parameter estimation, model assumptions
Module 06: Equating Methods in Classical Test Theory
This ITEM module promotes a conceptual understanding of test form equating using traditional methods.
This instructional module is intended to promote a conceptual understanding of test form equating using traditional methods. The purpose of equating and the context in which equating occurs are described. The process of equating is distinguished from the related process of scaling to achieve comparability. Three equating designs are considered, and three equating methods, mean, linear, and equipercentile, are described and illustrated. Special attention is given to equating with nonequivalent groups, and to sources of equating error.
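The mean and linear methods named above have simple closed forms, sketched below with invented summary statistics (equipercentile equating, which matches entire score distributions, does not reduce to a one-line formula). The numbers and function names are illustrative, not from the module.

```python
def mean_equate(x, mx, my):
    """Mean equating: shift Form X scores so the form means coincide."""
    return x + (my - mx)

def linear_equate(x, mx, sx, my, sy):
    """Linear equating: match both mean and standard deviation,
    y = my + (sy / sx) * (x - mx)."""
    return my + (sy / sx) * (x - mx)

# Hypothetical summary statistics from two forms given to equivalent groups.
mx, sx = 50.0, 10.0   # Form X mean and SD
my, sy = 55.0, 8.0    # Form Y mean and SD

print(mean_equate(50.0, mx, my))             # a mean X score maps to the Y mean
print(linear_equate(60.0, mx, sx, my, sy))   # one SD above the X mean
```

Mean equating assumes the forms differ only in difficulty (a constant shift), while linear equating also allows the score spreads to differ; the choice between them is part of what the module discusses.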
Keywords: test form equating, equating methods, equating design