College of Education

Jinming Zhang

Key Professional Appointments

  • Professor, Educational Psychology, University of Illinois, Urbana-Champaign
  • Professor, Statistics, University of Illinois, Urbana-Champaign

Research & Service

Dr. Jinming Zhang's research focuses on theoretical and applied statistical issues in educational and psychological measurement. His research interests include multidimensional item response theory, dimensionality assessment techniques, large-scale assessments, generalizability theory, and test security. From 1996 to 2009 he worked at the Educational Testing Service (ETS) as a senior research scientist in the Division of Research and Development, where he contributed to research and operational projects related to large-scale educational assessments, in particular the National Assessment of Educational Progress (NAEP). He successfully directed and co-directed the statistical and psychometric analyses of many large-scale assessment projects, including the 1996 and 2005 national science, the 2000 state science, the 1998, 2002, and 2003 national and state reading, the 2006 national civics, and the 2008 national arts assessments.
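
A rough sketch of one idea behind this line of work, the conditional-covariance approach to dimensionality assessment that appears in the DETECT-related papers listed below, is given here for illustration. It is written in Python with NumPy as an assumed toolset, and it is a simplified illustration of the basic quantity, not the operational DETECT procedure.

    # Illustrative sketch: estimate the covariance of two dichotomous items
    # conditional on the rest score (total score excluding the two items),
    # the basic quantity behind conditional-covariance-based dimensionality
    # assessment. Simplified; not the operational DETECT procedure.
    import numpy as np

    def conditional_covariance(responses, i, j):
        """Average covariance of items i and j given the rest score."""
        responses = np.asarray(responses)
        rest = responses.sum(axis=1) - responses[:, i] - responses[:, j]
        covs, weights = [], []
        for s in np.unique(rest):
            group = responses[rest == s]
            if len(group) > 1:
                covs.append(np.cov(group[:, i], group[:, j])[0, 1])
                weights.append(len(group))
        return np.average(covs, weights=weights)

    # Simulated unidimensional 0/1 data: conditional covariances should be
    # close to zero for most item pairs.
    rng = np.random.default_rng(0)
    theta = rng.normal(size=2000)
    difficulty = rng.normal(size=20)
    prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulty[None, :])))
    data = (rng.uniform(size=prob.shape) < prob).astype(int)
    print(conditional_covariance(data, 0, 1))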

Publications

Li, X., Xu, H., Zhang, J., & Chang, H. H. (2021). Optimal Hierarchical Learning Path Design With Reinforcement Learning. Applied Psychological Measurement, 45(1), 54-70.

Li, X., Zhang, J., & Chang, H. H. (2020). Look-ahead content balancing method in variable-length computerized classification testing. British Journal of Mathematical and Statistical Psychology, 73(1), 88-108.

Choe, E. M., Zhang, J., & Chang, H. H. (2018). Sequential Detection of Compromised Items Using Response Times in Computerized Adaptive Testing. Psychometrika, 83(3), 650-673.

Lin, C. K., & Zhang, J. (2018). Detecting Nonadditivity in Single-Facet Generalizability Theory Applications: Tukey's Test. Journal of Educational Measurement, 55(1), 78-89.

Lee, Y. H., & Zhang, J. (2017). Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test. International Journal of Testing, 17(1), 23-54.

Zhang, J., & Lin, C. K. (2016). Generalizability Theory With One-Facet Nonadditive Models. Applied Psychological Measurement, 40(6), 367-386.

Zhang, J., & Li, J. (2016). Monitoring Items in Real Time to Enhance CAT Security. Journal of Educational Measurement, 53(2), 131-151.

Tay-Lim, B. S. H., & Zhang, J. (2015). An Investigation of Different Treatment Strategies for Item Category Collapsing in Calibration: An Empirical Study. Applied Measurement in Education, 28(2), 143-155.

Lin, C. K., & Zhang, J. (2014). Investigating correspondence between language proficiency standards and academic content standards: A generalizability theory study. Language Testing, 31(4), 413-431.

Zhang, J. (2014). A Sequential Procedure for Detecting Compromised Items in the Item Pool of a CAT System. Applied Psychological Measurement, 38(2), 87-104.

Lin, C. K. C., & Zhang, J. (2013). Enhancing standard-based validity for ELL population: A perspective from correspondence between standards. TESOL Quarterly, 47(2), 399-410.

Zhang, J. (2013). A Procedure for Dimensionality Analyses of Response Data from Various Test Designs. Psychometrika, 78(1), 37-58.

Zhang, J. (2012). Calibration of Response Data Using MIRT Models With Simple and Mixed Structures. Applied Psychological Measurement, 36(5), 375-398.

Zhang, J., Chang, H. H., & Yi, Q. (2012). Comparing single-pool and multiple-pool designs regarding test security in computerized testing. Behavior Research Methods, 44(3), 742-752.

Zhang, J. (2012). The Impact of Variability of Item Parameter Estimators on Test Information Function. Journal of Educational and Behavioral Statistics, 37(6), 737-757.

Zhang, J., Xie, M., Song, X., & Lu, T. (2011). Investigating the Impact of Uncertainty About Item Parameters on Ability Estimation. Psychometrika, 76(1), 97-118.

Braun, H., Zhang, J., & Vezzu, S. (2010). An investigation of bias in reports of the National Assessment of Educational Progress. Educational Evaluation and Policy Analysis, 32(1), 24-43.

Lee, Y., & Zhang, J. (2010). Differential item functioning: Its consequences. ETS Research Report Series, 2010(1), i-25.

Braun, H., Zhang, J., & Vezzu, S. (2008). Evaluating the effectiveness of a full-population estimation method. ETS Research Report Series, 2008(1), i-58.

Lee, Y., & Zhang, J. (2008). Comparing different approaches of bias correction for ability estimation in IRT models. ETS Research Report Series, 2008(1), i-15.

Yi, Q., Zhang, J., & Chang, H. H. (2008). Severity of organized item theft in computerized adaptive testing: A simulation study. Applied Psychological Measurement, 32(7), 543-558.

Zhang, J. (2007). Conditional covariance theory and DETECT for polytomous items. Psychometrika, 72(1), 69-91.

Zhang, J., & Lu, T. (2007). Refinement of a bias-correction procedure for the weighted likelihood estimator of ability. ETS Research Report Series, 2007(2), i-25.

Yi, Q., Zhang, J., & Chang, H. H. (2006). Assessing CAT test security severity. Applied Psychological Measurement, 30(1), 62-63.

Yi, Q., Zhang, J., & Chang, H. (2006). Severity of organized item theft in computerized adaptive testing: An empirical study. ETS Research Report Series, 2006(2), i-25.

Zhang, J. (2005). Bias correction for the maximum likelihood estimate of ability. ETS Research Report Series, 2005(2), i-39.

Zhang, J. (2005). Estimating multidimensional item response models with mixed structure. ETS Research Report Series, 2005(1), i-38.

Zhang, J., & Chang, H. (2005). The effectiveness of enhancing test security by using multiple item pools. ETS Research Report Series, 2005(2), i-16.

Zhang, J. (2004). Comparison of unidimensional and multidimensional approaches to IRT parameter estimation. ETS Research Report Series, 2004(2), i-40.

Chang, H. H., & Zhang, J. (2002). Hypergeometric family and item overlap rates in computerized adaptive testing. Psychometrika, 67(3), 387-398.

Zhang, J., & Stout, W. (1999). Conditional covariance structure of generalized compensatory multidimensional items. Psychometrika, 64(2), 129-152.

Zhang, J., & Stout, W. (1999). The theoretical DETECT index of dimensionality and its application to approximate simple structure. Psychometrika, 64(2), 213-249.

Zhang, J., & Stout, W. (1997). On Holland's Dutch identity conjecture. Psychometrika, 62(3), 375-392.

Stout, W., Habing, B., Douglas, J., Kim, H. R., Roussos, L., & Zhang, J. (1996). Conditional covariance-based nonparametric multidimensionality assessment. Applied Psychological Measurement, 20(4), 331-354.

Courses

Educational Statistics (EPSY 480) Designed to provide terminal value for the professional training of students not intending to pursue advanced graduate work, and introductory value for students continuing graduate study in education. Topics include descriptive statistics, an introduction to correlation and regression, the normal curve, statistical inference, and the presentation and interpretation of statistical data in the educational literature. Synchronous attendance not required. Compass LMS.
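
For a sense of the computations this course covers, the following short Python sketch (illustrative only; the simulated pretest and posttest scores are made up for the example) computes descriptive statistics, a Pearson correlation, and a least-squares regression line.

    # Illustrative sketch of EPSY 480 topics: descriptive statistics,
    # Pearson correlation, and a simple least-squares regression line.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(50, 10, size=100)           # simulated pretest scores
    y = 0.8 * x + rng.normal(0, 5, size=100)   # simulated posttest scores

    print("mean:", x.mean(), "sd:", x.std(ddof=1))
    r = np.corrcoef(x, y)[0, 1]                # Pearson correlation
    slope, intercept = np.polyfit(x, y, 1)     # least-squares fit
    print("r =", r, "slope =", slope, "intercept =", intercept)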

Theories of Measurement I (EPSY 585) Provides a conceptual framework of classical test theory (e.g., true scores, error of measurement, composite measures) and alternatives to the classical model (e.g., generalizability theory, latent trait theory). Students will learn the techniques and theory of classical test theory and apply the methods to educational and psychological assessments. Topics covered include reliability, validity, generalizability, dichotomous Item Response Theory (IRT), test construction and design, item bias and fairness, Differential Item Functioning (DIF), scaling, linking, and equating. Prerequisites are enforced and those not meeting them will be asked to drop or will be dropped from the class. Synchronous attendance required. Compass LMS.
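
As a rough illustration of two topics listed above, a dichotomous item response model and classical reliability, the following Python sketch shows a two-parameter logistic (2PL) item response function and a coefficient alpha calculation. It is not course material; the 2PL model and coefficient alpha are chosen here only as common examples of the IRT and reliability topics named in the description.

    # Illustrative sketch only: a 2PL item response function and coefficient
    # alpha as a classical reliability estimate.
    import numpy as np

    def p_2pl(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def cronbach_alpha(scores):
        """Coefficient alpha for an (n_examinees, n_items) score matrix."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1.0) * (1.0 - item_vars / total_var)

    # Example: an item with discrimination a = 1.2 and difficulty b = 0.5,
    # and alpha for a small 0/1 score matrix.
    print(p_2pl(theta=0.0, a=1.2, b=0.5))
    print(cronbach_alpha([[1, 1, 0], [1, 0, 1], [0, 0, 0], [1, 1, 1]]))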

Dimensionality and MIRT (EPSY 590) Seminar in educational psychology; topics relate to the areas of specialization represented by the various divisions within the department.

LSA, Equating and Linking (EPSY 590) Seminar in educational psychology; topics relate to the areas of specialization represented by the various divisions within the department. This advanced seminar focuses on designs and methods for equating and linking educational tests, especially large-scale assessments. Prerequisite: EPSY 585 or equivalent. Synchronous attendance required. Compass LMS.
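
As one small illustration of the linking and equating methods this seminar covers, the sketch below implements classical linear (mean-sigma) equating of scores from one test form onto the scale of another. It is a minimal example under simulated data, not a description of any particular operational procedure.

    # Illustrative sketch: classical linear (mean-sigma) equating of form-X
    # scores onto the form-Y scale using sample means and standard deviations.
    import numpy as np

    def linear_equate(x_scores, y_scores):
        """Return a function mapping form-X scores onto the form-Y scale."""
        mu_x, sd_x = np.mean(x_scores), np.std(x_scores, ddof=1)
        mu_y, sd_y = np.mean(y_scores), np.std(y_scores, ddof=1)
        return lambda x: sd_y / sd_x * (np.asarray(x) - mu_x) + mu_y

    # Example with simulated score distributions for two forms.
    rng = np.random.default_rng(1)
    form_x = rng.normal(30, 6, size=1000)
    form_y = rng.normal(32, 5, size=1000)
    equate = linear_equate(form_x, form_y)
    print(equate(30))   # roughly 32 under these simulated forms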

Statistical Learning with SAS (EPSY 590) Seminar in educational psychology; topics relate to the areas of specialization represented by the various divisions within the department.