Wednesday, August 30, 2023

Criterion-related validity

by Nang Aye Aye Htun

What is criterion-related validity? 

  • Criterion-related validity evaluates how accurately a test measures the outcome it was designed to measure. 
  • To establish criterion validity, you compare your test results to criterion variables: measures that are already accepted as valid indicators of the same outcome.

How to measure criterion validity

Criterion validity is assessed in two ways:

  • By statistically testing a new measurement technique against an independent criterion or standard measured at the same time, to establish concurrent validity.
  • By statistically testing test scores against a measure of future performance, to establish predictive validity.

Example: Criterion validity

A researcher wants to know whether a college entrance exam can predict future academic performance. First-semester GPA can serve as the criterion variable, as it is an accepted measure of academic performance. The researcher can then compare the entrance-exam scores of 100 students to their GPAs after one semester in college. If the exam scores correlate strongly with the GPAs, the entrance exam has criterion validity.

Types of criterion validity

There are two types of criterion validity.

  • Concurrent validity is used when the scores of a test and the criterion variables are obtained at the same time.
  • Predictive validity is used when the criterion variables are measured after the test scores have been obtained.


Methods for Evaluating Criterion Validity

Correlation Coefficients

Correlation coefficients, such as Pearson’s correlation coefficient (r), are used to measure the strength and direction of the relationship between the scores on a test or measurement instrument and the criterion or outcome. A high correlation indicates a strong relationship, suggesting good criterion validity.
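As a minimal sketch of this idea, Pearson's r can be computed directly from its definition. The entrance-exam scenario from the example above is used here, but the scores and GPAs are hypothetical, illustrative values, not real study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical entrance-exam scores and first-semester GPAs (illustrative only)
exam = [520, 580, 610, 650, 700, 720]
gpa = [2.4, 2.9, 3.0, 3.2, 3.6, 3.8]

print(pearson_r(exam, gpa))  # ≈ 0.99: a strong positive relationship
```

An r near ±1 indicates a strong relationship with the criterion; an r near 0 indicates the test tells us little about the outcome.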

Receiver Operating Characteristic (ROC) Analysis

ROC analysis is commonly used when the test or measurement instrument produces dichotomous outcomes (e.g., pass/fail). It evaluates the ability of the test to accurately discriminate between individuals who have the criterion or outcome and those who do not. The area under the ROC curve (AUC) is used as a measure of the test’s predictive accuracy, with higher values indicating better criterion validity.
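The AUC has a useful probabilistic reading: it is the probability that a randomly chosen individual who has the outcome scores higher on the test than a randomly chosen individual who does not (the Mann-Whitney U formulation). A minimal sketch, using hypothetical score lists:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC = P(a random positive scores above a random negative);
    ties count as 0.5 (the Mann-Whitney U formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical test scores for people with and without the outcome
with_outcome = [0.9, 0.8, 0.7, 0.6]
without_outcome = [0.65, 0.5, 0.4]

print(roc_auc(with_outcome, without_outcome))  # ≈ 0.92; 0.5 would be chance
```

An AUC of 1.0 means the test separates the two groups perfectly; 0.5 means it discriminates no better than chance.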

Sensitivity and Specificity

Sensitivity refers to the proportion of individuals with the criterion or outcome who are correctly identified by the test, while specificity refers to the proportion of individuals without the criterion or outcome who are correctly identified as such. Sensitivity and specificity are typically calculated based on predetermined cutoff scores on the test and are used to evaluate the accuracy of the test in correctly classifying individuals.
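These two proportions fall out of a simple confusion-matrix count once a cutoff is chosen. The sketch below uses hypothetical scores, outcome labels, and cutoff:

```python
def sensitivity_specificity(scores, has_outcome, cutoff):
    """Classify score >= cutoff as test-positive, then compare to the truth."""
    tp = fn = tn = fp = 0
    for score, truth in zip(scores, has_outcome):
        predicted_positive = score >= cutoff
        if truth and predicted_positive:
            tp += 1          # true positive: has outcome, test says positive
        elif truth:
            fn += 1          # false negative: has outcome, test misses it
        elif predicted_positive:
            fp += 1          # false positive: no outcome, test says positive
        else:
            tn += 1          # true negative: no outcome, test says negative
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores and true outcome labels (1 = has the outcome)
scores = [10, 8, 7, 5, 4, 2]
truth = [1, 1, 0, 1, 0, 0]
sens, spec = sensitivity_specificity(scores, truth, cutoff=6)
print(sens, spec)  # 2/3 sensitivity and 2/3 specificity at this cutoff
```

Note that sensitivity and specificity trade off against each other: lowering the cutoff catches more true cases (higher sensitivity) at the cost of more false positives (lower specificity).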

Regression Analysis

Regression analysis can be used to predict the criterion or outcome variable based on the scores obtained on the test or measurement instrument. By examining the strength and significance of the regression coefficients, researchers can determine the extent to which the test predicts or correlates with the criterion or outcome.
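A minimal sketch of simple (one-predictor) least-squares regression, again using the hypothetical entrance-exam and GPA values rather than real data:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
             / sum((a - mean_x) ** 2 for a in x))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical data: predicting first-semester GPA from entrance-exam scores
exam = [520, 580, 610, 650, 700, 720]
gpa = [2.4, 2.9, 3.0, 3.2, 3.6, 3.8]
intercept, slope = linear_fit(exam, gpa)

# A large, statistically significant slope means the test predicts the criterion
predicted_gpa = intercept + slope * 600  # predicted GPA for an exam score of 600
```

In practice a statistics package would also report standard errors and p-values for the coefficients; the size and significance of the slope is what speaks to criterion validity.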

Known Groups Method

The known groups method involves comparing the scores obtained on the test or measurement instrument between groups that are known to differ in terms of the criterion or outcome. If the test can effectively distinguish between these groups, it suggests good criterion validity.
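One common way to quantify how well the test separates known groups is a standardized mean difference such as Cohen's d. A minimal sketch, with hypothetical scores for two groups assumed to differ on the outcome:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two known groups (pooled SD)."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = sum(group_a) / na, sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical test scores for two groups known to differ on the outcome
experts = [8, 9, 10]
novices = [4, 5, 6]
print(cohens_d(experts, novices))  # 4.0: the test clearly separates the groups
```

A large d (the groups' score distributions barely overlap) supports criterion validity; a d near 0 means the test cannot tell the groups apart. A t-test would typically accompany this to check that the difference is statistically significant.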
