Validity and Reliability


2021 | P. Mariel et al.
This chapter discusses the validity and reliability of discrete choice experiments (DCEs) in environmental valuation. It begins with three essential concepts for assessing the validity of welfare estimates: content, construct, and criterion validity. Content validity concerns whether the survey design and its implementation allow respondents to express their true preferences. Construct validity examines how well the value estimates reflect the underlying theoretical constructs, often through expectation-based validity tests. Criterion validity compares DCE estimates against other measures taken as valid, such as market prices or simulated markets.

The chapter then addresses the reliability of DCEs, typically assessed through test-retest studies, in which the same survey is administered at different points in time to measure the consistency of responses. Several statistical tests are used to evaluate reliability, including congruence tests, tests for equality of parameter vectors, and comparisons of mean willingness to pay (WTP).

Model comparison and selection are also discussed, emphasizing that models should be chosen on the basis of statistical fit, the research question, and the specific goals of the study. Statistical criteria such as log-likelihood values, pseudo-$R^2$, and information criteria (AIC, BIC) are used to compare models, and cross-validation assesses model performance on data not used for estimation.

Finally, the chapter turns to prediction in DCEs, noting that while models can predict the probability of choosing an alternative, they cannot predict an individual's actual choice. It highlights the challenges and limitations of prediction, including overfitting and the need for careful model selection to balance in-sample fit against predictive performance.
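The model-selection criteria mentioned above have simple closed forms: AIC $= 2k - 2\ln L$, BIC $= k\ln n - 2\ln L$, and McFadden's pseudo-$R^2 = 1 - \ln L / \ln L_0$, where $k$ is the number of parameters, $n$ the number of observations, and $\ln L_0$ the null (intercept-only) log-likelihood. A minimal sketch of how two candidate choice models might be compared; the log-likelihood values, parameter counts, and model names below are hypothetical illustrations, not results from the chapter:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: penalizes fit by the number of parameters k."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: penalty grows with sample size n."""
    return k * math.log(n) - 2 * log_lik

def mcfadden_r2(log_lik, log_lik_null):
    """McFadden's pseudo-R^2: relative improvement over the null model."""
    return 1 - log_lik / log_lik_null

# Hypothetical values for two competing models fitted to the same choice data
n = 500                     # number of choice observations
ll_null = -549.3            # log-likelihood of the null (equal-shares) model
models = {"MNL": (-512.4, 5), "Mixed logit": (-498.7, 9)}  # (log-lik, parameters)

for name, (ll, k) in models.items():
    print(f"{name}: AIC={aic(ll, k):.1f}, BIC={bic(ll, k, n):.1f}, "
          f"pseudo-R2={mcfadden_r2(ll, ll_null):.3f}")
```

Lower AIC/BIC and higher pseudo-$R^2$ favor a model, but as the chapter stresses, these criteria complement rather than replace judgment about the research question and study goals.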