Date Available

4-23-2013

Year of Publication

2013

Document Type

Doctoral Dissertation

Degree Name

Doctor of Philosophy (PhD)

College

Education

Department/School/Program

Educational, School, and Counseling Psychology

Advisor

Dr. Fred Danner

Abstract

Reliability Generalization (RG) is a meta-analytic method that examines the sources of measurement error variance in scores from multiple studies that use a certain instrument or group of instruments measuring the same construct (Vacha-Haase, Henson, & Caruso, 2002). Researchers have been conducting RG studies for over 10 years, since the method was first discussed by Vacha-Haase (1998). Henson and Thompson (2002) noted that, because RG is not a monolithic technique, researchers can conduct RG studies in a variety of ways and include diverse variables in their analyses. Differing recommendations exist regarding how researchers should retrieve, code, and analyze information when conducting RG studies, and these differences can affect the conclusions drawn from meta-analytic studies such as RG (Schmidt, Oh, & Hayes, 2009). The present study is the first comprehensive review of both current RG practices and RG recommendations. Based upon the prior findings of other meta-analytic review papers (e.g., Dieckmann, Malle, & Bodner, 2009), the overarching hypothesis was that there would be differences between current RG practices and best practice recommendations made for RG studies.
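To make the idea concrete, the core of an RG analysis can be sketched in a few lines: collect the reliability coefficients (e.g., Cronbach's alpha) reported across studies, then model them as a function of coded sample characteristics. The data, the ln(1 − alpha) transformation, and the two moderators below are illustrative assumptions of this sketch, not the dissertation's actual data or model.

```python
import numpy as np

# Illustrative (fabricated) data: Cronbach's alpha estimates reported by
# hypothetical studies of one instrument, plus two sample characteristics
# (moderators) coded from each study.
alphas      = np.array([0.78, 0.85, 0.81, 0.90, 0.74, 0.88])
sample_size = np.array([120,  340,   95,  510,   60,  275])
mean_age    = np.array([14.2, 19.5, 13.8, 21.0, 12.9, 20.3])

# One common choice is to transform alpha (here, ln(1 - alpha)) before
# modeling it -- an assumption of this sketch, not a universal rule.
y = np.log(1 - alphas)

# Regress the transformed reliabilities on the moderators with ordinary
# least squares: this asks whether sample features predict variation in
# the reliability of scores across studies.
X = np.column_stack([np.ones_like(y), np.log(sample_size), mean_age])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# A sample-size-weighted mean alpha summarizes the typical score
# reliability across the collected studies.
mean_alpha = np.average(alphas, weights=sample_size)
print(f"weighted mean alpha: {mean_alpha:.3f}")
print(f"moderator coefficients: {coef}")
```

Applied RG studies vary in exactly these choices (which transformation to use, which moderators to code, how to weight studies), which is why differing recommendations can lead to different conclusions.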

The data consisted of 64 applied RG studies, along with recommendation papers, book chapters, and unpublished and conference papers. The characteristics examined included how RG researchers: (a) collected studies, (b) organized studies, (c) coded studies, (d) analyzed their data, and (e) reported their results.

The results showed that although applied RG researchers followed some of the recommendations (e.g., they examined sample characteristics that influenced reliability estimates), there were other recommendations that RG researchers did not follow (e.g., the majority did not conduct an a priori power analysis). The results can draw RG researchers’ attention to areas where there is a disconnect between practice and recommendations, as well as provide a benchmark for assessing future improvement in RG implementation.
