Citation: Mattingly, D. J., Prislin, R., McKenzie, T. L., Rodriguez, J. L., & Kayzar, B. (2002). Evaluating evaluations: The case of parent involvement programs. Review of Educational Research, 72(4), 549-576.
This article analyzes 41 evaluations of interventions designed to increase the educational involvement of parents of children in grades K-12, in order to assess the existing evidence on the effects of parent involvement programs. An initial appraisal of program outcomes suggested moderate success, but closer examination revealed that most of the evaluation designs and data collection techniques were not rigorous enough to provide solid evidence of program effectiveness. Only four evaluations used the most rigorous research design (matched controls with pretest and posttest), and two of those found that children whose parents received the intervention did not perform significantly better than students in the control group. Data collection further weakened some evaluations because they relied heavily on interviews, questionnaires, and other subjective measures. The authors therefore conclude that these findings do not show that parent involvement programs are ineffective or unimportant; they indicate only that the existing evidence (from these 41 evaluations) is too flawed to establish a causal relationship between interventions designed to increase parent involvement and improvements in student learning.

The authors reviewed 213 studies, of which 41 were selected because they reported evaluation findings on the outcomes of parent involvement interventions. Each article was coded independently by two researchers for more than 100 variables, divided into four categories: (a) program description (size, duration, program development, types of interventions), (b) context (information about the community, school, and program participants), (c) evaluation (types of data and analyses), and (d) outcomes (measures for students, parents, and teachers).
Although this study may be used by some to diminish the importance and value of parent involvement, readers should note that these findings say more about the state of program evaluation techniques than about parent involvement programs per se. The authors acknowledge several times that the information provided in the articles was often sparse and uneven, and given their assessment that very few evaluations met basic standards of rigor, neither the negative nor the positive results from these evaluations can be trusted. In addition, the authors provide very little information about how the programs defined and operationalized parent involvement, or about the quality of program implementation. Finally, the authors do not define the criteria they used to determine whether a study was truly a program evaluation or merely a study of an intervention, and they seem to use the two terms interchangeably.
The Connection Collection: ©SEDL 2017