Excerpted from the SEP by the BELL program, a subgrantee of the Edna McConnell Clark Foundation.
This section describes how participants will be tracked even if they are no longer available at the original site of data collection.
The study design recognizes the need to avoid loss of the sample due to missing data. Much of the impact study will rely on existing administrative records from BELL (the application, which provides baseline characteristics) or participating school districts (which provide test score information at baseline and follow-up from existing state or local standardized tests, plus other student records on student characteristics, attendance, promotion, etc.). It is possible that some students served in a summer will not be tested in the following spring because they have left a participating school district. We will access records data on students who switch schools within a district.
The crucial point at which sample attrition through missing data could occur in the random assignment study is the special testing of achievement the study will undertake, typically after the summer program, when state or local standardized tests are not available. Our preferred approach will be to conduct this special testing at the start of school after the BELL summer program serves the program group. We recognize that some participating school districts may not want to introduce additional testing at the start of the school year, and we may have to field the test in late summer, prior to the start of the school year. As long as the testing is done at the same time for the program and control groups, shifting from the start of school to late summer will not affect the internal validity of the impact estimate. These sentences indicate both the preferred manner of collecting data and an additional alternative that can be undertaken as needed. We will attempt to test all sample members attending any school in the district at that time. Testing students at the start of school will be logistically easier, but we are prepared to conduct summer testing, as was required in Springfield, MA, our first site to start.
MDRC is working with Survey Research Management (SRM), a firm with which it has partnered many times in similar studies, to produce a high sample response rate on this testing. In prior studies, we have achieved response rates of 90 percent or above in such testing. This project poses special challenges, since some school districts in the project will want end-of-summer testing to occur prior to the start of school to avoid any disruption of instruction. Absent special procedures and effort to produce high response rates, this can create problems, as the prior evaluation of BELL experienced. Our plan to produce high response rates includes the following strategies:
This section includes information about strategies for recruiting and retaining participants, including remuneration.
• Hiring local staff to contact sample members early in the summer to remind them of the later testing, explain why their participation is vital to learning about the effectiveness of the BELL program, provide initial information about when and how the testing will be conducted, and identify and begin to address issues that create barriers to participation in the testing.
• Helping families arrange attendance at the testing, including customized information about transportation options to the testing locations and alternative testing times that can be convenient for parents and students with different schedules.
These points also address efforts to maintain participation among study participants.
• These early contacts will also include information about the compensation families will receive for the effort involved in getting their child to the testing. Since the testing will involve one-hour tests in both reading and math, it will need to occur on two days. Thus, we anticipate that compensation for the effort involved in participating in the testing will take the form of gift cards to a local store worth up to one hundred dollars per family.
• Follow-up contact with families who miss the initially scheduled dates, urging them to bring their child to make-up sessions.
We expect that program group members will be much easier to locate and test as the summer ends because most will still be attending the BELL program. We intend to capitalize on this by testing members of the program group who are still attending the BELL program prior to the end of their program participation and at a site where their program operates. We will work with BELL staff to equalize as much as possible the testing conditions for these program group members, other sample members who are in the program group but no longer participating, and students in the control group. The entire sample (program and control groups) will be tested by SRM staff. We believe it will be possible to make testing conditions reasonably comparable across the members of the sample. In addition, the substantial savings in evaluation resources made possible by piggy-backing the evaluation testing on the end of the program will allow us to devote the resources needed to assure a high testing rate and similar timing of the testing for the control group and the non-attending program group members, both of which will be vital to the success of the evaluation.
Based on past experience in similar studies of programs operating outside the regular school calendar (especially our prior study of afterschool programs), we believe that these strategies will avoid differential attrition of the sample or differential timing of data collection.