*Excerpted from the SEP by the Gateway to College program, a subgrantee of the Edna McConnell Clark Foundation.*

This section provides information concerning sample selection and representativeness.

The target population will be drawn from nine Gateway to College program sites across the country and will be selected based on the program’s general eligibility requirements. Students are eligible if they are/have:

• Between the ages of 16 and 20 (and able to complete a high school diploma by age 21);

• Between 5 and 17 credits away from high school completion;

• Current or former high school students;

• Reading at the 8th grade level; and

• A low GPA or history of absenteeism.

__Minimum Detectable Effects (MDEs)__

The Minimum Detectable Effect (MDE) is the smallest true impact that an experiment has a “good” chance of detecting (Bloom, 1995).^{1} The smaller the MDE, the more sensitive the study is to impacts of small magnitude.

This section illustrates the power and MDE calculations and describes how the 2:1 treatment-to-control design affects both statistics.

The MDE for binary outcomes is calculated using the following formula:^{2}

MDE = 2.80 × √[𝜋(1 − 𝜋)(1 − 𝑅²) / (𝑇(1 − 𝑇)𝑛)]   (1)

Where:

2.80 = the appropriate multiplier for 80 percent power and a five percent significance level

𝜋 = the expected control group success rate

𝑅² = the explanatory power of the impact regression

𝑇 = the proportion of the study sample that is randomly assigned to the treatment group

𝑛 = the study sample size

As equation 1 shows, the main factors that influence the MDE are:

(1) The control group’s success rate (𝜋),

(2) The study sample size (𝑛), and

(3) The proportion of the sample randomly assigned to the program and control groups (𝑇).
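As a reading aid, equation 1 can be sketched in Python. This is a minimal illustration assuming the standard Bloom (1995) formulation; the function name is ours, not the SEP’s.

```python
# A minimal sketch of equation 1 (MDE for a binary outcome), assuming the
# standard Bloom (1995) formulation. Names here are illustrative only.
from math import sqrt
from statistics import NormalDist

def mde_binary(pi, n, T, r2=0.0, alpha=0.05, power=0.80):
    """Minimum detectable effect for a binary outcome (equation 1).

    pi : expected control group success rate
    n  : total study sample size
    T  : proportion of the sample randomly assigned to treatment
    r2 : explanatory power of the impact regression
    """
    # The 2.80 multiplier is z_{1 - alpha/2} + z_{power} ~= 1.96 + 0.84
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return multiplier * sqrt(pi * (1 - pi) * (1 - r2) / (T * (1 - T) * n))
```

For example, `mde_binary(0.20, 1800, 2/3)` returns roughly 0.056, i.e., an MDE of about 5.6 percentage points.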

This section describes how statistical power is estimated and how the estimates are consistent with the study design.

The MDE calculations assume 80 percent power and a five percent significance level, as is customary. Conservatively, it is also assumed that 𝑅² is 0, meaning that baseline covariates do not help to explain any of the variation in outcomes.^{3}

__Sample Size Under the 2:1 Treatment to Control Random Assignment Ratio__

Figure 1 displays the MDE under a 2:1 random assignment ratio. As the total sample size decreases and the expected control group success rate increases, the MDE increases. In other words, a smaller sample leaves the study less sensitive to small impacts. Similarly, a higher control group success rate raises the benchmark that an intervention must surpass, again reducing sensitivity to small impacts.
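Because Figure 1 itself is not reproduced in this excerpt, the shape of its curves can be approximated with equation 1 under the 2:1 ratio (𝑇 = 2/3). The grid of sample sizes and control group success rates below is our own illustrative choice, not values read off the figure.

```python
# Illustrative sketch of the Figure 1 pattern: MDE under a 2:1 ratio (T = 2/3)
# across a grid of sample sizes and control group success rates. The grid
# values are our own choices, not taken from the figure.
from math import sqrt

MULTIPLIER = 2.80  # 80 percent power, five percent significance level

def mde(pi, n, T=2/3, r2=0.0):
    return MULTIPLIER * sqrt(pi * (1 - pi) * (1 - r2) / (T * (1 - T) * n))

for pi in (0.10, 0.20, 0.30):
    mdes = [round(100 * mde(pi, n), 1) for n in (600, 1200, 1800, 2400)]
    print(f"control rate {pi:.0%}: MDE (pct. pts.) at n = 600/1200/1800/2400 -> {mdes}")
```

The printout shows both patterns described above: the MDE shrinks as the sample grows and rises with the control group success rate (for rates below 50 percent).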

Outcome(s) and assumptions used in the statistical power calculations are described here.

Our power analysis assumes a control group success rate of 20 percent and a target sample size of approximately 1,800 students, with 1,200 in the treatment group and 600 in the control group under the 2:1 ratio (see Table 4.2 for the sample breakdown by group). This sample is sufficient to yield a minimum detectable effect of around 5.5 percentage points.^{4} The dotted line in Figure 1 illustrates this outcome: the third curved line from the bottom shows the MDEs associated with a control group success rate of 20 percent, and at a sample size of 1,800, the MDE is around 5.5 percentage points. Assuming an 80 percent response rate on the student survey, and using the same assumptions outlined above, the MDE for the survey is just over six percentage points.
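The figures quoted above can be checked by plugging the stated assumptions into equation 1; this is a sketch using the values from the text (20 percent control group success rate, 2:1 ratio, 𝑅² = 0).

```python
# Checking the quoted MDEs against equation 1, using the assumptions stated
# in the text above. All inputs are taken from this section of the document.
from math import sqrt

MULTIPLIER = 2.80           # 80 percent power, five percent significance level
PI, T, R2 = 0.20, 2/3, 0.0  # control success rate, treatment share, covariate R-squared

def mde(n):
    return MULTIPLIER * sqrt(PI * (1 - PI) * (1 - R2) / (T * (1 - T) * n))

full_sample = mde(1800)           # ~0.056 -> around 5.5 percentage points
survey_sample = mde(0.80 * 1800)  # 80 percent survey response -> ~0.063
print(f"full sample MDE: {full_sample:.3f}; survey MDE: {survey_sample:.3f}")
```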

With a total of nine sites, we expect that each site will contribute at least two cohorts of students. While each program site’s capacity may vary and some sites may be able to contribute more students to the study sample than others, we expect that each program site will contribute approximately 100 students each year across two cohorts. A breakdown of the target study sample by cycle and site is provided in Table 4.3.

When there are plans to conduct analyses of subgroups, additional statistical power analyses are presented to estimate those MDEs.