Excerpted from the SEP by AIDS United.
This section outlines the pre-experimental study design.
The evaluation for Access to Care is designed to have three complementary components: monitoring of national evaluation measures; case studies; and cost analysis. The outcome evaluation draws from each of these three components.
Monitoring of national evaluation measures: This portion of the outcome evaluation uses a pre/post test design of program participants without a comparison group. Data are collected longitudinally at baseline, six months (Time 1), twelve months (Time 2), and eighteen months (Time 3) on all study participants. Baseline is defined as the date the client enrolls in the project and is most often operationalized as the date that informed consent is signed. Baseline data are gathered at enrollment or shortly thereafter. The windows for Times 1-3 are four months wide and are skewed toward earlier months: Time 1 is defined as months 3, 4, 5, and 6; Time 2 as months 9, 10, 11, and 12; and Time 3 as months 15, 16, 17, and 18. The windows skew earlier because some programs engage clients for only three or four months, and these programs requested that Time 1 include months 3 and 4 to better ensure participation in follow-up data collection. If multiple measures are taken during a follow-up window, we have asked sites to use the measurement closest to the nominal time point (six, twelve, or eighteen months). Sites report the mean number of days from baseline at which outcome measures were taken, allowing us to monitor whether measurements are representative of six, twelve, and eighteen months.
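To make this windowing rule concrete, the minimal sketch below (not part of the SEP; the helper names are hypothetical, and months are approximated as 30 days) assigns a measurement to a follow-up window and, when a window contains multiple measurements, keeps the one closest to the nominal time point:

```python
# Follow-up windows in months from baseline (inclusive), per the plan:
# Time 1 = months 3-6, Time 2 = months 9-12, Time 3 = months 15-18.
WINDOWS = {1: (3, 6), 2: (9, 12), 3: (15, 18)}
NOMINAL_DAYS = {1: 6 * 30, 2: 12 * 30, 3: 18 * 30}  # approximate nominal points

def assign_window(days_from_baseline: int):
    """Return the follow-up window (1-3) for a measurement, or None."""
    months = days_from_baseline / 30  # approximate a month as 30 days
    for window, (lo, hi) in WINDOWS.items():
        if lo <= months <= hi:
            return window
    return None

def pick_measurement(days_list: list, window: int) -> int:
    """Among measurements in a window, keep the one nearest the nominal point."""
    in_window = [d for d in days_list if assign_window(d) == window]
    return min(in_window, key=lambda d: abs(d - NOMINAL_DAYS[window]))

# Example: three measurements fall in the Time 1 window; keep the one
# closest to six months (180 days).
print(pick_measurement([100, 150, 175], window=1))  # -> 175
```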
Case Study (network analysis): The case study network analysis uses a retrospective pre/post design. An online survey will be used to obtain retrospective and current network data on relationships between organizations. The network analysis will calculate density and average node degree. These data will be collected by asking “Does your organization collaborate with [insert name of organization] in the implementation of A2C?” and “In the six months before A2C, did your organization work with any of the following organizations to link PLWHA into HIV care and treatment?” Network analysis data will be collected approximately one year into program implementation.
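For concreteness, here is a minimal sketch of the two metrics named above, using the networkx library; the organization names and ties are placeholders, not data from the SEP:

```python
import networkx as nx

# Hypothetical collaboration ties among partner organizations, as would be
# reported in the "Does your organization collaborate with X?" survey item.
G = nx.Graph()
G.add_edges_from([
    ("Org A", "Org B"),
    ("Org A", "Org C"),
    ("Org B", "Org C"),
    ("Org C", "Org D"),
])

# Density: observed ties as a share of all possible ties, 2E / (N * (N - 1)).
density = nx.density(G)

# Average node degree: mean number of collaborators per organization.
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

print(f"density={density:.2f}, average degree={avg_degree:.2f}")
# With 4 organizations and 4 ties: density = 4/6 = 0.67, average degree = 2.0
```

Comparing these metrics between the retrospective (pre-A2C) network and the current network would quantify any growth in inter-organizational collaboration.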
Cost Analysis: We plan to conduct a cost and threshold analysis to assess: the cost per client and cost per contact of delivering the program; the economic threshold for the cost per HIV infection averted compared to the current standard of care; and the economic threshold for the cost per QALY saved. The threshold analysis of transmissions averted will allow us to determine whether the intervention is cost-saving, and the threshold analysis of QALYs will allow us to assess whether the intervention is cost-effective.
To achieve these aims we will employ standard methods of cost, threshold, and cost-effectiveness analyses as recommended by the U.S. Panel on Cost-effectiveness in Health and Medicine, and as adapted to HIV/AIDS programs by Holtgrave (Holtgrave, 1998). The cost analysis will employ a U.S. Panel-recommended micro-costing approach that has also been adopted by the U.S. Centers for Disease Control and Prevention. The threshold analysis will take the results of the cost analysis and determine how many HIV infections from clients living with HIV must be averted to HIV-seronegative partners in order to claim that the A2C program is cost-saving. The threshold analysis will also determine how much improvement in the quality of life of A2C clients must be realized in order to claim that the program services are cost-effective (even if not cost-saving) at the widely used standard of $100,000 per QALY saved. At the conclusion of the project, we will combine the cost information and the outcome data to conduct a cost-effectiveness analysis, so as to determine the actual cost per quality-adjusted life-year saved by the A2C project services. Uncertainty in any input parameters will be examined via sensitivity analysis so that the robustness of results to changes in parameter values can be gauged.
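The threshold arithmetic reduces to two simple ratios. The sketch below uses the lifetime-cost and willingness-to-pay figures cited later in this plan ($355,000 per infection and $100,000 per QALY); the total program cost is a placeholder, not an actual A2C figure:

```python
# Assumed parameters, taken from the cost-analysis assumptions in this plan:
LIFETIME_HIV_COST = 355_000   # estimated lifetime cost of one HIV infection, USD
WTP_PER_QALY = 100_000        # willingness-to-pay standard per QALY saved, USD

total_program_cost = 1_200_000  # placeholder: total cost of delivering A2C, USD

# Cost-saving threshold: infections that must be averted so the medical
# costs avoided equal the cost of delivering the program.
infections_averted_threshold = total_program_cost / LIFETIME_HIV_COST

# Cost-effectiveness threshold: QALYs that must be saved so the cost per
# QALY falls at or below the $100,000 standard.
qaly_threshold = total_program_cost / WTP_PER_QALY

print(f"Infections averted needed to be cost-saving:  {infections_averted_threshold:.1f}")
print(f"QALYs saved needed to be cost-effective:      {qaly_threshold:.1f}")
```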
Additional threats to the internal validity of the design are discussed below.
Limitations of Initial Evaluation
This section outlines the barriers to achieving moderate or strong evidence in the initial evaluation period.
This evaluation has many limitations that are worthy of mention. We are using a pre/post design to assess trends in participants’ health. Because we do not have a control or comparison group, we will not be able to test causal hypotheses. Each site has a unique program, recruitment methodology, and data collection methodology, and thus combining national evaluation construct data across sites may not be appropriate. In addition, due to the vast differences between sites, we are not able to make meaningful comparisons across sites. For the case studies, only one individual will be analyzing and coding the data, and therefore we will not be able to assess coding reliability. Also for the case studies, we will not interview all individuals who participate in the A2C program, and therefore some key information and details will most likely be omitted.
However, the selection of participants for interviews will attempt to recruit key informants and critical staff. The decision to recruit two key staff members from each participating organization is based on best practices from the literature (Kwait, Valente, & Celentano, 2001). The cost analyses will make some assumptions, such as a lifetime HIV cost of approximately $355,000 and a willingness to pay of $100,000 per QALY. However, these cost assumptions are based on the literature and represent standard amounts used for cost analyses. In addition, the number of HIV infections averted is modeled rather than measured biologically.
Plan to Achieve Moderate Evidence
This section outlines plans for continued evaluation during years two through five of SIF involvement, which would likely yield moderate or strong evidence.
We propose a comparative analysis of Access to Care and U.S. national data on linkage and retention in care. To better understand the unmet HIV care and treatment needs of individuals living with HIV, scientists have recently constructed cascades of the spectrum of engagement in care. The spectrum of engagement in care includes being HIV infected, diagnosed HIV positive, linked to HIV care, retained in HIV care, on ART, and having an undetectable viral load. The cascades estimate the number of PLWHA in each category and allow researchers to estimate the percentage of PLWHA who are linked to care and the percentage of PLWHA who have an undetectable viral load. For example, Gardner recently estimated that 19 percent of PLWHA have an undetectable viral load, and the CDC estimated that 28 percent of PLWHA have a suppressed viral load (CDC, 2011; Gardner, McLees, Steiner, Del Rio, & Burman, 2011). The national evaluation will include a comparison of the SIF cascade to the CDC’s national-level cascade; thus, the CDC’s national data on HIV linkage to care would serve as our ‘comparison group’. Both cascades would include the stages of engagement described above.
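As a rough sketch of what such a cascade comparison could look like: the A2C counts below are hypothetical placeholders, and only the national percentages come from the sources cited above. Note that the national cascade begins at “HIV infected,” whereas program data begin at diagnosis, so the denominators differ.

```python
# Hypothetical A2C client counts at each cascade stage (placeholders).
# Percentages here use diagnosed clients as the denominator; the national
# estimates use all PLWHA, so any comparison is approximate.
a2c_counts = {
    "Diagnosed HIV positive": 500,
    "Linked to HIV care": 410,
    "Retained in HIV care": 320,
    "On ART": 260,
    "Undetectable viral load": 170,
}

diagnosed = a2c_counts["Diagnosed HIV positive"]
for stage, n in a2c_counts.items():
    print(f"{stage:<24} {n:>4}  ({100 * n / diagnosed:.0f}%)")

# National benchmarks cited above: 19% undetectable (Gardner et al., 2011)
# and 28% suppressed (CDC, 2011), both as a share of all PLWHA.
a2c_pct = 100 * a2c_counts["Undetectable viral load"] / diagnosed
print(f"A2C undetectable: {a2c_pct:.0f}% vs. national estimates of 19-28%")
```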
Achieving a Moderate Level of Evidence: Evaluation Activities for Years Four and Five
During years two and three of the project, JHU will work with a selected site (or sites) to design an evaluation that meets the requirements of a moderate or strong level of evidence. We anticipate working closely with AU and a site to develop a detailed evaluation proposal that would outline the research questions to be answered, the evaluation design (including sampling, recruitment, and retention), an analysis strategy, and limitations. We anticipate that data collection would begin in year four.
While it is not possible at this time to specify the details of this evaluation plan, below we discuss some of the types of designs we think would be feasible. In suggesting these designs, we are assuming that we are working with a site that has access to existing medical records.
1. Randomized comparison group design. Randomization could occur at the individual level or at the sub-site level. If we randomized at the individual level, our sampling frame could be a list of patients who had been out of care for a year (failing to have two visits two months apart in the past 12 months). These patients could be randomized to an intervention condition (e.g. outreach followed by six months of peer navigation) or a standard of care condition (e.g. a call and referral by a case manager). If we randomized at the sub-site level, the approximately ten to twelve partner organizations for a site could be randomized to either an intervention condition or a standard of care condition. Data on linkage to care, retention in care, CD4, and viral load (VL) could be collected at baseline, six months, and twelve months for the intervention and the control group. This design would allow us to compare differences between the intervention and the control group at baseline, six months, and twelve months.
This is a fairly robust design, as the randomized control group should limit biases in how participants are enrolled in the program (a sketch of individual-level random assignment follows this list of designs). Because the control group would have some interaction with study staff in order for data collection on CD4 and VL to be possible, we would expect our results to be biased toward null findings. An alternative to this design would be to track only the outcomes of linkage to care and retention in care; this could be done passively for participants in the control group and would therefore be less burdensome.
2. Interrupted time series design without a comparison group. Clients identified as being out of care based on their previous medical records would be enrolled in the study and followed for six months. During this time, they would receive the usual standard of care, and an outreach worker would collect data on participants’ visit history, CD4, and viral load. After six months, participants would be exposed to the intervention condition (e.g. six months of peer navigation). The study would compare visit history and health outcomes prior to exposure to the program with visit history and health outcomes after exposure.
This design has appeal because all out-of-care participants are exposed to the program; there is simply a lag from their enrollment to the start of the intervention. However, this lag could be problematic, since it becomes increasingly difficult to locate out-of-care clients as time passes. There are additional limitations of this design worth mentioning. Because the pre-exposure (comparison) period and the post-exposure (intervention) period are not observed at the same time, factors other than exposure to the program that change over the study period could affect the outcome variables measured.
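As referenced in design 1 above, here is a minimal sketch of individual-level random assignment; the patient IDs, arm sizes, and seed are placeholders, not from the SEP:

```python
import random

# Hypothetical sampling frame: patients out of care for the past year.
out_of_care_ids = [f"PT{n:03d}" for n in range(1, 21)]

# Randomly allocate half to the intervention arm (e.g. peer navigation)
# and half to the standard-of-care arm, seeding for reproducibility.
rng = random.Random(42)
shuffled = out_of_care_ids[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
intervention = sorted(shuffled[:half])
standard_of_care = sorted(shuffled[half:])

print("Intervention arm:    ", intervention)
print("Standard-of-care arm:", standard_of_care)
```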
Given our previous experience working with sites on implementing the national evaluation, it is more likely that sites have the capacity to conduct an interrupted time series design. However, we anticipate exploring both options equally.
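Should the interrupted time series design be pursued, the pre/post comparison could be analyzed with a segmented regression along these lines. This is a sketch only: the monthly visit counts are invented, and the use of pandas and statsmodels is an assumption, not a method specified in the SEP.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly care-visit counts for one cohort: months 1-6 are
# pre-intervention (standard of care), months 7-12 are post-intervention.
df = pd.DataFrame({
    "month": range(1, 13),
    "visits": [3, 2, 3, 2, 2, 3, 4, 5, 5, 6, 6, 7],
})
df["post"] = (df["month"] > 6).astype(int)            # 1 after intervention start
df["months_since"] = (df["month"] - 6).clip(lower=0)  # time since intervention

# Segmented regression: baseline trend, level change at the intervention,
# and change in trend after the intervention.
model = smf.ols("visits ~ month + post + months_since", data=df).fit()
print(model.summary().tables[1])
```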