Evidence-Based Decision Making
Improve Government Service Delivery
"What does it mean […] to create 'evidence-based initiatives?' It means that the administration strives to be as certain as possible that federal dollars are spent on social intervention programs that have been proven by rigorous evidence to work."
- Ron Haskins and Greg Margolis, Show Me The Evidence
Purpose and Outcomes
Purpose: Inform how evidence-based decision-making and its common approaches to practice can further agency goals.
Evidence-based decision-making helps to achieve greater impact per dollar by focusing resources on what works. The Office of Management and Budget (OMB) outlined the importance of using evidence-based approaches that allow agencies to measure the effectiveness of programs, services, and products by evaluating what's working, where it's succeeding, for whom, and under what circumstances. By using data, performance metrics, and assessments, agencies can target their resources to invest in programs and initiatives shown to be the most effective.
An evidence-based decision-making approach promotes efficient, cost-saving methods and presupposes the routine, rigorous use of data and evaluations in funding decisions. The benefits of this decision-making approach are numerous, including creating a culture where continuous learning and program improvement lead to better overall performance (Source: David Garvin, Amy Edmondson, and Francesca Gino, "Is Yours a Learning Organization?," Harvard Business Review, March 2008).
All federal programs and services can benefit from evidence-based approaches, enhancing their ability to:
- Collect and use data
- Employ appropriate program assessment to understand what is and is not working
- Enable continuous improvement across initiatives, programs, and agencies
In 2009, OMB established guidance to encourage evaluation across government agencies and defined evidence as "the available body of facts or information indicating whether a belief or proposition is true or valid. Evidence can be quantitative or qualitative and may come from a variety of sources, including performance measurement, evaluations, statistical series, retrospective reviews, and other data analytics and research." (Source: Office of Management and Budget, "Analytical Perspectives: Budget of the United States Government, Fiscal Year 2017," 2016. See chapter 7, pgs. 71-72.)
Outcomes: Evidence-based decision-making exists to ensure that the impact you say you're going to have translates into positive future outcomes for your work. The section below on "Implementing Evidence Requirements" identifies the types of data that may exist or be most appropriate for evidence-based decision-making strategies.
The remaining sections outline three broad strategies for implementing evidence-based decision-making across the federal government. These approaches share a common emphasis on producing evidence about, and understanding of, funded initiatives:
- Tiered Grantmaking: Grant programs that include evidence requirements give agencies a systematic way to structure investments: supporting new ideas while also investing in the scale-up of approaches that have demonstrated results.
- Learning Agenda Strategy: Implementing a learning agenda emphasizing federal employees and grantees' understanding of how to apply evidence- and data-based decision making and ensuring they have the resources, capacity, and data needed to implement this approach.
- Pay for Success (PFS) through Public-Private Partnerships: Using public-private partnerships to achieve outcomes through evidence-based programs and pay for them in a more cost-effective way. PFS can be defined three ways: as a new type of financing model, a new form of outcomes-focused contract, or a new approach to public-private partnership.
Several examples of agencies using evidence-based decision-making approaches currently exist:
- U.S. Agency for International Development's (USAID) Development Innovation Ventures (DIV) Program: Taking a portfolio approach, DIV invests small sums of funding in many relatively unproven ideas, but continues to support only those that demonstrate rigorous evidence of impact, cost-effectiveness, and potential to scale via the public and/or private sector. In six years, DIV has invested more than $90 million in nearly 170 innovations across all 10 sectors.
- Department of Labor (DOL): To adopt a robust learning agenda, the DOL established the Chief Evaluation Office (CEO) in 2010, and has since made significant progress in institutionalizing a culture of evidence and learning. Responsible for managing the DOL's evaluation program, the CEO is committed to conducting rigorous, relevant, and independent evaluations, as well as to funding research through a collaborative learning agenda process. Through its work, the CEO has been able to connect evaluation with performance and partner cross-agency to encourage the adoption of analytical approaches in decision-making.
- Social Innovation Fund (SIF): A program of the Corporation for National and Community Service that has used a tiered-evidence approach to award more than $240 million in funding for program expansion and evaluation in communities across the country since 2010; 2014 and 2015 appropriations included up to $21.7 million to support development of PFS projects.
Implementing Evidence Requirements for Approaches
Evidence comes in many forms, and different types of evidence are appropriate for different purposes. Agencies should develop a portfolio of evidence that includes the following:
- High-quality performance measurement: Outcome and output measures that align with the theory of change, paired with a systematic method for collecting and reporting data on a regular basis.
- Implementation or process evaluations: Investigate how a program is being enacted and whether it is carried out as intended. The process includes quantitative and qualitative methods to capture measurable units and descriptive elements.
- Formative evaluations: Ensure that a program or activity is feasible, appropriate, and acceptable before it is fully implemented.
- Outcome evaluation: Tracks whether the program achieved the identified desired outcomes, including pre- and post-measurements to identify changes that occurred during a program's implementation.
- Impact Evaluation: Designed to determine if the outcomes observed are due to having received program services or an intervention. It is the only way to determine cause and effect. There are several methodologies that can be used to achieve this:
- Quasi-Experimental Design: Includes a comparison group formed using a method other than random assignment, or that controls for threats to validity using other counterfactual situations.
- Randomized Control Trials (RCTs): Examines the effects of a program by comparing individuals who receive it with a comparable group who do not. Individuals are randomly assigned to the two groups to try to ensure that, before taking part in the program, each group is statistically similar in both observable and unobservable ways.
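The random-assignment logic behind an RCT can be sketched in a few lines of code. This is a minimal illustration, not an official methodology; the function names, the outcome functions, and the example numbers are all hypothetical.

```python
import random
import statistics

def run_rct(individuals, outcome_under_treatment, outcome_under_control, seed=0):
    """Randomly assign individuals to a treatment or control group, then
    compare average outcomes. The outcome functions stand in for real-world
    measurement after the program has run."""
    rng = random.Random(seed)
    pool = list(individuals)
    rng.shuffle(pool)              # random assignment balances the groups in
    half = len(pool) // 2          # both observable and unobservable ways
    treatment, control = pool[:half], pool[half:]
    return (statistics.mean(outcome_under_treatment(i) for i in treatment)
            - statistics.mean(outcome_under_control(i) for i in control))

# Hypothetical program that raises each participant's outcome score by 5 points.
baseline = {i: 50 + (i % 10) for i in range(1000)}
effect = run_rct(baseline, lambda i: baseline[i] + 5, lambda i: baseline[i])
```

With 1,000 individuals, the estimated effect lands close to the true effect of 5; with random assignment, any gap reflects only sampling noise rather than pre-existing differences between the groups.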
Approach A: Tiered Grantmaking
Tiered grantmaking allows innovative ideas to rise up from local practitioners or other program sectors, then be tried out, tested, and scaled, advancing understanding of a particular policy issue.
For agencies, it's a valuable way of directing investments towards programs and projects that provide greater impact for each dollar invested.
Agencies can structure grant competitions into different tiers, varying the amounts of funding available depending on where a program or intervention falls on the continuum of evidence of effectiveness. Many programs distribute funding across three tiers:
Highest tier: For programs, services, and products where the evidence base is "strong," that is, they have been proven effective through multiple random assignment studies or strong quasi-experimental studies and can be replicated with fidelity. These projects are deemed suitable for scaling and warrant funding at the highest level because they have been shown to work.
Middle tier: For programs, services, and products with only a moderate evidence base, such as limited quasi-experimental studies or a single or small random assignment study. Moderate-level funding is provided for replication grants designed to further test and validate effectiveness.
Lowest tier: Where there is only preliminary evidence or a strong theory of action, funding is offered for development or proof of concept projects with an appropriate evaluation design to determine whether the project would merit further development.
(Source: The White House, "A Strategy for American Innovation," 2015)
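The three-tier structure above is essentially a decision rule mapping evidence strength to funding level. A minimal sketch of that rule, with illustrative thresholds and labels that are not an official rubric, might look like:

```python
def funding_tier(rct_studies, quasi_experimental_studies, has_strong_theory):
    """Assign one of three funding tiers based on the strength of a program's
    evidence base. Thresholds here are hypothetical illustrations only."""
    if rct_studies >= 2 or (rct_studies >= 1 and quasi_experimental_studies >= 1):
        return "highest"     # strong evidence: fund scale-up
    if rct_studies == 1 or quasi_experimental_studies >= 1:
        return "middle"      # moderate evidence: fund replication
    if has_strong_theory:
        return "lowest"      # preliminary evidence: fund proof of concept
    return "ineligible"      # no evidence and no strong theory of action
```

In practice an agency's rubric would weigh study quality, sample size, and replication fidelity, not just study counts; the point is that the tier assignment is explicit and auditable.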
This approach's goal is to identify replicable evidence-based models and bring them to scale. This tiered approach can seed multiple potential interventions and encourage further testing and validation (Source: Results for America, "Invest in What Works Fact Sheet: Evidence-Based Innovation Programs," October 2015). It avoids larger investments in ineffective programs, while the built-in mechanism for scaling up interventions that work also helps prevent the troubling problem of not investing in programs with proven high returns (Source: The White House, "2014 Economic Report of the President," 2014. See Chapter 7: Evaluation as a Tool for Improving Federal Programs).
A tiered-evidence approach to grantmaking has implications for leaders, policymakers, and career civil service employees. Problems across social, economic, and environmental domains are often at varying levels of scientific understanding. Engaging researchers in-house or in the broader scientific community through the tiered model builds a foundation for solving big problems using evidence.
Senior leaders and career employees looking to introduce an evidence-driven approach to grantmaking should understand the existing state of knowledge, define and focus their efforts, and ensure that proposed approaches are empirically validated by experienced researchers using quantitative scientific methods.
Approach B: Learning Agenda Strategy
Adopt learning agenda approaches in which you collaboratively identify the critical questions that will make your programs work more effectively. The key components of that approach are that agencies:
- Identify the most important questions that need to be answered to improve program implementation and performance. These questions should reflect the interests and needs of a large group of stakeholders, including program office staff and leadership, agency and administrative leadership, program partners at state and local levels, and researchers, as well as legislative requirements and congressional interests.
- Strategically prioritize which questions to answer within available resources, including which studies or analyses will help the agency make the most informed decisions.
- Identify the most appropriate tools and methods (e.g., evaluations, research, analytics, and/or performance measures) to answer each question.
- Implement studies, evaluations, and analyses using the most rigorous methods appropriate to the context.
- Develop plans to disseminate findings in accessible and useful ways to program officials, policy-makers, practitioners, and other key stakeholders, including integrating results into performance management.
Three elements are important to successfully implement evidence-based policy:
- Build staff capacity: Hire staff or contractors who understand evaluation and data collection methodologies and can translate these concepts to other program staff. Having the appropriate technical skills to create, maintain, and report on data systems and data sets is integral to any evidence-based approach.
- Develop a coalition of support: Build and maintain support from all levels of the agency, including visible leadership buy-in and investment.
- Budget for evaluation activities: Assess if there is a budgetary authority for evaluation spending or if there is flexibility within an agency or program's budget to set aside funds for evaluation activities.
Approach C: Pay for Success (PFS) Strategy
PFS programs are outcomes- and evidence-based investments, allowing agencies to invest specifically in an issue area where they hope to achieve outcomes and scale interventions that have demonstrated impact. According to Results for America's Invest in What Works: Pay for Success, in a PFS investment "a government agency enters into a contract with an intermediary organization to achieve specific outcomes that will produce government savings. The contract specifies how results will be measured and the level of outcomes that must be achieved for government to make payments. The intermediary selects one or more service providers to deliver a proven or promising intervention expected to produce the desired outcomes. Funding for service delivery comes from outside investors, often secured by the intermediary. If the desired outcomes are achieved, then government pays the intermediary, which in turn pays investors."
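The payment flow in that quoted definition — government pays only if contracted outcomes are met, and the intermediary repays investors from that payment — can be sketched as a simple settlement function. All parameter names and figures here are hypothetical illustrations, not terms from any actual PFS contract.

```python
def settle_pfs_contract(measured_outcome, outcome_threshold,
                        government_payment, investor_principal,
                        investor_return_rate):
    """Settle a simplified Pay for Success contract. The government pays the
    intermediary only if the independently measured outcome meets the
    contracted threshold; the intermediary then repays investors their
    principal plus an agreed return out of that payment."""
    if measured_outcome < outcome_threshold:
        # Outcomes not achieved: government pays nothing, investors bear the loss.
        return {"government_pays": 0.0, "investors_receive": 0.0}
    payout = investor_principal * (1 + investor_return_rate)
    return {"government_pays": government_payment,
            "investors_receive": min(payout, government_payment)}

# Hypothetical contract: pay $1M if at least 70% of participants hit the outcome.
success = settle_pfs_contract(0.80, 0.70, 1_000_000, 800_000, 0.05)
failure = settle_pfs_contract(0.50, 0.70, 1_000_000, 800_000, 0.05)
```

The design choice this illustrates is risk transfer: in the failure case the government's cost is zero, which is why PFS requires outcomes that are clearly defined and measurable up front.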
Common programmatic elements of Pay for Success initiatives include:
Budgetary Authority or Flexibility: PFS contracts are inherently years-long to allow for launch, implementation, and evaluation. Agencies must have the budgetary authority, or flexibility, to commit funds over the life of a long-term PFS program.
Access to Expertise via Technical Assistance Contracts or Internal Staffing: Over the years-long process, staffing needs will shift through a variety of roles. Just-in-time access to subject-matter experts with the needed technical skills is key to a PFS initiative's long-term viability.
Agency Focus on an Issue that Lends Itself to PFS Programs: You'll need to ensure that an intervention is defined and measurable in terms of outcomes and costs, and that there are existing interventions with demonstrated evidence of effectiveness.
Actions and Considerations
There are many ways to build evidence of what works. After reviewing many federal evaluation initiatives in 2016, the Office of Management and Budget identified five guiding principles that should be part of any evaluation policy:
- Rigor: Use the most rigorous methods that are appropriate to the evaluation questions and feasible within budget or other constraints.
- Relevance: Evaluation priorities should consider legislative requirements and congressional interests, and reflect other stakeholders' interests and needs.
- Transparency: Evaluation plans, ongoing work, and findings should be easily accessible. Release them regardless of findings.
- Independence: Insulate evaluation functions from undue influence or bias.
- Ethics: Conduct evaluations in an ethical manner that safeguards participants' dignity, rights, safety, and privacy.
Resources
- Office of Management and Budget, "Analytical Perspectives: Budget of the United States Government, Fiscal Year 2018," 2017.
- Analytical Perspectives contains analyses concerning a number of subject areas that place the Budget of the U.S. Government in perspective. Chapter 6: Building and Using Evidence to Improve Government Effectiveness addresses the U.S. Government's definition and understanding of evidence-based evaluation.
- Department of Labor Evaluation Policy. US Department of Labor, November 2013.
- This policy describes the Department of Labor's (DOL) governance and use of program evaluations as part of continuous improvement within the agency.
- Evidence-Based Policymaking Commission Act of 2016
- Act established within the executive branch a Commission on Evidence-Based Policymaking, including that Commission's responsibility and scope for evaluation.
- Increased Emphasis on Program Evaluations, OMB M-10-01, October 7, 2009.
- This memo announces the OMB's 2009 launch of government-wide efforts to inform and encourage participation in rigorous program evaluations.
- Next Steps in the Evidence and Innovation Agenda, OMB M-13-17, July 26, 2013.
- This memo provides detail and resources for agencies to include evidence and evaluation in their 2015 Budget submissions. Specific strategies are outlined for agency budget proposals to address both evidence and evaluation.
- Pew-MacArthur Results First Initiative, "Legislating Evidence-Based Policymaking: A Look at State Laws That Support Data-Driven Decision-Making," March 2015.
- Pew-MacArthur Results First Initiative provides a review of 100 state statutes between 2004 and 2014 relating to evidence-based evaluation programs and assesses common approaches to evaluation across the legislative material.
- Andy Feldman, "The Role of a Chief Evaluation Officer: An Interview with Demetra Nightingale, Chief Evaluation Officer, U.S. Department of Labor – Episode 42." GovInnovator, March 23, 2014.
- This interview with Demetra Nightingale gives federal, state, and local leaders insights into the DOL's establishment of a Chief Evaluation Officer role, and how best to replicate a similar role or its duties to encourage evidence and evaluation.
- Alex Neuhoff, Simon Axworthy, Sara Glazer, and Danielle Berfond, "What Works Marketplace: Helping Leaders Use Evidence to Make Smarter Choices." Results for America, April 2015.
- What Works Marketplace is a policy report surveying the 'market' for evidence on effectiveness; assessing its current state and areas of opportunity using data from supply- and demand-side interviews, research, and analysis of supply-side data sources.
- What Works Clearinghouse
- Hosted by the Department of Education's Institute of Education Sciences with the express goal of providing "educators with the information they need to make evidence-based decisions" through aggregation of existing research on evidence-based programs, products, practices, and policies.