
Improve Government Services Delivery

Evidence-Based Decision Making


"What does it mean […] to create 'evidence-based initiatives?' It means that the administration strives to be as certain as possible that federal dollars are spent on social intervention programs that have been proven by rigorous evidence to work."

Summary

Purpose and Outcomes

Purpose: Explain how evidence-based decision-making and its common approaches can further agency goals.

Evidence-based decision-making helps to achieve greater impact per dollar by focusing resources on what works. The Office of Management and Budget (OMB) outlined the importance of using evidence-based approaches that allow agencies to measure the effectiveness of programs, services, and products by evaluating what's working, where it's succeeding, for whom, and under what circumstances. By using data, performance metrics, and assessments, agencies can target their resources to invest in programs and initiatives shown to be the most effective.

Taking an evidence-based decision-making approach ensures the use of efficient, cost-saving methods and presupposes the routine, rigorous use of data and evaluations to make funding decisions. The benefits of this decision-making approach are numerous, including creating a culture where continuous learning and program improvement lead to better overall performance (Source: David Garvin, Amy Edmondson, and Francesca Gino, "Is Yours a Learning Organization?," Harvard Business Review, March 2008).

All federal programs and services can benefit from evidence-based approaches, enhancing their ability to:

  • Collect and use data
  • Employ appropriate program assessment to understand what is and is not working
  • Enable continuous improvement across initiatives, programs, and agencies

In 2009, OMB established guidance to encourage evaluation across government agencies and defined evidence as "the available body of facts or information indicating whether a belief or proposition is true or valid. Evidence can be quantitative or qualitative and may come from a variety of sources, including performance measurement, evaluations, statistical series, retrospective reviews, and other data analytics and research." (Source: Office of Management and Budget, "Analytical Perspectives: Budget of the United States Government, Fiscal Year 2017," 2016. See chapter 7, pp. 71-72.)

Outcomes: Evidence-based decision-making exists to ensure that the impact you intend to have translates into positive future outcomes for your work. The section below on "Implementing Evidence Requirements" identifies the types of data that may exist or be best suited to evidence-based decision-making strategies.

The remaining sections outline three broad strategies for implementing evidence-based decision-making across the federal government. These approaches share a common emphasis on producing evidence and understanding around funded initiatives:

  1. Tiered Grantmaking: Grant programs that include evidence requirements give agencies a systematic way to structure investments that support new ideas while also investing in the scale-up of approaches with demonstrated results.
  2. Learning Agenda Strategy: Implementing a learning agenda that emphasizes federal employees' and grantees' understanding of how to apply evidence- and data-based decision-making, and ensures they have the resources, capacity, and data needed to implement this approach.
  3. Pay for Success (PFS) through Public-Private Partnerships: Using public-private partnerships to achieve outcomes through evidence-based programs and pay for them in a more cost-effective way. PFS can be defined in three ways: a new type of financing model, a new form of outcomes-focused contract, or a new approach to public-private partnership.

Examples

Several examples of agencies using evidence-based decision-making approaches currently exist:

  1. U.S. Agency for International Development's (USAID) Development Innovation Ventures (DIV) Program: Taking a portfolio approach, DIV invests small sums of funding in many relatively unproven ideas, but continues to support only those that demonstrate rigorous evidence of impact, cost-effectiveness, and potential to scale via the public and/or private sector. In six years, DIV has invested more than $90 million in nearly 170 innovations across all 10 sectors.
  2. Department of Labor (DOL): To adopt a robust learning agenda, the DOL established the Chief Evaluation Office (CEO) in 2010, and has since made significant progress in institutionalizing a culture of evidence and learning. Responsible for managing the DOL's evaluation program, the CEO is committed to conducting rigorous, relevant, and independent evaluations, as well as to funding research through a collaborative learning agenda process. Through its work, the CEO has been able to connect evaluation with performance and partner cross-agency to encourage the adoption of analytical approaches in decision-making.
  3. Social Innovation Fund (SIF): A program of the Corporation for National and Community Service that has used a tiered-evidence approach to award more than $240 million in funding for program expansion and evaluation in communities across the country since 2010. Its 2014 and 2015 appropriations included up to $21.7 million to support the development of PFS projects.

Approach

Implementing Evidence Requirements

Evidence comes in many forms, and different types of evidence are appropriate for different purposes. Agencies should develop a portfolio of evidence that includes the following:

  • High-quality performance measurement: Outcome and output measures that align with the theory of change, paired with a systematic method to collect and report data on a regular basis.

  • Implementation or process evaluations: Investigate how a program is being enacted and whether it is carried out as intended. The process includes quantitative and qualitative methods to capture measurable units and descriptive elements.
  • Formative evaluations: Ensure that a program or activity is feasible, appropriate, and acceptable before it is fully implemented.
  • Outcome evaluation: Tracks whether the program achieved the identified desired outcomes, including pre- and post-measurements to identify changes that occurred during a program's implementation.
  • Impact evaluation: Designed to determine whether the outcomes observed are due to having received program services or an intervention. It is the only way to determine cause and effect. Several methodologies can be used to achieve this:
    • Quasi-Experimental Design: Includes a comparison group formed using a method other than random assignment, or that controls for threats to validity using other counterfactual situations.
    • Randomized Controlled Trials (RCTs): Examine the effects of a program by comparing individuals who receive it with a comparable group who do not. Individuals are randomly assigned to the two groups to try to ensure that, before taking part in the program, each group is statistically similar in both observable and unobservable ways. (A minimal illustrative sketch follows this list.)
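
To make random assignment and the resulting impact estimate concrete, here is a minimal Python sketch using simulated, entirely hypothetical outcome data. It assumes numpy and scipy are available; a real impact evaluation would also involve power analysis, pre-registration, and attrition handling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical participant pool: randomly assign each person to the
# treatment group (receives the program) or the control group (does not).
n = 500
assignment = rng.permutation(np.array([1] * (n // 2) + [0] * (n // 2)))

# Simulated outcomes: control outcomes are drawn from N(50, 10); the
# program adds a hypothetical +3 point effect for treated individuals.
true_effect = 3.0
outcomes = rng.normal(50, 10, size=n) + true_effect * assignment

treated = outcomes[assignment == 1]
control = outcomes[assignment == 0]

# With random assignment, the simple difference in group means is an
# unbiased estimate of the program's impact.
estimated_effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Estimated impact: {estimated_effect:.2f} (true effect: {true_effect})")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because assignment is random, the difference in group means estimates the program's causal effect, and the t-test gauges whether that difference could plausibly be due to chance.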

Approach A: Tiered Grantmaking

Tiered grantmaking allows innovative ideas to rise up from local practitioners or other program sectors and be tried out, tested, and scaled, advancing understanding of a particular policy issue.

For agencies, it's a valuable way of directing investments towards programs and projects that provide greater impact for each dollar invested.

Agencies can structure grant competitions into different tiers, varying the amounts of funding available depending on where a program or intervention falls on the continuum of evidence of effectiveness. Many programs distribute funding across three tiers (a hypothetical sketch of this decision rule appears after the list):

  • Highest tier: For programs, services, and products where the evidence base is "strong," that is, they have been proven effective through multiple random assignment studies or strong quasi-experimental studies and can be replicated with fidelity. These projects are deemed suitable for scaling and warrant funding at the highest level because they have been shown to work.

  • Middle tier: For programs, services, and products with only a moderate evidence base, supported by limited quasi-experimental studies or a single small random assignment study. Moderate-level funding is provided for replication grants designed to further test and validate effectiveness.

  • Lowest tier: Where there is only preliminary evidence or a strong theory of action, funding is offered for development or proof of concept projects with an appropriate evaluation design to determine whether the project would merit further development.

(Source: The White House, "A Strategy for American Innovation," 2015)
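
To illustrate how such a tiered decision rule might be encoded, the hypothetical sketch below maps an applicant's evidence level to a tier and a funding ceiling. The tier labels and dollar amounts are invented for illustration and are not drawn from any actual grant program.

```python
# Hypothetical sketch of a tiered funding decision rule. Tier names and
# funding ceilings are illustrative, not from any real grant program.
from enum import Enum

class Evidence(Enum):
    STRONG = 3       # multiple random assignment or strong quasi-experimental studies
    MODERATE = 2     # limited quasi-experimental or a single small random assignment study
    PRELIMINARY = 1  # preliminary evidence or a strong theory of action

def funding_tier(evidence: Evidence) -> tuple[str, int]:
    """Map an applicant's evidence level to a grant tier and funding ceiling."""
    tiers = {
        Evidence.STRONG: ("highest tier: scale-up grant", 10_000_000),
        Evidence.MODERATE: ("middle tier: replication/validation grant", 3_000_000),
        Evidence.PRELIMINARY: ("lowest tier: development/proof-of-concept grant", 500_000),
    }
    return tiers[evidence]

print(funding_tier(Evidence.MODERATE))
# ('middle tier: replication/validation grant', 3000000)
```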

This approach's goal is to identify replicable evidence-based models and bring them to scale. This tiered approach can seed multiple potential interventions and encourage further testing and validation (Source: Results for America, "Invest in What Works Fact Sheet: Evidence-Based Innovation Programs," October 2015). It avoids larger investments in ineffective programs, while the built-in mechanism for scaling up interventions that work also helps prevent the troubling problem of not investing in programs with proven high returns (Source: The White House, "2014 Economic Report of the President," 2014. See Chapter 7: Evaluation as a Tool for Improving Federal Programs).

A tiered-evidence approach to grantmaking has implications for leaders, policymakers, and career civil service employees. Problems across social, economic, and environmental domains are often at varying levels of scientific understanding. Engaging researchers in-house or in the broader scientific community through the tiered model builds a foundation for solving big problems using evidence.

Senior leaders and career employees looking to introduce an evidence-driven approach to grantmaking should understand the existing state of knowledge, define and focus their efforts, and ensure that proposed approaches are empirically validated by experienced researchers using quantitative scientific methods.

Approach B: Learning Agenda Strategy

Adopt a learning agenda approach in which you collaboratively identify the critical questions that, once answered, will make your programs work more effectively. The key components of that approach are that agencies:

  • Identify the most important questions that need to be answered to improve program implementation and performance. These questions should reflect the interests and needs of a large group of stakeholders, including program office staff and leadership, agency and administrative leadership, program partners at state and local levels, and researchers, as well as legislative requirements and congressional interests.

  • Strategically prioritize which questions to answer within available resources, including which studies or analyses will help the agency make the most informed decisions.

  • Identify the most appropriate tools and methods (e.g. evaluations, research, analytics, and/or performance measures) to answer each question.

  • Implement studies, evaluations, and analyses using the most rigorous methods appropriate to the context.

  • Develop plans to disseminate findings in accessible and useful ways to program officials, policymakers, practitioners, and other key stakeholders, including integrating results into performance management.

Three elements are important to successfully implement evidence-based policy:

  1. Build staff capacity: Hire staff or contractors who understand evaluation and data collection methodologies and can translate these concepts to other program staff. Having the appropriate technical skills to create, maintain, and report on data systems and data sets is integral to any evidence-based approach.
  2. Develop a coalition of support: Build and maintain support from all levels of the agency, including visible leadership buy-in and investment.
  3. Budget for evaluation activities: Assess whether there is budgetary authority for evaluation spending, or whether there is flexibility within an agency's or program's budget to set aside funds for evaluation activities.

Approach C: Pay for Success (PFS) Strategy

PFS programs are outcomes- and evidence-based investments, allowing agencies to invest specifically in an issue area where they hope to achieve outcomes and scale interventions that have demonstrated impact. According to Results for America's Invest in What Works: Pay for Success, in a PFS investment "a government agency enters into a contract with an intermediary organization to achieve specific outcomes that will produce government savings. The contract specifies how results will be measured and the level of outcomes that must be achieved for government to make payments. The intermediary selects one or more service providers to deliver a proven or promising intervention expected to produce the desired outcomes. Funding for service delivery comes from outside investors, often secured by the intermediary. If the desired outcomes are achieved, then government pays the intermediary, which in turn pays investors."
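
The outcome-contingent payment at the heart of that description reduces to a simple rule. The sketch below is a hypothetical illustration, with invented threshold and payment figures, of when government pays the intermediary.

```python
# Minimal, hypothetical sketch of PFS outcome-contingent payment logic.
def pfs_payment(measured_outcome: float,
                outcome_threshold: float,
                success_payment: float) -> float:
    """Government pays the intermediary (which in turn repays outside
    investors) only if the independently measured outcome meets the
    contracted threshold; otherwise government pays nothing."""
    return success_payment if measured_outcome >= outcome_threshold else 0.0

# Hypothetical contract: pay $5M if the measured recidivism reduction
# is at least 10 percentage points.
print(pfs_payment(measured_outcome=12.0,
                  outcome_threshold=10.0,
                  success_payment=5_000_000))  # 5000000
```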

Common programmatic elements of Pay for Success initiatives include:

  1. Budgetary Authority or Flexibility: PFS contracts are inherently multiyear, allowing time for launch, implementation, and evaluation. You must have funding that can sustain long-term PFS programs.

  2. Access to Expertise via Technical Assistance Contracts or Internal Staffing: Over the multiyear process, you'll need various roles to fit your staffing profile. Just-in-time staffing, combining subject-matter expertise with the needed technical skills, is key to a PFS initiative's long-term viability.

  3. Agency Focus on an Issue that Lends Itself to PFS Programs: You'll need to ensure that an intervention is defined and measurable in terms of outcomes and costs, and that existing interventions have demonstrated evidence of effectiveness.

Actions and Considerations

There are many ways to build evidence of what works. After reviewing many federal evaluation initiatives in 2016, the Office of Management and Budget identified five guiding principles that should be part of any evaluation policy:

  1. Rigor: Use the most rigorous methods that are appropriate to the evaluation questions and feasible within budget or other constraints.

  2. Relevance: Evaluation priorities should consider legislative requirements and congressional interests, and reflect other stakeholders' interests and needs.

  3. Transparency: Evaluation plans, ongoing work, and findings should be easily accessible. Release them regardless of findings.

  4. Independence: Insulate evaluation functions from undue influence or bias.

  5. Ethics: Conduct evaluations in an ethical manner that safeguards participants' dignity, rights, safety, and privacy.

