Improve Government Services Delivery



Purpose and Outcomes

Purpose: Agile methods support government in the iterative and responsive design, implementation, and ongoing management of both functional and mission-oriented projects.

Agile is a set of project management methodologies commonly used by innovative organizations because its practice emphasizes simplicity, quick iteration, and close customer collaboration. Agile is both a philosophy and an umbrella term for a collection of methodologies or approaches that share certain common characteristics. Agile methods are commonly applied in the federal government in contracting, project management, design discovery, and software development.

There is no universally accepted, formal definition for Agile, though one federal Agile practitioner informally defines Agile as:

"an iterative and incremental (evolutionary) approach to software development which is performed in a highly collaborative manner by self-organizing teams within an effective governance framework, with 'just enough' ceremony, that produces high quality solutions, in a cost effective and timely manner which meets the changing needs of its stakeholders." (Source: The White House, "Innovative Contracting Case Studies," August 2014)

The Agile philosophy is embodied in the four tenets of the Agile Manifesto and its 12 associated principles.

Although Agile is not one specific method, there are a variety of common approaches that differ with respect to project approach, resource availability, and desired end result of an Agile project.

Common Agile Methods

  • Extreme Programming (XP): Organizes work so customers and developers partner around a series of short development releases, paired with customer testing and intensive, continuous code review
  • Dynamic Systems Development Method (DSDM): Emphasizes business needs/value as the primary criteria for delivery and acceptance of a system, while rigorously defining requirements and ensuring reversibility of developments
  • Features-Driven Development (FDD): Designs and schedules the build around the end product's expected and desired features
  • Kanban: Manages a process focused on visualizing a workflow, limiting work-in-progress, and measuring and optimizing lead time
  • Scrum: Provides teams with a process to manage prioritization, implementation, delivery, and testing of iterative work
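
As an illustration of the Kanban entry above, here is a minimal sketch of a board that enforces work-in-progress (WIP) limits. The class, column names, cards, and limits are hypothetical, chosen only to show the mechanic of pull-based flow:

```python
from collections import OrderedDict

class KanbanBoard:
    """Minimal Kanban board: named columns, each with a WIP limit."""

    def __init__(self, wip_limits):
        # wip_limits maps column name -> max cards allowed at once
        self.columns = OrderedDict((name, []) for name in wip_limits)
        self.wip_limits = dict(wip_limits)

    def add(self, column, card):
        if len(self.columns[column]) >= self.wip_limits[column]:
            raise RuntimeError(f"WIP limit reached for '{column}'")
        self.columns[column].append(card)

    def move(self, card, src, dst):
        # Pull-based flow: a card moves only if the destination has capacity
        if len(self.columns[dst]) >= self.wip_limits[dst]:
            raise RuntimeError(f"WIP limit reached for '{dst}'")
        self.columns[src].remove(card)
        self.columns[dst].append(card)

board = KanbanBoard({"To Do": 5, "In Progress": 2, "Done": 100})
board.add("To Do", "Draft RFP")
board.add("To Do", "User interviews")
board.move("Draft RFP", "To Do", "In Progress")
```

The WIP limit is the point: capping "In Progress" forces the team to finish work before starting more, which is what shortens and exposes lead time.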

The variety of Agile methods available must be considered in relation to the intent and expected outcomes of any given intervention. Adopting an Agile methodology can provide practitioners with one or more of the following:

  • A set of engineering best practices that allow for rapid delivery of high-quality software
  • A project management process that encourages frequent evaluation and adaptation
  • A leadership philosophy that promotes collaboration, teamwork, and accountability

Agency practitioners should have the understanding and resources necessary to incorporate Agile methods into the development and execution of programs, services, and products.


Below are applications of Agile methods at the federal and state levels, including the benefits of using these methodologies, the challenges of adoption and how to address them, and common best practices for Agile methods.

  1. Department of Justice (DOJ): Beginning in 2014, the DOJ adopted a Scrum Agile methodology to modernize and consolidate the DOJ website. With early success in implementing version releases on time and on budget, the DOJ completed the overhaul across 12 iterations within the Scrum methodology. Scrum Agile lessened the impact of project challenges and encouraged stakeholder investment, as the iterative approach allowed for user testing and feedback during development.
  2. Salt Lake City: The Salt Lake City Information Management Services (IMS) department wanted to implement fresh ways to approach business, so the department leadership established a project management office (PMO) and Agile training series. The Salt Lake City IMS now applies Agile for a variety of projects including software development, system implementations, and upgrades.
  3. State of Maine: The State of Maine moved to Agile processes to better quantify and demonstrate success within initiatives while they are in progress, and to reduce the risk of costly project failures. In adopting Agile, the State of Maine established an Agile Center of Excellence (CoE), responsible for equipping project teams with the skills and resources necessary to apply Agile effectively within their culture.


Benefits of Using Agile

Agile methods enable teams to develop programs, services, and products incrementally and to adapt quickly to changing requirements. Other project management methodologies, such as waterfall, focus on developing a 100% solution upfront. Agile methods differ in that they focus on building working increments of a given product or solution, which allows both faster time-to-market and the flexibility to change requirements and/or direction for the next version of the program, service, or product. This also reduces lifecycle costs by avoiding the creation of unnecessary and unwanted components.

Although Agile methods are iterative and build programs, services, or products in increments, they also provide a clear process and structure for discussing priorities and tradeoffs. This generates more accurate assessments of the project state at any given time. The process also requires constant collaboration between contractors and government personnel, increasing the understanding of the requirements which potentially results in a deliverable better suited to the end users' needs.

Other benefits of using Agile Methods include:

  • Early user insight into the actual design and implementation of the solution
  • Early and ongoing developer insight into user behavior, leading to more usable applications
  • The ability to change requirements and priorities throughout the life cycle
  • Opportunities to fail fast and make timely adjustments if early solution ideas turn out to be flawed. This minimizes the time and money spent before that learning occurs and enables rapid redirection
  • Surfacing and addressing bugs earlier in the process

(Source: The White House, "Innovative Contracting Case Studies," August 2014)

Any federal agency that develops or purchases software can benefit from Agile – either by embedding Agile practices in its work or by working with offerors using Agile methods. When adopting Agile – or entering into contracts with Agile-practicing businesses – agencies should be mindful of the various flavors of Agile and select the method(s) that most closely align with their goals.

Understanding Agile: Common Challenges

Adoption of any methodology requires a change in the prevailing culture, and adopting Agile is no different. It requires a marked shift in development perspective from traditional waterfall approaches, with implications for organization structure, rewards systems, communications, decision-making, and staffing models. To meet the challenges of adopting Agile, a program management office can take specific actions:

  • Plan for Agile practices and any deficits in resource skill or knowledge;
  • Address deficits and align resources through continuous Agile training;
  • Anticipate changes in the environment and business model that may affect how Agile is applied and implemented;
  • Be adaptive: flexibility is a tenet of Agile and should be a trait of any project team.

Additionally, terminology will need to be learned or relearned if terms have different meanings across project teams. Once adopted, however, Agile's transparent nature provides continuous and immediate insight into the project state.

Best Practices when using Agile

In addition to understanding the common benefits and challenges to applying an Agile framework, the Government Accountability Office (GAO) has identified a number of best practices to consider and practice when adopting Agile:

  • Start with Agile guidance and an Agile adoption strategy
  • Enhance migration to Agile concepts using Agile terms and examples of those terms
  • Continuously improve Agile adoption at both the project and organization level
  • Seek to identify and address impediments at the organization and project levels
  • Obtain stakeholder/customer feedback frequently
  • Empower small, cross-functional teams
  • Include requirements related to security and progress monitoring in your queue of unfinished work (the backlog)
  • Gain trust by demonstrating value at the end of each iteration
  • Track progress using tools and metrics
  • Track progress daily and visibly
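
The last two practices ("track progress using tools and metrics," "track progress daily and visibly") are commonly realized as a sprint burndown chart. A minimal sketch of the underlying arithmetic follows; the point values and sprint length are made up for illustration:

```python
def burndown(total_points, completed_by_day):
    """Remaining story points at the end of each day of an iteration."""
    remaining, line = total_points, []
    for done in completed_by_day:
        remaining -= done
        line.append(remaining)
    return line

def ideal_line(total_points, days):
    """Straight-line reference from the planned total down to zero."""
    return [round(total_points - total_points * (d + 1) / days, 1) for d in range(days)]

# A 5-day iteration with 20 points of planned work
actual = burndown(20, [3, 5, 2, 6, 4])   # -> [17, 12, 10, 4, 0]
ideal = ideal_line(20, 5)                # -> [16.0, 12.0, 8.0, 4.0, 0.0]
```

Plotting `actual` against `ideal` each morning is one simple way to make progress both daily and visible.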

Actions and Considerations

For teams actively adopting Agile methodologies for their projects, follow the guidance below to ensure effective delivery and collaboration among stakeholders throughout the project lifecycle:

  • Build a functioning minimum viable product (MVP) or service that solves a core user need, with a clear agreed-upon project deadline, using a beta or test period if needed
  • Ask for user feedback to see what works well and identify improvements that should be made
  • Ensure people building the service communicate closely, using techniques like war rooms, daily standups, and team chat tools
  • Keep delivery teams small and focused; limit organizational layers that separate these teams from the business owners
  • Release improvements to the product or service according to an appropriate cadence
  • Prioritize improvements and use an issue tracker to catalog them
  • Use a version control system to track and understand iterative changes
  • Ensure the entire team has access to the issue tracker and version control system
  • Use quality reviews with each improvement

Applying the above actions is integral to successful practice of any Agile framework.
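
To make the "prioritize improvements and use an issue tracker to catalog them" action concrete, here is a minimal sketch of issue cataloging and prioritization. Real teams would use an off-the-shelf tracker; the class, issue titles, and priority scheme below are hypothetical:

```python
import heapq

class IssueTracker:
    """Tiny issue tracker: catalog improvements, pull the highest-priority one next."""

    PRIORITIES = {"high": 0, "medium": 1, "low": 2}

    def __init__(self):
        self._heap, self._counter = [], 0

    def file(self, title, priority="medium"):
        # The counter keeps filing order stable among equal priorities
        heapq.heappush(self._heap, (self.PRIORITIES[priority], self._counter, title))
        self._counter += 1

    def next_up(self):
        """Return the highest-priority, earliest-filed open issue."""
        return heapq.heappop(self._heap)[2]

tracker = IssueTracker()
tracker.file("Fix broken login link", "high")
tracker.file("Refresh FAQ copy", "low")
tracker.file("Add audit logging", "high")
```

Pulling from `next_up()` in order guarantees high-priority improvements are released first, which is the behavior the cadence bullet above depends on.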


  • Section 804 of the National Defense Authorization Act (NDAA) of 2010
    • Section 804 of the NDAA authorizes the Secretary of Defense to recommend IT systems for implementation via project management methods that align to general Agile approaches.
  • FAR 16.3, Cost Reimbursement
    • FAR 16.3 provides guidance regarding multiple contract types that are amenable to the design and execution of an Agile methodology in a project.
  • LOE-Term FAR 16.306(d)(2) and indefinite delivery FAR 16.5
    • FAR 16.306(d)(2) relates level of effort to the iterative cadence within an Agile framework, where deliverables are provided according to a fixed period of time (or cadence). This guidance is valuable for structuring compensation and fees within an Agile development initiative.

Additional Resources

Evidence-Based Decision Making


"What does it mean […] to create 'evidence-based initiatives?' It means that the administration strives to be as certain as possible that federal dollars are spent on social intervention programs that have been proven by rigorous evidence to work."


Purpose and Outcomes

Purpose: Inform how evidence-based decision-making and its common approaches to practice can further agency goals.

Evidence-based decision-making helps to achieve greater impact per dollar by focusing resources on what works. The Office of Management and Budget (OMB) outlined the importance of using evidence-based approaches that allow agencies to measure the effectiveness of programs, services, and products by evaluating what's working, where it's succeeding, for whom, and under what circumstances. By using data, performance metrics, and assessments, agencies can target their resources to invest in programs and initiatives shown to be the most effective.

Taking an evidence-based decision-making approach ensures the use of efficient, cost-saving methods and presupposes routine, rigorous use of data and evaluations to make funding decisions. The benefits of this approach are numerous, including creating a culture where continuous learning and program improvement lead to better overall performance (Source: David Garvin, Amy Edmondson, and Francesca Gino, "Is Yours a Learning Organization?," Harvard Business Review, March 2008).

All federal programs and services can benefit from evidence-based approaches, enhancing their ability to:

  • Collect and use data
  • Employ appropriate program assessment to understand what is and is not working
  • Enable continuous improvement across initiatives, programs, and agencies

In 2009, OMB established guidance to encourage evaluation across government agencies and defined evidence as "the available body of facts or information indicating whether a belief or proposition is true or valid. Evidence can be quantitative or qualitative and may come from a variety of sources, including performance measurement, evaluations, statistical series, retrospective reviews, and other data analytics and research." (Source: Office of Management and Budget, "Analytical Perspectives: Budget of the United States Government, Fiscal Year 2017," 2016. See chapter 7, pgs. 71-72.)

Outcomes: Evidence-based decision-making exists to ensure that the impact you intend to have results in positive future outcomes for your work. The section below on "Implementing Evidence Requirements" identifies the types of data that may exist or be most appropriate for evidence-based decision-making strategies.

The remaining sections outline three broad strategies for implementing evidence-based decision-making across the federal government. These approaches share a common emphasis on producing evidence and understanding around funded initiatives:

  1. Tiered Grant-making: Grant programs that include evidence requirements give agencies a systematic investment structure that supports new ideas while also investing in the scale-up of approaches that have demonstrated results.
  2. Learning Agenda Strategy: Implementing a learning agenda emphasizing federal employees and grantees' understanding of how to apply evidence- and data-based decision making and ensuring they have the resources, capacity, and data needed to implement this approach.
  3. Pay for Success (PFS) through Public-Private Partnerships: Using public-private partnerships to achieve outcomes through evidence-based programs and pay for them in a more cost-effective way. Three definitions are possible: a new type of financing model, a new form of outcomes-focused contract, or a new approach to public-private partnership.


Several examples of agencies using evidence-based decision-making approaches currently exist:

  1. U.S. Agency for International Development's (USAID) Development Innovation Ventures (DIV) Program: Taking a portfolio approach, DIV invests small sums of funding in many relatively unproven ideas, but continues to support only those that demonstrate rigorous evidence of impact, cost-effectiveness, and potential to scale via the public and/or private sector. In six years, DIV has invested more than $90 million in nearly 170 innovations across all 10 sectors.
  2. Department of Labor (DOL): To adopt a robust learning agenda, the DOL established the Chief Evaluation Office (CEO) in 2010, and has since made significant progress in institutionalizing a culture of evidence and learning. Responsible for managing the DOL's evaluation program, the CEO is committed to conducting rigorous, relevant, and independent evaluations, as well as to funding research through a collaborative learning agenda process. Through its work, the CEO has been able to connect evaluation with performance and partner cross-agency to encourage the adoption of analytical approaches in decision-making.
  3. Social Innovation Fund (SIF): A program of the Corporation for National and Community Service that has used a tiered-evidence approach to award more than $240 million in funding for program expansion and evaluation in communities across the country since 2010; 2014 and 2015 appropriations included up to $21.7 million to support the development of PFS projects.


Implementing Evidence Requirements for Approaches

Evidence comes in many forms, and different types of evidence are appropriate for different purposes. Agencies should develop a portfolio of evidence that includes the following:

  • High-quality performance measurement: Implement outcome and output measures that align with the theory of change, along with a systematic method to collect and report data on a regular basis.

  • Implementation or process evaluations: Investigate how a program is being enacted and whether it is carried out as intended. The process includes quantitative and qualitative methods to capture measurable units and descriptive elements.
  • Formative evaluations: Ensure that a program or activity is feasible, appropriate, and acceptable before it is fully implemented.
  • Outcome evaluation: Tracks whether the program achieved the identified desired outcomes, including pre- and post-measurements to identify changes that occurred during a program's implementation.
  • Impact Evaluation: Designed to determine if the outcomes observed are due to having received program services or an intervention. It is the only way to determine cause and effect. There are several methodologies that can be used to achieve this:
    • Quasi-Experimental Design: Includes a comparison group formed using a method other than random assignment, or that controls for threats to validity using other counterfactual situations.
    • Randomized Control Trials (RCTs): Examines the effects of a program by comparing individuals who receive it with a comparable group who do not. Individuals are randomly assigned to the two groups to try to ensure that, before taking part in the program, each group is statistically similar in both observable and unobservable ways.
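
The mechanics of random assignment described above can be sketched in a few lines. This is illustrative only: the seed, even group split, and naive difference-in-means estimate are simplifying assumptions, and a real RCT involves power calculations and formal statistical inference:

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

def estimated_impact(treatment_outcomes, control_outcomes):
    """Naive difference-in-means impact estimate."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_outcomes) - mean(control_outcomes)

# Hypothetical pool of 10 participant IDs
treatment, control = random_assignment(list(range(10)), seed=1)
```

Because assignment is random, neither group systematically differs before the intervention, so the difference in outcomes can be attributed to the program itself.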

Approach A: Tiered Grantmaking

Tiered grantmaking allows for innovative ideas to rise up from local practitioners or other program sectors, be tried out, scaled, and tested, to advance understanding about a particular policy issue.

For agencies, it's a valuable way of directing investments towards programs and projects that provide greater impact for each dollar invested.

Agencies can structure grant competitions into different tiers, varying the amounts of funding available depending on where a program or intervention falls on the continuum of evidence of effectiveness. Many programs distribute funding across three tiers:

  • Highest tier: For programs, services, and products where the evidence base is "strong," that is, they have been proven effective through multiple random assignments or strong quasi-experimental studies that can be replicated with fidelity. These projects are deemed suitable for scaling and warrant funding at the highest level because they have been shown to work.

  • Middle tier: For programs, services, and products with only a moderate evidence base, supported by limited quasi-experimental studies or a single or small random assignment study. Moderate-level funding is provided for replication grants designed to further test and validate effectiveness.

  • Lowest tier: Where there is only preliminary evidence or a strong theory of action, funding is offered for development or proof of concept projects with an appropriate evaluation design to determine whether the project would merit further development.

(Source: The White House, "A Strategy for American Innovation," 2015)
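
The three-tier logic above can be sketched as a simple classification rule. The field names and thresholds below are illustrative assumptions, not drawn from any program's regulations:

```python
def funding_tier(evidence):
    """Map an applicant's evidence base to a funding tier, per the three-tier model."""
    strong = (evidence.get("rct_replications", 0) >= 2
              or evidence.get("strong_quasi_experimental", False))
    moderate = (evidence.get("rct_replications", 0) == 1
                or evidence.get("limited_quasi_experimental", False))
    if strong:
        return "highest"   # scale-up funding for proven models
    if moderate:
        return "middle"    # replication grants to validate effectiveness
    return "lowest"        # development / proof-of-concept funding
```

In practice the evidence review is a human judgment, but encoding it this way makes the continuum explicit: stronger evidence unlocks larger awards.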

This approach's goal is to identify replicable evidence-based models and bring them to scale. This tiered approach can seed multiple potential interventions and encourage further testing and validation (Source: Results for America, "Invest in What Works Fact Sheet: Evidence-Based Innovation Programs," October 2015). It avoids larger investments in ineffective programs, while the built-in mechanism for scaling up interventions that work also helps prevent the troubling problem of not investing in programs with proven high returns (Source: The White House, "2014 Economic Report of the President," 2014. See Chapter 7: Evaluation as a Tool for Improving Federal Programs).

A tiered-evidence approach to grantmaking has implications for leaders, policymakers, and career civil service employees. Problems across social, economic, and environmental domains are often at varying levels of scientific understanding. Engaging researchers in-house or in the broader scientific community through the tiered model builds a foundation for solving big problems using evidence.

Senior leaders and career employees looking to introduce an evidence-driven approach to grantmaking should understand the existing state of knowledge, define and focus their efforts, and ensure that proposed approaches are empirically validated by experienced researchers using quantitative scientific methods.

Approach B: Learning Agenda Strategy

Adopt learning agenda approaches in which you collaboratively identify the critical questions that will make your programs work more effectively. The key components of that approach are that agencies:

  • Identify the most important questions that need to be answered to improve program implementation and performance. These questions should reflect the interests and needs of a large group of stakeholders, including program office staff and leadership, agency and administrative leadership, program partners at state and local levels, and researchers, as well as legislative requirements and congressional interests.

  • Strategically prioritize which questions to answer within available resources, including which studies or analyses will help the agency make the most informed decisions.

  • Identify the most appropriate tools and methods (e.g. evaluations, research, analytics, and/or performance measures) to answer each question.

  • Implement studies, evaluations, and analyses using the most rigorous methods appropriate to the context.

  • Develop plans to disseminate findings in accessible and useful ways to program officials, policy-makers, practitioners, and other key stakeholders—including integrating results into performance.

Three elements are important to successfully implement evidence-based policy:

  1. Build staff capacity: Hire staff or contractors who understand evaluation and data collection methodologies and can translate these concepts to other program staff. Having the appropriate technical skills to create, maintain, and report on data systems and data sets is integral to any evidence-based approach.
  2. Develop a coalition of support: Build and maintain support from all levels of the agency, including visible leadership buy-in and investment.
  3. Budget for evaluation activities: Assess if there is a budgetary authority for evaluation spending or if there is flexibility within an agency or program's budget to set aside funds for evaluation activities.

Approach C: Pay for Success (PFS) Strategy

PFS programs are outcomes- and evidence-based investments, allowing agencies to invest specifically in an issue area where they hope to achieve outcomes and scale interventions that have demonstrated impact. According to Results for America's Invest in What Works: Pay for Success, in a PFS investment "a government agency enters into a contract with an intermediary organization to achieve specific outcomes that will produce government savings. The contract specifies how results will be measured and the level of outcomes that must be achieved for government to make payments. The intermediary selects one or more service providers to deliver a proven or promising intervention expected to produce the desired outcomes. Funding for service delivery comes from outside investors, often secured by the intermediary. If the desired outcomes are achieved, then government pays the intermediary, which in turn pays investors."

Common programmatic elements of Pay for Success initiatives include:

  1. Budgetary Authority or Flexibility: PFS program contracts are inherently years-long to allow for launch, implementation, and evaluation. You must have the budgetary authority or flexibility to fund long-term PFS programs.

  2. Access to Expertise via Technical Assistance Contracts or Internal Staffing: Over the years-long process, you'll need various roles to fit your staffing profile. Just-in-time staffing, with subject-matter experts who have the needed technical skills, is key to a PFS initiative's long-term viability.

  3. Agency Focus on an Issue that Lends Itself to PFS Programs: You'll need to ensure that an intervention is defined and measurable in terms of outcomes and costs, and that there are existing interventions with demonstrated evidence of effectiveness.

Actions and Considerations

There are many ways to build evidence of what works. After reviewing many federal evaluation initiatives in 2016, the Office of Management and Budget identified five guiding principles that should be part of any evaluation policy:

  1. Rigor: Use the most rigorous methods that are appropriate to the evaluation questions and feasible within budget or other constraints.

  2. Relevance: Evaluation priorities should consider legislative requirements and congressional interests, and reflect the interests and needs of other stakeholders.

  3. Transparency: Evaluation plans, ongoing work, and findings should be easily accessible. Release them regardless of findings.

  4. Independence: Insulate evaluation functions from undue influence or bias.
  5. Ethics: Conduct evaluations in an ethical manner that safeguards participants' dignity, rights, safety, and privacy.


Additional Resources

Human-Centered Design


“If human-centered design can guide us towards a human-centered process that accommodates how people work, how they like to discover and consume information, we’re all the better for it.” – Matt Conner, Acting Chief Information Security Officer and Director of Cybersecurity Office at the National Geospatial Intelligence Agency


Purpose and Outcomes

Purpose: Human-centered design (HCD)—sometimes called design thinking—is a discipline in which the needs, behaviors, and experiences of an organization’s customers (or users) drive the design of a solution to a particular problem.

HCD methods can guide work across products, programs, and policy and can also enable federal employees to engage with the public as partners to identify and address the root causes of problems, rather than the symptoms.

Human-centered design:

  • Makes government more participatory and responsive
  • Increases stakeholder engagement and cross-sector collaboration
  • Offers insight into citizens’ needs, behaviors, and decisions
  • Equips us with tools for generating, testing, and improving solutions

Ultimately, using this methodology ensures that we are solving the right problem in a way that works for the people we serve.



An HCD process follows three main areas of work before you have a working solution:

  • Identify and understand the problem
  • Brainstorm and select possible solutions
  • Build and test out a prototype

The concepts of divergent and convergent thinking are key to HCD. Divergent thinking explores many possible ideas/solutions, and convergent thinking narrows those ideas down to a few options or one solution.

The point of divergent thinking is to collect as many ideas as possible (no matter how unconventional), and then use convergent thinking to bring them back into reality and see what’s the best possible solution.

There are two schools of thought for HCD and Design Thinking–the IDEO Method and the Stanford d.School approach.

The IDEO Method

The IDEO Model starts with building empathy during the discovery phase, then moves through a flow of divergent and convergent thinking and doing. It starts with inspiration, then moves to ideation, and finally implementation:

Step 1: Inspiration

In this phase, you start learning how to better understand people. You observe their lives, hear their hopes and desires, and learn about the challenge before you.

Step 2: Ideation

This phase helps you make sense of what you’ve heard, generate lots of ideas, identify opportunities for design, and test and refine your solutions.

Step 3: Implementation

During this phase, you bring your solution to life. You figure out how to maximize its impact in the world.

Stanford d.School Approach

Stanford University’s d.School has developed a five-step process for HCD:

Step 1: Empathize

Put yourself in the shoes of your users or audience, and design ways to observe and listen to their experience with your product or program. Collect insights and lessons learned from the process.

Step 2: Define

Using the insights and lessons learned from step one, narrow down the possibilities to define the challenge ahead of you–what problem(s) you’re trying to solve. Use different ways to frame the problem clearly so that you can collect the best ideas to solve those problems. Many times, people define their problem as a “How Might We” question to help kick off the brainstorming (ideation) phase.

Step 3: Ideate

Using the clearly defined problem from step two, you can begin to think of all the ways to solve the problem. We do this through brainstorming. Use team brainstorming to bring in diverse perspectives and get better outcomes.

After you’ve created a list of great ideas, select the best ideas to create a short list to move to the next step. Also at this point, start defining how you’re going to pick the best solution over the others.

Step 4: Prototype

After you’ve narrowed down your choices, you can begin the process of prototyping, which is creating fast and inexpensive models of your solutions so you can get feedback from your users. The key here is to pick 2-3 possible solutions and then move quickly to create rough drafts to see if they will help your target audience or users.

Prototypes can be sketches or actual physical products built of sticks and paper. Whatever you create does not need to be perfect–it can be very rough; part of the process is refining what you have over many iterations so it gets better each time.

Step 5: Test

Now that you’ve created a prototype, talk to your users to get their thoughts on what you’ve created. Ask open-ended questions so you can really understand what they like or don’t like. This is not the time to get attached to your idea–be humble and listen to what your users are telling you. It will make the next draft much better, and the solution will improve over time.

Once you have prototyped and iterated many times, it is ready to pilot. A pilot allows you to test your solution in a real-life situation for a limited time and with a small target audience to see how it performs.

How might we encourage and support additional HCD projects at my agency?

Human-centered design is a process you can start implementing today on your existing projects to make them better. However, creating a larger culture that is more human-centered takes more groundwork and time.

Individuals or project teams using this approach often use it to tackle problems with existing government services, or when an existing problem needs a new solution. Government agencies may deploy, support, and encourage HCD on the front line as well as at management and leadership levels. These steps can help you spread these practices.

Step 1: See how you may support HCD practices based on your level within your agency

Here’s what you can do right now to help support spreading HCD at your agency, wherever you sit in that agency.

Agency Level How to Spread Human-Centered Design
Front Line Doers - Work with team members who have different responsibilities
- Suggest working with other offices
- Investigate sociological research on specific countries, communities, or populations to influence language and style of deliverables
- Create multiple advertising messages targeted at specific audience values
- Develop many user experience decisions for digital and physical products and services to improve information flow and access
- Make user interface decisions that affect the usability of digital and physical products, services, and applications for users with disabilities
- Research and examine culturally and regionally significant colors for design
- Develop and circulate a list of HCD practices for your specific office
Mid-level Managers - Create a collaboration space for your team
- Support flexibility and ambiguity of your team’s project
- Serve as a buffer for your project team’s work
- Propose policy, guidelines, and standards that institutionalize better usability and accessibility for all users
- Partner with other offices that serve similar audiences
- Create office hours for employees to come learn about your office’s HCD practices
- Meet with HCD leaders at other agencies
- Advocate for dedicated resources to support HCD projects
Executives - Broadcast HCD projects to other offices across your agency
- Support information sharing and developing software that uses HCD principles
- Attend team meetings to show leadership support and stay informed about HCD processes
- Advocate for increased collaboration on multi-agency campaigns
- Advocate for HCD as a business imperative that helps your organization deliver on its mission goals

Step 2: Make the case to leadership

Your leaders will need to approve an expanded HCD process, so focus your business case on leadership’s areas of concern. Early federal adopters of HCD have found success with the following strategies:

  • Build a coalition of colleagues interested in using HCD to demonstrate broad interest, different perspectives, and key elements when pitching to leadership.
  • Appeal to a motivating factor, such as budgetary concerns. While HCD may require an upfront investment for training or innovation lab assistance, the long-term savings of an improved program often outweigh the initial costs.
  • Use storytelling to pitch HCD to your leadership. If another group or agency has solved a similar issue or used HCD to solve other complex issues resulting in quantifiable results, use their stories to validate using HCD in your agency.
  • Take advantage of training opportunities. Agencies like OPM offer HCD training; OPM is currently designing training for federal executives, whose buy-in and understanding of HCD are critical to its success within their agencies.
  • Focus on results. Agencies may hesitate to implement HCD because they are uncertain how to measure the success or failure of HCD methods, or may balk mid-project at the shifting end goals and success markers that are natural in HCD implementation. Employees and leadership have a natural desire to benchmark progress, so plan to re-brief your leadership on the HCD process throughout the project. Focus on the proven benefits of multiple failures and pivots, and on the value the project will eventually demonstrate.

Step 3: Market HCD within your agency

To successfully use and scale an HCD approach, you’ll need to build agency-wide support and interest by effectively marketing HCD within your agency.

One proven approach to marketing is called RAISE: Research, Adaptation, Implementation, Strategy, and Evaluation:

  • Research: Understand your audience through research and analysis, including surveys, focus group testing, interviews, and intake meetings.
  • Adaptation: Create ideas and messages targeted at your audience based on your research. Develop materials like brochures, pamphlets, web pages, or even short videos. Include personal success stories or case studies.
  • Strategy: Develop an implementation strategy to publish your messaging and materials. Training programs and classes, e-blasts, webinars, factsheets, posters and social media are all effective methods.
  • Evaluation: Create metrics and a process for measuring your marketing success. Evaluate the feedback and then tailor your plan and messaging accordingly.

Actions and Considerations

Every HCD project will vary based on the environment, the targeted problem(s), stakeholders/customers, and goals. It will naturally evolve and change as you follow the broad phases:

  • Adopt and embrace multidisciplinary skills and perspectives
  • Develop and communicate a clear understanding of the users, tasks, and environments
  • Make design user-centered and evaluation-driven
  • Analyze the overall consumer experience
  • Involve the consumer in the design and production process
  • Iterate the MVP/pilot to incorporate feedback and continually improve


Federal agencies must follow various laws and regulations, including the Paperwork Reduction Act (PRA) and the Privacy Act, when collecting information from the public. You should also know SORN (Systems of Records Notice), as well as rules around personally identifiable information (PII), and laws that relate to your specific method of feedback collection (such as Section 508 compliance for online surveys).

Federal agencies must make their electronic and information technology (EIT) accessible to people with disabilities under Section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. § 794 (d)). GSA offers a robust overview of Section 508 Law and Related Laws and Policies.


Lean Startup


“For long-term change, experiment immediately.” –Eric Ries, The Lean Startup


Purpose and Outcomes

Purpose: Lean Startup is a framework for developing user-centered solutions through small-scale tests, regular end-user engagement, and continuous iterations. The Lean Startup approach allows federal agencies to experiment with new programs and advance only the strongest and most effective ideas. While many federal agencies are too large and established to be considered lean startups, smaller government programs and new offices can use the methodology. The approach can be adapted and applied to a broad array of agency-specific missions.

One definition of Lean Startup is working in incremental steps, with feedback loops, to continually improve and quickly scale projects. The method emphasizes flexibility, pragmatism, and experimentation, which lets organizations quickly understand their stakeholders, deployment issues, costs, resources, and ultimate mission value while delivering solutions that best meet stakeholder needs.

Adopting effective Lean Startup techniques can:

  • Break the status quo and overcome obstacles with effective change management processes.
  • Build an entrepreneurial mindset and agency culture that responds to stakeholders by design.
  • Generate new ideas for improvement and build capacity for translating ideas into action.


The National Science Foundation (NSF) Innovation Corps (I-Corps) program was started in 2011 to increase the economic impact (through commercialization) of NSF-funded basic research. I-Corps provides experiential entrepreneurship training to teams of federally-funded researchers to better prepare academic researchers for commercialization of their funded research. I-Corps offers an evidence-based framework to support research commercialization. The rigorous boot camp curriculum emphasizes understanding customer or stakeholder needs and articulating a clear value proposition to implement or scale an idea, technology, product, or program. With guidance from established entrepreneurs and through a targeted curriculum, I-Corps participants learn to identify valuable product opportunities that can emerge from NSF-supported academic research. Over a period of six months, each team learns what it will take to achieve an economic impact with their particular innovation.

NSF has extensive experience with how challenging it can be to move research to commercial applications. It drew on that experience, and sought guidance from established entrepreneurs, to develop a targeted curriculum through which I-Corps participants learn to identify valuable product opportunities that can emerge from NSF-supported academic research.

NSF relied on a quick-turn internal review for proposals and limited awards to $50,000. The founding principle was to quickly provide small catalyst funds on a quarterly cycle; the near-continuous cycle allowed teams to explore the commercial potential of concepts as they emerged from the lab. Since its founding in 2011, NSF has increased the annual I-Corps program budget from $2 million to $30 million in FY2017, held 44 cohorts, and worked with 950 teams comprising 2,900 individuals through the national I-Corps program. It also created a National Innovation Network of over 70 universities that has taught a version of the I-Corps curriculum to tens of thousands of researchers. Learn more at NSF Innovation Corps: From Science Lab to Startup.


Lean Startup methods apply a collaborative, team-based approach to accelerated problem solving. They emphasize challenging assumptions and reacting quickly to new information and feedback, using hypothesis development and testing as part of customer discovery. The approach is closely aligned with human-centered design (HCD) principles, which stress empathy, ideation, prototyping, and testing ideas to validate whether they meet stakeholders’ needs.

Lean Startup applies to a range of activities, including program creation and management, procurement, and grant making.

There are four steps for Lean Startup:

Step 1: Break down your grand vision into component parts and sketch out your idea.
Step 2: Test the problem (customer discovery/stakeholder analysis and engagement).
Step 3: Test the solution with a pilot (Agile development).
Step 4: Verify or pivot.

Step 1: Break down your grand vision into component parts, and sketch out your hypothesis. There are different approaches to defining a problem. One tool is the mission model canvas, an adaptation of the business model canvas. It provides a structured process for developing a deeper understanding of the problem and of the challenges of deploying a solution for mission-based organizations.
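As an illustration, the canvas can be represented as a simple structured template your team fills in during Step 1. The field names below follow common descriptions of the mission model canvas; they are assumptions for this sketch, not an official schema from this toolkit.

```python
# Illustrative template for a mission model canvas.
# Field names follow common descriptions of the canvas and are assumptions,
# not a prescribed schema.

MISSION_MODEL_CANVAS_FIELDS = [
    "mission",                  # the grand vision, broken into component parts
    "beneficiaries",            # who you serve, in place of "customer segments"
    "value_propositions",       # the problem you solve for each beneficiary
    "buy_in_and_support",       # stakeholders whose approval or funding you need
    "deployment",               # how the solution reaches beneficiaries
    "key_activities",
    "key_resources",
    "key_partners",
    "mission_budget_and_cost",
    "mission_achievement",      # how you will measure success
]

def new_canvas():
    """Return an empty canvas ready for a team to fill in."""
    return {field: None for field in MISSION_MODEL_CANVAS_FIELDS}

# Example usage: start with the mission and fill in the rest as you learn.
canvas = new_canvas()
canvas["mission"] = "Reduce benefit-claim processing time"
```

Writing the canvas down as explicit fields, even on paper, forces the team to state which boxes are still empty hypotheses before testing begins in Step 2.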

Step 2: Test the problem (customer discovery/stakeholder analysis and engagement). Your team should engage, collaborate with, and receive feedback from your stakeholders. In addition to users, engage your colleagues in legal, policy, procurement, and similar offices, whose support you may need for funding, mandates, and user requests; you will also need their long-term support. Conduct a thorough stakeholder analysis to identify all beneficiaries, and engage with them throughout the Lean Startup process.

Step 3: Test the solution with a pilot (Agile development). Establish what a successful deployment looks like for your program, then run a small-scale pilot to develop a minimum viable product (MVP) or beta test that shows your product works.

Step 4: Verify or pivot. Once the pilot is implemented, step back to evaluate stakeholder feedback. The feedback will help you decide whether to conduct more tests of the pilot approach or move in a different direction. Ask your stakeholders whether they agree that you are solving a high-value problem, and whether the pilot/model is ready to scale up into execution and implementation.
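The four steps form a loop that repeats until stakeholders validate the solution. A minimal sketch of that verify-or-pivot loop, under stated assumptions: `run_pilot`, `score_feedback`, and `pivot` are hypothetical stand-ins for your pilot and evaluation work, and the 80% satisfaction threshold is an arbitrary example, not a federal standard.

```python
# Minimal sketch of the Lean Startup verify-or-pivot loop.
# run_pilot, score_feedback, and pivot are hypothetical stand-ins;
# the 0.8 threshold is an arbitrary example.

def lean_startup_loop(ideas, success_threshold=0.8, max_cycles=5):
    """Test ideas in small pilots; scale on success, pivot otherwise."""
    idea = ideas[0]
    for _ in range(max_cycles):
        pilot_result = run_pilot(idea)        # Step 3: small-scale MVP test
        score = score_feedback(pilot_result)  # Step 4: evaluate feedback
        if score >= success_threshold:
            return ("scale", idea)            # verified: ready to scale up
        idea = pivot(idea, ideas)             # pivot: try a new direction
    return ("stop", idea)

def run_pilot(idea):
    # Stand-in: a real pilot collects results from a small target audience.
    return {"idea": idea, "satisfied_users": 8, "total_users": 10}

def score_feedback(result):
    return result["satisfied_users"] / result["total_users"]

def pivot(current, ideas):
    # Stand-in: in practice, revisit the canvas and rework the hypothesis.
    remaining = [i for i in ideas if i != current]
    return remaining[0] if remaining else current
```

The key design point the sketch captures is that "pivot" is a normal branch of the loop, not a failure state: the loop only stops when an idea is verified or the cycle budget runs out.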

When not to use

Lean Startup may not be the right approach, depending on your problem. Lean Six Sigma may be more useful when redesigning an existing process, while human-centered design may be more appropriate when designing a product to delight users. Grand Challenges may be a better approach for agencies tackling highly ambitious goals where a minimum viable product, prototyping, and incremental steps alone will not achieve the goals.

How can we promote adoption?

You can use three main approaches to promote the Lean Startup process in your agency:

Provide training through accelerator programs

Most federal employees are not trained in entrepreneurial approaches; their work often involves large projects spanning an entire system or enterprise. Accelerator programs, such as the Health and Human Services (HHS) Ignite Accelerator, provide a space to explore and test new ideas. The training, coaching, and support they provide are similar to those of startup accelerators in the private sector.

Accelerator programs contain the following elements:

  • Small teams - There are typically 3 to 5 people on a team
  • Competitive application process - Teams must submit their idea for selection from across an agency
  • Resources - Teams may receive seed funding, tools, or other support
  • Fixed time frame - Programs typically last 3 months
  • Training boot camp - Teams participate in a 3- to 5-day boot camp at the beginning of the program, where they learn the practices of customer discovery, prototyping, and product testing
  • Coaching - Program staff check in with teams weekly to reinforce the methodologies
  • Pitch Day - Each team presents to senior leadership at the program’s end. They share what they built and learned, and pitch their idea to the judges for support to take their idea to the next level.

Promote online prototyping tools

Federal networks block many digital tools used for product prototyping for various reasons such as terms of service concerns. However, HHS offers guidance on useful tools that fully comply with federal regulations.

Encourage experimentation as a cultural norm

Many agencies’ cultures don’t match the principles of starting small, growing slowly, and interacting frequently with users. Overcoming these barriers starts at the top; a culture of trying new things in small ways is an important step toward addressing this entrenched issue.

Actions and Considerations

  • Define the problem and scope clearly and early. Adjust as necessary based on feedback.
  • Conduct customer discovery/stakeholder analysis and engagement.
  • Develop a minimum viable product (MVP) that solves a core user need as soon as possible, and no later than three months after the project begins, using a beta or test period if needed.
  • Implement a version control system.
  • Use peer review to ensure quality.
  • Run usability tests frequently to see how well the service works and identify improvements.
  • Communicate closely using techniques such as launch meetings, war rooms, daily standups, and team chat tools.
  • Keep delivery teams small and focused.
  • Limit organizational layers that separate these teams from the business owners.
  • Release features and improvements multiple times each month.
  • Create a prioritized list of features and bugs, also known as the feature backlog and bug backlog.
  • Give the entire project team access to the issue tracker and version control system.
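Several items in the list above (the feature backlog, the bug backlog, and prioritization) can be represented with a very small data structure. This sketch is illustrative only; the fields and priority scheme are assumptions, not the schema of any particular issue tracker.

```python
# Illustrative sketch of prioritized feature and bug backlogs.
# Fields and the 1-is-highest priority scheme are assumptions,
# not a prescribed tracker schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BacklogItem:
    title: str
    kind: str        # "feature" or "bug"
    priority: int    # 1 = highest priority

@dataclass
class Backlog:
    items: List[BacklogItem] = field(default_factory=list)

    def add(self, title: str, kind: str, priority: int) -> None:
        self.items.append(BacklogItem(title, kind, priority))

    def next_up(self, kind: Optional[str] = None) -> List[BacklogItem]:
        """Return items, highest priority first, optionally filtered by kind."""
        selected = [i for i in self.items if kind is None or i.kind == kind]
        return sorted(selected, key=lambda i: i.priority)

# Example usage: one shared, prioritized list visible to the whole team.
backlog = Backlog()
backlog.add("Plain-language error messages", "feature", 2)
backlog.add("Form times out mid-session", "bug", 1)
backlog.add("Mobile layout", "feature", 3)
```

Even this toy structure makes the practice concrete: one shared list, every item ranked, so the delivery team always knows what to release in the next multiple-times-a-month cycle.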


Federal agencies must follow various laws and regulations, including the Paperwork Reduction Act (PRA) and the Privacy Act, when collecting information from the public. You should also know SORN (Systems of Records Notice), as well as rules around personally identifiable information (PII), and laws that relate to your specific method of feedback collection (such as Section 508 compliance for online surveys).

Federal agencies must make their electronic and information technology (EIT) accessible to people with disabilities under Section 508 of the Rehabilitation Act of 1973, as amended (29 U.S.C. § 794 (d)). GSA offers a robust overview of Section 508 Law and Related Laws and Policies.


How to Get Connected

Better Government Playbook


Six key guiding principles or “plays” for public sector innovation

Better Government Stories

Case Studies

In-depth case studies of where innovation is happening and working in the government

Join the Better Government Movement


Opportunities to join the Community of Practice, Innovation Ambassadors, and upcoming events