Eight Common Challenges to Scaling Innovation

Implementing an innovative approach within the federal government takes relentlessness, stamina, and strategy. It can be incredibly lonely. You are often your own best champion. It can feel impossible, like being the underdog trying to win a sporting match. But after all the frustrations and setbacks, winning that first match is overwhelmingly satisfying.

But for the change agents in government who want to make innovative approaches more routine, winning the first match is just the beginning. The scaling challenge begins when you try to win over and over, and when you try to get more people to join your team.

I have spent my entire career working to increase the use of innovative approaches to advance federal agency missions. This has included designing and implementing dozens of specific projects, as well as building communities and capacity within agencies and across the federal government. Over the last 10 years, I’ve had the opportunity to work on a wide variety of approaches at several levels, including:

  • Program/project level: Online dialogues (both externally with the public and internally with employees), participatory technology assessments/citizen deliberations, grand challenges, prizes and challenges, and user-centered design

  • Interagency and intra-agency community building level: Design thinking (as one of the original organizers of Design Thinking DC), prizes and challenges, and the maker movement (as the co-chair of the interagency working group on making)

  • Government-wide capacity building level: Prizes and challenges, and citizen science and crowdsourcing

The Obama Administration grouped many of these approaches into an “innovation toolkit” in its final Strategy for American Innovation (pages 109-110). I have struggled firsthand with the challenges of scaling these approaches. Conventional wisdom when experimenting with new approaches is to “start small” with a series of pilots to build an evidence base before scaling. That has largely been the experience of the federal employees and contractors who have been championing these approaches at their agencies. As a former boss of mine described it, each new project feels like it requires “hand-to-hand combat” to pull off. To borrow from the technology adoption lifecycle, innovators and early adopters are the ones most likely to fight these fights. But to expand adoption to the early majority and late majority, the level of effort required to understand and execute each approach must be significantly reduced.

Early pilots alone are not all that is required to support the scaling of innovative approaches. Despite hundreds of great examples of these approaches having impact across government, many of them still struggle to scale. This is further complicated because some approaches are better supported and more mature than others. For example, I would argue that approaches like open data and prizes, which were adopted by the federal government earlier this century, are much closer to reaching a scaling tipping point than newer approaches, such as participatory technology assessment. The more mature approaches share some common “ingredients” that have been critical to getting them to that point; the challenges to scale are not unique. Thus, when charging forward to scale a newer approach, champions would benefit from learning from the experience of the innovative approaches that have come before them.

Challenges to Scale:

There are eight critical elements to scaling innovative approaches across the federal government, and they are common across approaches rather than unique to any one of them.

  1. Legal and policy frameworks: Even without an explicit legal authority, policy guidance on existing authorities can have a great impact on initial scaling efforts.

  2. Shared infrastructure and common platforms: It is not cost-effective for each agency to have to recreate similar capabilities to support each innovative approach; shared services for some functions can reduce barriers to entry and increase efficiency.

  3. Emergence and sustainability of communities of practice: People are the most important part of developing and sharing the knowledge behind innovative approaches, and their individual energy can be channeled for higher impact when they are intentionally connected with shared purpose.

  4. Knowledge capture and sharing: Toolkits increase the impact of interactions between experts and new learners by making basic knowledge more easily discoverable.

  5. Budget: Finding ways to build flexibility into programs’ annual budget requests to allow for the funding of innovative approaches is critical to unlocking more resources that are owned by the programs themselves.

  6. Agency processes: Spending time modernizing the “un-sexy” protocols owned by procurement, human resources, and other agency mission-support functions might be the single most important door to unlock in order to scale new approaches.

  7. Reporting requirements: Creating centralized mechanisms (whether required or voluntary) for reporting, and being disciplined in collecting quality reports that describe results at the project level, builds the evidence base for scaling.

  8. External assessments and impact studies: Federal agencies should also support independent assessments of their use of innovative approaches in order to capture unbiased impact analysis and improve practice based on evidence.

Here’s a bit more context on why each of these elements is important to scaling innovation:

  1. Legal and policy frameworks: Without a clear legal basis for conducting a particular type of approach, the road to implementation can be murky. Explicit legal authority is not necessarily required for an approach to be used, but it can be extremely helpful for scaling. For example, prizes have been in use by the federal government since the early 2000s. Early innovators figured out how to implement prizes and challenges under existing authorities. In March 2010, the Office of Management and Budget (OMB) summarized those existing legal authorities in a policy memo that helped empower other innovators who were trying to find a legal path to implementation. Building on this foundation of existing law and policy guidance, Congress passed the America COMPETES Reauthorization Act in December 2010, providing all federal agencies broad and explicit authority to conduct prize competitions. Since the passage of the COMPETES prize authority, the use of prize competitions has skyrocketed across the federal government, thanks to the much clearer legal path to implementation.

  2. Shared infrastructure and common platforms: Programs provided by the General Services Administration (GSA) have been critical to scaling many innovative efforts to date. These programs provide a focal point for federal efforts on an approach-by-approach basis. Data.gov, launched in 2009, now lists over 170,000 open datasets. Upwards of 100 agencies have used Challenge.gov, a no-cost platform, since its debut in September 2010, launching more than 740 challenges with prizes totaling over $250 million. These programs are more than just websites for listing datasets and challenges. They provide shared services and infrastructure at no cost to agencies. These no-cost shared services allow innovators at agencies to bootstrap their early pilots without having to completely reinvent the wheel at every agency. Programs like Data.gov and Challenge.gov employ small teams of full-time federal employees who provide critical government-wide policy support, training, community of practice management, metrics, and public outreach for the entire federal community.

  3. Emergence and sustainability of communities of practice: A critical part of enabling the use of new approaches is supporting the people who are using them. Being an innovator within government can be lonely, and connecting like-minded people to each other is critical not only to sustaining their energy but also to attracting new converts. A great example is the Federal Community of Practice for Citizen Science and Crowdsourcing (CCS). The CCS community was founded in 2012. Five people came to the first meeting; the group has since expanded to almost 300 members. This grassroots group of like-minded people championing a new approach to public participation in science and technology had remarkable impact in four short years. Working with the Office of Science and Technology Policy (OSTP), the General Services Administration (GSA), and the Wilson Center, this effort culminated in the 2016 launch of citizenscience.gov, a new central hub for citizen science and crowdsourcing initiatives in the public sector. The momentum created in those four years also contributed to Congress authorizing the explicit use of citizen science and crowdsourcing approaches (PL 114-329) in 2017. It is also important to recognize that robust communities of practice may exist outside the federal government for certain approaches. For example, the Citizen Science Association and its members are a critical source of knowledge and support for the federal citizen science and crowdsourcing community. Intentionally connecting internal and external communities, recognizing that the federal government does not need to reinvent the wheel, can create mutually beneficial outcomes for all involved.

  4. Knowledge capture and sharing: For many years, as I was encouraging people to use prizes, I noted that there was no “prizes for dummies” book, which meant that learning largely happened when one person experienced with running a prize personally mentored another who was not. This approach to knowledge sharing is inherently limiting: expert time is precious and limited. Recognizing that knowledge sharing to date had been largely dependent on expert time and community of practice meetings, the Second National Action Plan for Open Government committed the US to developing an Open Innovation Toolkit to document best practices, case studies, and step-by-step instructions for conducting open innovation approaches. The first half of this toolkit, for citizen science and crowdsourcing, was launched in September 2015. The second half, for prizes and challenges, was launched in October 2016. Both of these toolkits were developed by federal employees experienced with these approaches, for federal employees who are not (yet). Toolkits are not only a collection of resources and practices; they are tools that the people working on these approaches (communities of practice and shared service providers like GSA) can use to help new learners “help themselves” with introductory content, so that experts only need to be engaged on more nuanced and complex issues.

  5. Budget: The formulation of new project ideas can be hindered immediately by a critical question of budget and resources: who pays for innovative approaches? The program that will benefit from the innovation (and thus may need to work the project into its budget request two years in advance)? Or a central innovation group with a special budget for those types of projects? Will programs ever budget to pay for these activities themselves if they see an external pot of funds from a central innovation group as the source of these resources? Since funds are not often appropriated specifically for these purposes, finding the resources to support innovative approaches is a recurring barrier to scale.

  6. Agency processes: The standard protocols and processes for program management in federal agencies are a huge barrier to scaling innovative approaches. Many innovative approaches require program and project managers to think fundamentally differently about what their problem is, who could possibly solve it, and what success looks like. They require a much heavier focus on problem definition and user research. They sometimes require different “make-buy-partner” decisions and creatively structured contracting, granting, or prize mechanisms. Because this way of thinking is not standard protocol, it is not an easy path. In order to scale its innovative approach, the US Digital Service (USDS), a team that uses technology and design to deliver better services to Americans within a number of federal agencies, has confronted institutional process barriers head-on. The way many Information Technology (IT) contracts are written makes agile software development and user-centered design nearly impossible. USDS has attempted to confront this institutional barrier by addressing procurement misconceptions across the government through the development of the TechFAR and acquisition training.

  7. Reporting requirements: Earlier I stated that the use of prizes has skyrocketed since the passage of the COMPETES authority. But how do we know that the use of prize competitions has skyrocketed? Because the law also required an annual report to OSTP documenting the use of these approaches. Thanks to this requirement, we now have rich narratives and a qualitative dataset for hundreds of prizes (2011, 2012, 2013, 2014, 2015) that not only explore the impact of each individual prize, but also allow the study of prize practice more generally in order to improve use. Without a reporting requirement, capturing these stories can be like pulling teeth, since the stories don’t write themselves.

  8. External assessments and impact studies: A healthy interest from researchers in exploring the methods and impacts of innovative approaches is critical to improving practice and understanding how best to use these approaches within the government context. Independent assessments are best conducted once there is a rich dataset to analyze and when the recommendations from those assessments are used to improve future practice. Some innovative approaches, like citizen science, already enjoy healthy interest from the academic community in studying questions related to the “science of citizen science”. Other approaches, such as prize competitions, may have rich datasets but have not yet attracted widespread academic interest. Academic interest in prizes started with a relatively small set of leaders (like Karim Lakhani at Harvard) and has grown in recent years, but it could still be expanded substantially.

Adding to the Toolkit:

The innovation toolkit will, without question, grow. Fifteen years ago, many could not have imagined crowdsourcing and design thinking being applied to government services or deployed widely within the federal government. New practices that we can’t imagine today will emerge outside of government, and we will have a responsibility to experiment with them, and scale them, as appropriate. Thus, efforts to understand how to scale new approaches within the federal context will continue to be applicable, just to a whole new set of approaches.

For example, participatory technology assessment, a practice of participatory citizen policymaking, is a relatively new innovation approach in the United States. I also personally find it to be an exciting approach with many possible applications within the federal government. Citizen deliberations complement other, more traditional sources of input to government decision making by increasing the citizen voice in socio-scientific policymaking. The government is used to asking for industry, academic, and association input through mechanisms like requests for information (RFIs) and the rulemaking process. Individual citizens, who may have a stake in a decision, are not regularly combing through the Federal Register and thus often do not have a voice in issues that could impact them. Citizen deliberations provide a method for individual citizens to be informed and then provide their input, which increases the quality of their participation well beyond tools like surveys and online polls, which often do not objectively inform a participant about an issue before seeking their input. Citizen input gained through deliberations complements the input gained from other sources and can highlight values and complexity that might not otherwise have been considered.

Citizen deliberations bring different perspectives and values to the table in scientific and technical decision making, opening the solution space for system-level problem solving. This is a counter-cultural and innovative approach for government, so its champions should expect to encounter many of the same barriers described in this piece. The early adopters of citizen deliberation approaches, as with any newly emerging innovative approach, would be wise to work systematically to address the eight common challenges to scale described above. Our ability to innovate within government will continue to depend on it.

Note: Last month I was invited to participate in a panel at the American Association for the Advancement of Science (AAAS) 2017 Annual Meeting entitled “Adding the Citizen Voice: Participatory Socio-Scientific Policymaking”. Afterwards, several members of the panel and the audience asked me to share my remarks, which focused on my observations about the common challenges to scaling new approaches to innovation within the federal government. This piece is an adapted version of those remarks.

Also, while I have experienced many of these challenges firsthand, the identification of many of these common challenges should be attributed to my former boss, Tom Kalil, the former deputy director for technology and innovation at the Office of Science and Technology Policy.