The Innovative State
To create government that is neither bigger nor smaller but better at solving problems effectively and legitimately, agencies need to use big data and the associated technologies of machine learning and predictive analytics. Such data-analytical approaches will help agencies understand the problems they are addressing more empirically and devise more responsive policies and services. These data-processing tools can also make citizen engagement more efficient, helping agencies to make sense of large quantities of information and to invite meaningful participation from more diverse audiences, including people who have never before participated in our democracy. To take advantage of the power of new technologies for governing, however, the federal government needs, first and foremost, to invest in training public servants to work differently and to prepare them for the future of work in a new technological age.
During the COVID-19 pandemic, I have had the privilege to lead a team of engineers, designers, and policy professionals in the New Jersey Office of Innovation, a recently created administrative unit in the state’s government. When the pandemic hit, the Innovation Office team used technology and data, and unprecedented levels of collaboration across agencies and with the private sector, to respond to the crisis.
Working with the nonprofit Federation of American Scientists, for example, we built a website and accompanying (Amazon) Alexa skill to enable the public to pose questions about the virus to more than six hundred participating scientists and receive rapid, well-researched responses.1
A private sector company lent us the technology and the talent to create a website, covid19.nj.gov, in three days. The site has been visited more than seventy-five million times since its launch in March 2020.
Even more challenging to create than the technology was the content. The Innovation Office therefore collaborated with Princeton, Rutgers, Montclair, Rowan, and the state’s other universities to create an editorial team to translate legalese from government agencies into plain English and to knit together disparate sources of information into a single website.
A professor of data science at New York University assembled a team to produce predictive analytics about the spread of the virus. These data enabled the governor and other senior leaders to make better decisions about the response. When the data science team could not determine the number of deaths by race because the testing labs were not providing that information, the Department of Human Services and the Department of Health shared key administrative data with one another, enabling us to answer the question quickly. Such sharing would normally take a year (or never happen); we did it in a day.
In three days, the team also produced the nation’s first state jobs site to list available positions in essential businesses and thereby mitigate the crisis of unemployment. We posted over fifty thousand jobs across a broad range of businesses and salary levels. We launched a site that was far from perfect and improved it as we went along, knowing it was more important to risk failure than to delay action. Our team also worked with the federal government’s Digital Service, a unit within the Executive Office of the President, to fix the state’s process of certifying for unemployment benefits.2 We also worked with the nonprofit Code for America to digitize the application process for food benefits, a paper-based process that previously required applicants to come into a government office to demonstrate their income.
By working more collaboratively and taking advantage of new technologies of information collection, analysis, and visualization, we were able to demonstrate how a bureaucracy can be nimble and effective, rather than lumbering and unresponsive.
Changing how we work in government is imperative. The COVID-19 crisis has revealed how ill-equipped the administrative state is to deal with novel challenges. From delivering adequate testing and personal protective equipment (PPE) to expanding online education equitably, in too many areas the state has struggled to respond.
Perhaps it is telling that, in the face of the unprecedented COVID crisis, many public leaders chose to hire the management consultancy McKinsey and outsource critical state responses despite the high costs.3 In the first four months of the pandemic alone, public institutions in the United States contracted with McKinsey to the tune of $100 million, reflecting, at best, a perceived lack of confidence in the skills of bureaucracies and, at worst, a hollowing out of competence in the administrative state.4 Either way, there is an urgent need for new approaches to how government operates in response to the crises hiding in plain sight, from the public health emergency to an unprecedented economic depression. In the United States in 2020, joblessness reached numbers not seen since the Great Depression. The International Monetary Fund (IMF) has estimated that the global economy shrank by 3.5 percent in 2020, pushing many of those who could least afford it deeper into poverty.5
While the economy is showing signs of bouncing back and vaccines are helping to alleviate the public health emergency, the crisis of confidence in government is chronic, not acute, because the challenges we face are not going away. Inequality persists. Pre-COVID, the average worker had not seen her wages increase since the 1970s, while the average pretax income of the top 10 percent of American earners has doubled since 1980, and that of the top 0.001 percent rose sevenfold.6 Whereas life expectancy in the United States continuously increased for most of the past sixty years, it has been decreasing since 2014.7 For the poor, life expectancy is dramatically lower.8 Rich American men now live fifteen years longer than their poorer compatriots.9 Life expectancy for Black men is far below every other demographic.10 On top of these and countless other challenges, there is the looming and existential threat of climate change.
It is no wonder that most Americans today have lost confidence in government, especially the federal government. According to Pew Research Center, only 2 percent of Americans today say they can trust the government in Washington to do what is right “just about always,” while 18 percent trust the federal government “most of the time.”11 Political scientist Paul Light has asserted that “federal failures have become so common that they are less of a shock to the public than an expectation. The question is no longer if government will fail every few months, but where. And the answer is ‘anywhere at all.’”12
If embraced, the right technologies can create new opportunities for improving the efficacy and agility–and, when used well, the legitimacy–of the administrative state. The technologies of big data as well as those engagement tools that enable individual and group communication and collaboration across a distance–what we might call the technologies of collective intelligence–could enable government agencies to understand problems with greater precision and in conversation with those most affected.
Thanks to the ubiquitous presence of data-gathering sensors in our lives, the technologies of big data make it possible for bureaucrats to gather more: more real-time and more granular information. Instead of speculating about the cause of accidents, for example, a city now has exact information generated by the sensors on traffic lights, road cameras, and even sensors built into the pavement revealing exactly what kind of accidents are happening, when they occur, and which vehicles they involve. Data-analytical tools like machine learning make it possible for machines to ingest and make sense of large quantities of data. They can help the administrative state analyze the new glut of information.
Agencies have the opportunity to get smarter from people–their experiences and expertise–as well as from sensors, and to obtain more diverse and equitable perspectives and insights. These combinations of quantitative and qualitative approaches tell agency officials more about why a problem is occurring and invite a broader audience to propose solutions.
The administrative agencies of government at every level have always had far greater access to information than other branches of government.13 This is why legal scholar Adrian Vermeule refers to the administrative state as the “sensory organ” of government. Its agencies and large staffs are designed to “gather, examine and cull information” and make greater sense of on-the-ground conditions.14 Technology in every era has enabled administrative agencies to engage in “seeing like a state,” in the famous phrase of political scientist James Scott (and the title of his well-known book). Whereas Scott was concerned about the tendency of those who govern toward reductive simplification due, in large part, to simplistic measurement tools, entrepreneurial bureaucrats today have the opportunity to use big data and human insight to understand a problem as ordinary people experience it, and to collaboratively design more effective solutions tailored to achieving the public’s desired outcomes.
If we embrace these diverse sources of external knowledge, the epistemic capacity of the state has the potential to increase dramatically. In fact, in 2018, Congress passed the Foundations for Evidence-Based Policymaking Act, requiring agencies to make better use of their data to measure and improve their performance and policy-making.15 But, on the whole, too many administrative agencies are still falling behind in their use of new technologies and innovative ways of working. There is an ongoing information asymmetry in that regulators lack access to the data, information, and insight they need to safeguard the public interest, deliver services, identify violations, and enforce the law efficiently, especially vis-à-vis those seeking to evade liability. They also lack the practices for solving problems collaboratively. For agencies to engage in transformative policy-making, they need to exploit the tools available for creating a “smarter” and more equitable state.
Big data refers to extremely large data sets that are too big to be stored or processed using traditional means. Today, new collection, storage, transmission, visualization, and analytic techniques have triggered a massive proliferation of data sets collected by public and private entities about everything from health and wellness to phone and purchase records. Such data are powerful raw materials for problem-solving.
Take a recent example from New Orleans, which has one of the highest murder rates of any city in the nation. Determined to change this dismal fact, then Mayor Mitch Landrieu in 2012 created a unit in city government called the Innovation Team, or i-Team. Using more than fifty years of data grouped by neighborhood and by rates of murder, crime, educational attainment, unemployment, and recidivism, the team uncovered a significant correlation between unemployment and violent crime (and thus recidivism). The data showed that a small and identifiable set of people in a few neighborhoods committed a majority of murders, usually as the result of petty disputes.16
That knowledge produced significant change. Municipal agencies instituted programs to train and hire ex-offenders in an effort to reduce the likelihood of reoffending among those who had been incarcerated.17 Strategies in the NOLA for Life program included social services and job opportunities as well as threats of prosecution, using data to determine which approach was appropriate for which individual. In the i-Team’s first year, New Orleans’ murder rate dropped 19 percent. Two years in, the rate had dropped over 25 percent from the 2012 high. New Orleans’ murder rates in 2018 and 2019, though still among the highest in the country, were at their lowest level in almost fifty years.18
There has been a significant push in recent years to increase the amount of data that administrative agencies collect from the entities they regulate to enable more targeted regulatory enforcement. In 2010, for example, the Occupational Safety and Health Administration (OSHA) required certain employers to submit death and injury data electronically to Washington and, as a result, OSHA was able to build a dashboard showing where injuries were occurring. (This data collection rule was scrapped by the Trump administration in 2019, though on day one of his administration, President Biden reversed course again.)19 In July 2010, Congress passed and President Obama signed the Dodd-Frank Wall Street Reform and Consumer Protection Act, which among other things created the Consumer Financial Protection Bureau (CFPB). The CFPB created a public complaint database in an effort to pressure businesses to treat customers better. Like OSHA, this agency also collected more data in machine-readable format to be able to create the Student Debt Repayment Assistant, an online tool to help borrowers navigate student loan repayment options.20 Similarly, in 2015, the CFPB issued a rule to expand data collection requirements under the Home Mortgage Disclosure Act to help protect borrowers. (This rule, too, was effectively gutted by the Trump administration, which eliminated penalties for noncompliance. Joe Biden campaigned on a commitment to undo Trump’s actions.)21
Many describe what makes big data big as the “three Vs”: volume, velocity, and variety. First, the term reflects a huge rise in data volume. In 2015, 12 zettabytes–that’s 12×10²¹ bytes of data–were created worldwide. By 2025, that number is forecast to reach 163 zettabytes. For comparison, the entire Library of Congress is only 15 terabytes: 1 zettabyte is 1 billion terabytes. Second, data velocity–the speed at which data are generated, analyzed, and used–is increasing. Today, data are generated in near real-time, created by humans through myriad everyday activities like making a purchase with a credit card, logging onto social media, or adjusting a thermostat, and by machines through radio-frequency identification (RFID) and sensor data. Many of these data are “designed data,” collected for statistical and analytical purposes. But large quantities of data are also “found data” (also known as “data exhaust”), collected for something other than research but still susceptible to analysis.22 For example, the JPMorgan Chase Institute uses financial services data, including credit card purchase records, to analyze and comment on the economic future of online platforms such as Uber and Lyft.23 Third, big data reflects accumulating data variety. Data come in many formats, including numbers, text, images, voice, and video. Some data are organized in traditional databases with predefined fields such as phone numbers, zip codes, and credit card numbers. However, more and more data are unstructured: they do not come preorganized in traditional spreadsheet-style formats but helter-skelter as Twitter postings, videos, coordinates, and so forth. Nevertheless, contemporary analytical methods make it possible to search, sort, and spot patterns even in unstructured data.
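For readers who want to check the scale comparisons above, the arithmetic is simple; the brief sketch below uses decimal units and the 15-terabyte Library of Congress figure cited in the text.

```python
# Back-of-the-envelope check of the scale comparisons above (decimal units).
terabyte = 10**12                       # bytes
zettabyte = 10**21                      # bytes
library_of_congress = 15 * terabyte     # the 15-terabyte figure cited above

print(zettabyte // terabyte)                  # 1,000,000,000: terabytes per zettabyte
print(12 * zettabyte // library_of_congress)  # 2015's data, roughly 800 million Libraries of Congress
```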
The value of all this data collection for the administrative state is in the ability to understand past, present, and future actions.24
With the right data-analytical skills–namely, an understanding of how to formulate a hypothesis, identify and collect the right data, and use that data to test the hypothesis–policy-makers can understand the past performance of public policies and services, evaluating both their efficiency and their impact on different populations. Economists Raj Chetty, Nathaniel Hendren, and Lawrence Katz studied twenty years of income records from families that moved to new neighborhoods using the Housing Choice Voucher Program. They discovered that the children in these families went on to earn significantly higher incomes, complete more education, and become single parents less often than peers whose families stayed in their old neighborhoods. Citing this research, the Department of Housing and Urban Development overhauled the formula that it had used for four decades to calculate rental assistance, and increased opportunities for families to move from high-poverty areas to low-poverty areas.25
Larger quantities of data also enable the delivery of more-tailored interventions in the present by helping governments match people to benefits to which they are entitled or to assistance they need. For example, Louisiana’s Department of Health uses Supplemental Nutrition Assistance Program (SNAP) enrollment data to sign people up for health benefits. Of nearly 900,000 SNAP recipients, Louisiana has enrolled 105,000 in Medicaid without a separate application process, relying on a four-question, yes-or-no survey to determine eligibility. This approach has helped some of the state’s poorest residents get access to benefits, while saving the state about $1.5 million in administrative costs.26
Better access to data even helps with forecasting future outcomes, such as who is likely to be a frequent visitor to the emergency room, thereby enabling more targeted interventions and treatment. During the COVID-19 pandemic, many jurisdictions started using “symptom trackers,” simple software tools to enable people to report their symptoms to public health officials. (In New Jersey, we created our own, and half a million participants used it to report data and obtain information.) Especially in the absence of testing data, symptom trackers provided an early warning mechanism, signaling where people were complaining of coughs and fevers. Symptom tracker data enabled emergency officials to anticipate the need for equipment, supplies, and hospital beds in the not too distant future.
Big data also creates the opportunity for regulators to spot mistakes, outliers, and rare events and make decisions based on evidence of on-the-ground conditions. For example, rat infestations in large cities are difficult to tackle because rats travel in virtually unpredictable ways. Chicago’s rat problem peaked in 2011 when it received more than twenty-five thousand rodent complaints via 311 calls (notifications from residents about problems needing attention). This call center information generated a novel database that offered a deeper understanding of the day-to-day patterns of rat infestations. In search of a new strategy, the city partnered with Carnegie Mellon University’s Event and Pattern Detection Lab to gather twelve years of 311 citizen complaint data, including information on rat sightings along with related factors such as overflowing trash bins, food poisoning cases, tree debris, and building vacancies. It is important to point out that these data are not gathered by regulators but by citizens calling the city’s hotline. The 311 system “constructs a collaborative relationship between city residents and government operations,” writes public affairs scholar Daniel T. O’Brien. “Residents act as the ‘eyes and ears of the city,’ reporting problems that they observe in their daily movements.”27
From cuneiform to card catalogs, governments have always recorded data. But the proliferation of big data creates hopeful new opportunities for innovation in the administrative state. Big data makes it possible for agencies to increase their epistemic and sensory capacity and develop a more detailed and accurate understanding of on-the-ground conditions with the engagement of a more diverse public.
These data-analytical techniques have made possible an expanded toolkit for change and new kinds of solutions from regulatory agencies, such as “smart disclosure” tools that aim to give consumers more complete data about the cost, quality, and safety of the products and services they buy, or the health, environmental, and labor practices of manufacturers and service providers.28 For example, the Department of Education’s College Scorecard gives students and parents information about the real costs, financial aid options, graduation rates, and postgraduation salaries and employment opportunities of universities. In New Jersey, we are building Data for the American Dream, a similar initiative to provide transparency about vocational training programs to job seekers, and especially unemployed job seekers, to help them make more-informed decisions about cosmetology, welding, and green energy training programs, for example. Using anonymized government-collected tax data, this “training explorer” will be able to show whether those who took a given training course saw their income go up or down.
To be sure, as legal scholar Rory van Loo has pointed out, there can be drawbacks in the use of smart disclosure tools like Training Explorer, College Scorecard, the Affordable Care Act’s health insurance exchange websites, or the CFPB’s mortgage rate checker tool: when under-resourced public agencies build worse websites than Silicon Valley, consumers suffer. At the same time, outsourcing the development of these tools to the private sector has its own problems. The IRS contracted with Intuit to provide a free version of TurboTax to low-income residents for their tax preparation, but the company has allegedly made that version as bad as possible to pressure people to buy its expensive products.29
Machine learning (a subset of artificial intelligence, or AI) describes a set of analytical techniques that use big data to make sense of past patterns and predict future occurrences, and it could radically transform the ability of agencies to deliver services and make informed policies.30 Machine learning teaches computers to learn using training data sets. Familiar digital assistants like Siri, Alexa, and Google Home are all powered by machine learning: they learn from earlier questions to understand and answer new ones. In other words, with machine learning, a computer learns by example rather than through explicit programming instructions, opening up a vast array of new possibilities for administrative interventions.
Machine learning takes many forms. The most common, “supervised machine learning,” is akin to how a teacher trains a child in arithmetic: the answers are known, and the teacher shows the child how to arrive at them. Similarly, in supervised machine learning, the outputs are known and are used to develop an algorithm that reaches those conclusions from the inputs. Using large quantities of labeled data (and there is an ever-expanding number of labeled data sets available on the Internet), machine learning can uncover patterns and inductively create general rules. For example, MIT researchers analyzed the cough recordings of more than five thousand people and used that data set to develop an algorithm that can help diagnose COVID-19, and researchers at Stanford used a training data set of images of cancerous moles to devise a tool that could diagnose skin cancer.31 (To be clear, machine learning based on large-scale raw data sets, while potentially an improvement over human diagnostics in some cases, is still error prone.)
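For readers who want to see what “learning by example” looks like in practice, the following is a minimal sketch in Python using the open-source scikit-learn library. The data are synthetic and the task is generic; this is an illustration of supervised learning, not the method used in the MIT or Stanford studies.

```python
# A minimal sketch of supervised machine learning on synthetic, labeled data.
# The model learns a general rule from examples rather than from explicit
# programming instructions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "labeled" data: each row is an example, each column a measured
# feature, and y holds the known answer (the label) for that example.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out some examples to check whether the learned rule generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "Training" means fitting a model so its predictions match the known labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The learned rule can now label examples it has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```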
The learning in machine learning occurs when the machine turns the data into a model. Models make us smarter, writes political scientist Scott Page. “Without models, people suffer from a laundry list of cognitive shortcomings: we overweight recent events, we assign probabilities based on reasonableness, and we ignore base rates. . . . With models, we clarify assumptions and think logically. From power laws to Markov models, such heuristics give us simple ways to test our hypotheses.”32 Increasingly, there are also techniques for unsupervised machine learning that can find patterns in large quantities of unstructured data.
Machine learning could transform the workings of the public sector. It can make it possible to target scarce enforcement resources more effectively. For example, Chicago has more than fifteen thousand food establishments, but only three dozen inspectors. Working in collaboration with Carnegie Mellon University, Chicago’s city government used its data on restaurant inspections and a wide variety of other data to create an algorithm to predict food-safety violations. This project increased the effectiveness of its inspections by 25 percent. Chile’s Labor Inspectorate is applying machine learning to analyze past accidents and thereby anticipate workplace safety violations to make inspections more efficient and targeted. The Department of Education is exploring how machine learning and other technologies could be used to bring down the cost and improve the quality of creating learning assessments by automating the process of creating questions, scoring responses, and obtaining insights.33
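A hypothetical sketch of how such targeting could work follows: a model trained on past inspection outcomes scores establishments that have not yet been inspected, and inspectors visit the highest-risk sites first. The features, data, and model here are invented for illustration; this is not Chicago’s or Chile’s actual system.

```python
# A minimal sketch of using predicted risk to prioritize scarce inspections
# (hypothetical features and outcomes, chosen only for illustration).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Invented features: days since last inspection, past violation count, nearby 311 complaints.
X_history = rng.random((500, 3))
# Invented past outcomes: whether a violation was found at each past inspection.
y_history = (X_history @ [0.5, 1.0, 0.8] + rng.normal(0, 0.2, 500)) > 1.1

model = GradientBoostingClassifier().fit(X_history, y_history)

X_open = rng.random((50, 3))                 # establishments awaiting inspection
risk = model.predict_proba(X_open)[:, 1]     # estimated probability of a violation
priority_order = np.argsort(risk)[::-1]      # inspect the riskiest sites first
print(priority_order[:10])
```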
By making it possible to sort the extraneous chaff from the informational wheat, machine learning could enable agencies to deliver both new and better services to the public. But it can also enable agencies to engage a broader public in decision-making by helping agencies to make public engagement more efficient. The public has long had a right to comment on any proposed agency regulatory rulemaking thanks to the Administrative Procedure Act of 1946. Although many of the three or four thousand rulemakings agencies publish annually receive only a handful of comments, thanks to the ease of digital commenting, some receive voluminous responses. In 2017, when the Federal Communications Commission sought to repeal an earlier Obama-era “net neutrality” rule requiring Internet service providers to transmit all content at the same speeds and not discriminate in favor of one content provider or another, the agency received twenty-two million comments.34 In 2007, the Fish and Wildlife Service received more than 640,000 email comments on whether to list the polar bear as a threatened species.35
While, in principle, it is good for democracy when more people participate in rulemaking, the reality is that the large volume of comments–many of which are “written” by software algorithms or are the result of electronic mass comment campaigns–also makes it hard for agencies to read or use the material and renders the public’s engagement mere “democracy theater.” But if agencies used machine learning to summarize and analyze comments, they could better understand public participation and increase the epistemic value of engagement. Tools already exist for rapid de-duplication of identical comments and summarization of unique comments.36 Journalists took advantage of such tools, for example, when they needed to sift rapidly through the 13.4 million documents that made up the Paradise Papers.37 Google and Microsoft have both announced that they have built systems that can summarize articles.38
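De-duplication, at least of identical and trivially altered form letters, is straightforward to sketch. The example below, with invented comments, collapses copies by hashing a normalized version of each submission; production tools for rulemaking analysis are more sophisticated, adding near-duplicate detection and summarization.

```python
# A minimal sketch of comment de-duplication (hypothetical comments).
import hashlib
import re
from collections import defaultdict

comments = [
    "Please protect net neutrality.",
    "please  protect net neutrality!",   # same form letter, different spacing/case
    "I rely on a free and open internet for my small business.",
]

def normalize(text: str) -> str:
    # Lowercase, strip punctuation, and collapse whitespace so trivially
    # altered copies of a form letter hash to the same value.
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

groups = defaultdict(list)
for comment in comments:
    digest = hashlib.sha256(normalize(comment).encode()).hexdigest()
    groups[digest].append(comment)

for digest, members in groups.items():
    print(f"{len(members)} copy(ies): {members[0]!r}")
```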
While not yet in widespread use in federal agencies, data-analytical techniques have begun to be used to make sense of citizen input in some contexts. A recent State Department project offers a simple illustration of how agencies could take a more effective approach to making sense of rulemaking comments using a combination of artificial intelligence from machines and collective intelligence (CI) from humans. In 2016, the State Department sought to improve its passport application and renewal process in anticipation of a surge in the volume of applications and renewals. The Department ran an online public engagement process to ask people what improvements they wanted. It received almost one thousand comments and engaged an Israeli-American software company to help it make rapid sense of the submissions.39
First, commenters were asked to highlight the key points of their answers. When a commenter declined to do so, the platform invited other users to highlight what they saw as that commenter’s core ideas. Then the company applied a text-mining algorithm that scanned the highlighted text for responses containing similar keywords in order to create summaries, or what the company calls “highlights.” Not surprisingly, the public was clamoring for a more convenient application process.
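One simple way to approximate this kind of keyword-based grouping, though not the company’s proprietary algorithm, is to convert each highlighted passage into a weighted keyword vector and cluster passages that share vocabulary. Here is a minimal sketch with invented passport comments:

```python
# A minimal sketch of grouping highlighted comment text by keyword similarity
# (hypothetical snippets; not the vendor's actual algorithm). TF-IDF turns each
# highlight into a keyword-weighted vector, and clustering groups highlights
# that share vocabulary so an analyst can summarize each theme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

highlights = [
    "Let me renew my passport online instead of by mail.",
    "Online renewal would save a trip to the post office.",
    "Photo requirements are confusing and should be simplified.",
    "Clearer instructions for the photo size, please.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(highlights)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(highlights, labels):
        if label == cluster:
            print("  -", text)
```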
While machine learning can make it easier to process large quantities of comments, there are also challenges inherent in using machine learning precisely because of the way it creates generalizable rules. If a machine learning algorithm is “fed” with bad or incomplete data, it will encode bias into the model.40 For example, large companies use machine learning tools (sometimes known as “automated employment decision tools” or “algorithmic hiring tools”) to conduct and score video-based applicant interviews. This reduces the costs of screening potential employees. But if machine learning is used to compare applicant responses with interview answers provided by current employees, and if current employees are mostly White and American-born, applicants who are Black or foreign-born will score poorly.41 Nonetheless, if applied to foster democratic engagement, these tools can help agencies get “smarter,” faster, from new, more diverse audiences.
The late nineteenth and early twentieth centuries saw the rise of the professions–medicine, law, engineering, and the social sciences–and of the civil service. To overcome the cronyism of the past, under the Pendleton Civil Service Reform Act of 1883, professional civil servants had to qualify based on an examination. Rules and procedures were put in place to create a culture of independence, and the tradition of working behind closed doors emerged. Governing, especially in expert agencies, was meant to be at arm’s length from the people.42 Institutions and bureaucracies were designed to be hierarchical and rules-based, in order to support the new vision of the public servant as an impartial mandarin shielded from undue influence. This culture of isolation persists today. Mike Bracken, former head of the UK Government Digital Service, writes about the British civil service: “Whitehall was described to me when I started as a warring band of tribal bureaucrats held together by a common pension scheme.”43
As we saw with public 311 data about rats, thanks to the technologies of collective intelligence–those Internet-based tools that connect networks of people to one another for deliberation, data-gathering, collaborative work, shared decision-making, and collective action–the public is capable of playing an increasingly collaborative role in governance. As Geoff Mulgan explains in Big Mind: How Collective Intelligence Can Change Our World, “every individual, organization or group could thrive more successfully if it tapped into . . . the brainpower of other people and machines.”44 Humans, aided by machines, are smarter acting together than alone. They are able to collect and share the information needed to solve problems better. The technologies of collective intelligence create the opportunity to innovate and improve on the traditional regulatory rulemaking commenting process by enabling agencies to get more relevant information, especially from those who have not traditionally participated. Collective intelligence technologies do not refer to specific products but to a field of research and an ever-growing set of participatory methods and tools.
Diversifying engagement in the administrative state is especially important because rulemaking–like civic participation generally–does not attract diverse perspectives. Legal scholar Cynthia Farina has explained that regulated entities tend to be better represented in rulemakings than regulatory beneficiaries. Studies by a variety of academics have found that business groups dominate the commenting process.45 While there is still not enough empirical research on who participates, it appears that individuals all too rarely submit substantive comments, in the same way that freedom of information requests come far less often from investigative reporters or civic groups than from businesses.46 We have no data on race and participation in regulatory rulemakings. Surveys undertaken by Pew Research Center in 2008 and 2012 found that civic engagement is overwhelmingly the province of the wealthy, White, and educated.47 The design of the current notice-and-comment process encourages armchair activism and amplifies some voices at the expense of others with relevant expertise and experience that could inform regulatory rule writing.
But around the world, public institutions have sought to reverse the decline in democratic trust by using new technology to enable citizens to participate in law and policy-making processes, or what I term crowdlaw.
For example, in early 2020, before the pandemic, New Jersey’s Future of Work Task Force, which I chaired, used a “wiki survey” tool called All Our Ideas to engage workers in defining the challenges associated with the impact of technology on the future of worker rights, health, and learning. All Our Ideas is a free, open-source tool developed by Princeton sociologist Matt Salganik. The wiki survey was prepopulated with dozens of possible responses to the question: what is your greatest concern about the impact of technology on the future of work? Respondents were then shown two randomly selected statements at a time and asked which was more important to them. They could pick the response they preferred (or “I can’t decide” as a third option), or they could submit a response of their own. People can answer as many or as few questions as they choose and, with enough participants, the result is a rank-ordered list of the answer choices, yielding insight into the issues of greatest concern. Over three weeks in February 2020, more than four thousand workers used the tool to engage about the impact of technology on the future of work and share their concerns, such as “unnecessary degree requirements for jobs have a bigger impact on low-income populations” or “costs of living–including medical, housing, and education costs–have risen over the last few decades.” In April 2021, the New Jersey Department of Education used the same technology to ask parents, students, and teachers about their priorities for schools. More than seventeen thousand participated in three weeks, resulting in greater understanding for policy-makers and the public of the priorities of students, teachers, and caregivers, and how they diverge.48
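The ranking logic behind a wiki survey can be sketched simply: tally the pairwise votes and score each statement by how often it wins its matchups. (All Our Ideas itself uses a more sophisticated statistical estimate of each idea’s chance of beating a randomly chosen alternative; the votes and statements below are invented.)

```python
# A minimal sketch of ranking ideas from pairwise votes (hypothetical data).
from collections import Counter
from itertools import chain

# Each vote is (winner, loser); "I can't decide" votes are simply omitted.
votes = [
    ("rising cost of living", "degree requirements"),
    ("rising cost of living", "automation of tasks"),
    ("degree requirements", "automation of tasks"),
    ("automation of tasks", "degree requirements"),
]

wins = Counter(winner for winner, _ in votes)            # matchups won per statement
appearances = Counter(chain.from_iterable(votes))        # matchups shown per statement

# Rank statements by their win rate across the matchups they appeared in.
ranking = sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)
for statement in ranking:
    print(f"{wins[statement]}/{appearances[statement]} wins: {statement}")
```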
The wiki survey method of showing people two ideas and having them choose between them or submit a new idea has several practical benefits. It makes results harder to manipulate or game: respondents cannot control which answer options they will see. In addition, because respondents must select one of two discrete answer choices from each pair (or add their own), the design reduces the impulse to add new ideas unless there is genuinely something new to be said. New submissions can also be reviewed prior to posting to reduce duplication. Finally, the need to pick one of two submissions helps with prioritizing ideas. This feature is particularly valuable in policy contexts in which resources are finite and agency officials benefit from help identifying the ideas that matter most to participants.
Wiki surveys are just one example of technologically enabled engagement. Other countries are turning to online collaborative drafting platforms to develop policies, rules, and laws with the public. In 2018, the German government used a free annotation platform to “expert source” feedback on its draft artificial intelligence policy.49 The German Chancellor’s Office, working in collaboration with Harvard University’s Berkman Klein Center for Internet and Society and the New York University Governance Lab, was able to solicit the input of global legal, technology, and policy experts. Taiwan and Brazil are turning to technology to include citizens in drafting national legislation as well.50 Using an annotation platform also made it possible for people to see one another’s feedback and create a robust dialogue, instead of a series of disconnected comments.
For agencies that genuinely want to ensure diverse citizen input in the rulemaking process, examples of participatory rulemaking–crowdlaw processes–are proliferating around the world.
Taking advantage of new technology, whether big data, machine learning, or crowdlaw tools, to regulate, to deliver services more effectively, and to co-design laws, regulations, and policies with the public needs to start with training public servants to work differently, imbuing those who govern with a new set of skills. Retraining, reskilling, and lifelong learning are crucial for thriving in the digital age, in which technology will transform every job, no less in the public sector than in the private. The Innovator’s DNA: Mastering the Five Skills of Disruptive Innovators explains that the ability to innovate is not innate but a learned set of practices that can and must be taught if businesses are to thrive.51 Yet for all the talk about investing in private sector training, we are not investing nearly enough in the public sector. By failing to teach public servants how to use data and collective intelligence–quantitative and qualitative methods–we are failing to build the skill set of the twenty-first-century public servant.52
To create government that is not smaller or bigger but better, the public sector needs to nurture talent, invest in training, and foster the development of a new set of skills. British Conservative politician and Minister for the Cabinet Office Michael Gove declared in a much-publicized speech in June 2020:
The manner in which Government has rewarded its workers for many years now has, understandably, prized cognitive skills–the analytical, evaluative and, perhaps, above all, presentational. I believe that should change. Delivery on the ground; making a difference in the community; practicable, measurable improvements in the lives of others should matter more.53
Unfortunately, the skills involving data and collaboration needed to make practicable, measurable improvements in the lives of others–defining problems, employing data-analytical thinking, using collective intelligence and other innovative ways of working–are not in widespread and consistent use in public services. A 2019 survey I conducted to assess the use of six innovative problem-solving skills by over four hundred local public officials in the United States shows that only half were using new data-analytical or engagement skills in their work.54 The results were similar in Australia, where I worked with colleagues at Monash University to run a comparable survey of almost four hundred mid- to senior-level public servants about nine skills, from problem definition to research synthesis. Only one-third of these Australian bureaucrats, on average, used innovative problem-solving skills.55 Tellingly, however, once people knew and used a skill, they applied it regularly in their work. But the application of these skills remains scattershot, and they are not yet developed enough to carry a project from idea to implementation.
The public sector’s failure to use creative problem-solving methods that take advantage of collective intelligence and data is widespread.56 Given that public servants are not being trained to work differently, that is no wonder. The surveys showed that respondents had been trained in innovative skills like the use of data or collective intelligence only between 8 and 30 percent of the time.57
Under the Trump administration, which was openly hostile to the civil service, and whose president signed an executive order (E.O. 13957) giving the president the power to hire and fire civil servants at will, investing in public sector training and talent did not happen.58 While the Biden administration is friendlier to the civil service and rescinded E.O. 13957, the urgent priorities of fighting COVID and climate change and advancing racial equity are drawing the most attention and resources, even though training people to work differently could help advance these important political goals. While the United States is not focusing on training, forward-thinking countries are investing heavily in training public servants in new skills. Argentina’s government innovation academy offers programs on human-centered design that have reached thirty-six thousand public servants. Germany has launched the new Digitalakademie with government-wide courses in new ways of working and digital competencies. Canada’s Busrides program offers podcasts about new technologies, such as artificial intelligence, and their application to governing, aimed at the country’s two hundred and fifty thousand public servants.
Ultimately, the future of the administrative state rests in the hands of people who must embrace new ways of working. Individuals drive the actions of institutions. Futurist and architect Buckminster Fuller likened the power of the individual change agent to the trim tab, the small flap on the edge of a rudder that turns a big ship.59 If we want better government capable of responding to existential crises like climate change or inequality, we must invest in and train new leaders: passionate and innovative people who are determined to go beyond mere compliance to solve problems in new ways.
In addition to training, however, government at every level needs to recruit more people with digital and innovation skills. The Tech Talent Project is a nonprofit effort by more than eighty technologists and former policy-makers to conduct a review of agency operations and recommend ways to innovate. They, too, emphasize that “agencies need leaders with modern technical expertise from Day One” and recommend appointing people with more tech savvy to key leadership roles as well as training existing personnel.60 It is also key to promote the agile recruitment and hiring of a modern and diverse federal workforce, including hiring a new generation of public sector leaders (currently, only 155,000 out of 2.1 million federal workers are under thirty) and more people of color, to complement improved training efforts.61 Overhauling the Office of Personnel Management and the Office of Presidential Personnel to facilitate faster hiring and better training, together with creating a Chief People Officer or cabinet-level human capital position to oversee these efforts, would help ensure a robust twenty-first-century federal workforce and make training a priority for future governments.
We also need greater understanding of the talent already in place in the administrative state. While we have data about the age, gender, race, and disability status of federal public servants and know how imbalanced the distribution of leadership positions is, we know very little about public workers’ current skill gaps. The federal government should conduct an in-depth diagnostic survey of the talents and competencies of the current workforce to diagnose what people do and do not know and to determine empirically whether they are using qualitative and quantitative techniques, new technologies, and data-driven research in their work. Only by learning what people can do can government develop a data-driven and informed training and hiring strategy. A decade ago, for example, the World Bank developed SkillFinder to keep track of the skills and know-how of its employees and consultants and to foster greater knowledge sharing.62 The United States should follow the lead of Chile, which conducted a limited skills survey in 2017; Canada, which did so in 2018; and the German federal government, which is planning in 2021 to distribute the same innovation skills survey I ran among public officials in the United States and Australia.
But in addition to training and talent, we need the technology itself. The General Services Administration (GSA) should execute blanket purchase agreements with appropriate technology vendors to make it easy for every agency to know which tools to use and how to access innovative new platforms, including AI, machine learning, and collective intelligence platforms. We can use the authority provided by the America COMPETES Act to host a competition on the federal government’s challenge platform (challenge.gov) to spur the creation of new tools designed specifically for regulatory agencies, such as platforms for summarizing comments or undertaking collaborative drafting. The Tech Talent Project specifically recommends that, in 2021, the Biden administration prioritize building a modern data infrastructure to enable robust, secure sharing of data within agencies, between agencies, and with the American public. In addition to massive investment in technology infrastructure and funding for technology research, advances in new technology need to be translated into more modern government. The GSA should not give grants to fund private sector innovation without ensuring that those innovations are used by government, too.
Previously, I have written extensively about using new technology to connect federal agencies to experts in America’s industries and universities to improve the level of understanding of science in federal agencies. In Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing, I lay out in detail how the federal government could expand projects like experts.gov for connecting public servants to smart, outside help to obtain data, facts, opinions, advice, and insights from a much broader audience. In addition, technology can help to connect administrative agencies to ordinary people with lived experience and situational awareness. Appellate lawyer and public interest advocate David Arkush has proposed that administrative agencies adopt a citizen jury system that would empanel one thousand randomly selected citizens to provide oversight over agency decision-making. In a variation on Arkush’s idea, Administrative Conference of the United States counsel Reeve Bull, building on an idea expressed earlier by the Jefferson Center in its work on citizen juries, has proposed creating citizen advisory committees: relatively small groups of citizens who would advise but not bind an agency. In Bull’s model, participants would receive background materials generated by deliberative polling before their discussions. This is exactly what they do in Belgium, where random samples of ordinary citizens serve on legislative committees. Thanks to new technology, it is becoming cheaper and easier to connect with ever-larger quantities of people who can bring their expertise to bear.
From Toby Ord to Bill Gates to Stephen Hawking, there is no lack of doomsday prognosticators about the dangers of new technology, especially artificial intelligence. But the greatest risk for our democracy is not the longer-term future of hyper-intelligent machines. Rather, the risk right now is that administrative agencies will fail to innovate altogether and miss this opportunity to open the processes of governance to more data and more public engagement. While there may be a danger from machines wresting control from humanity down the line, right now we have an opportunity to put these tools to use to strengthen participatory democracy and transform the administrative state.
From the 2019 government shutdown, the longest in U.S. history, to the repeated insults (think “deep state” and “fire Fauci”), to undermining the work of vital agencies like the Food and Drug Administration, the Centers for Disease Control and Prevention, and the National Institute of Allergy and Infectious Diseases, the Trump administration’s approach to governing reflected an emphasis on loyalty to Trump over expertise and delivering results for the public. But the Biden administration will need to do more than roll back enacted regulations or rehire the people Trump fired on his way out the door. If officials are to take advantage of data and technology to enhance both the regulatory and service delivery functions of government, Washington has to: invest in broadscale training in digital, innovation, and public problem-solving skills across the federal enterprise; learn who works in government and understand their skills and performance; find the talent hiding in plain sight and take advantage of their innovative know-how; speed up the process of bringing in more diverse people to serve; and relax the rules and customs that prevent federal officials from exercising common sense and creativity. We can use technology and new ways of working to steer the ship of state toward a future in which the public sector works openly and collaboratively, informed by data and engagement. We can overcome our fears about becoming slaves to new technology by putting those same tools to work for us to create a stronger, more robust democracy and better government.
© 2021 by Beth Simone Noveck. Published under a license.
Author's Note
This essay draws from Beth Simone Noveck, Solving Public Problems: How to Fix Our Government and Change Our World (New Haven: Yale University Press, 2021).
Endnotes
- 1Federation of American Scientists, “Ask a Scientist” (accessed November 1, 2020).
- 2Given the explosion of unemployment claims during the pandemic and the fact that, unlike Social Security, unemployment is administered separately by each state, the White House Digital Service volunteered to help states fix their systems to respond to the demand.
- 3ProPublica reported that a single junior consultant just out of school runs clients $3.5 million annually. Ian MacDougall, “,” ProPublica, July 15, 2020.
- 4Ibid.
- 5International Monetary Fund, “” (Washington, D.C.: International Monetary Fund, 2021).
- 6Fernando G. De Maio, “,” Journal of Epidemiology and Community Health 61 (10) (2007): 849–852; and Lawrence Mishel, Elise Gould, and Josh Bivens, “,” Economic Policy Institute, January 6, 2015.
- 7Steven H. Woolf and Heidi Schoomaker, “,” JAMA 322 (20) (2019): 1996–2016.
- 8National Academies of Sciences, Engineering, and Medicine, (Washington, D.C.: The National Academies Press, 2015).
- 9Jacob Bor, Gregory H. Cohen, and Sandro Galea, “,” The Lancet 389 (10077) (2017); and Samuel L. Dickman, David U. Himmelstein, and Steffie Woolhandler, “,” The Lancet 389 (10077) (2017).
- 10Shervin Assari, “,” The Conversation, June 1, 2020.
- 11“,” Pew Research Center, September 14, 2020.
- 12Paul C. Light, “,” Brookings Institution, July 14, 2014.
- 13William P. Marshall, “Eleven Reasons Why Presidential Power Inevitably Expands and Why It Matters,” Boston University Law Review 88 (2008).
- 14Adrian Vermeule, “The Administrative State: Law, Democracy, and Knowledge,” in The Oxford Handbook of the United States Constitution, ed. Mark Tushnet, Mark A. Graber, and Sanford Levinson (Oxford: Oxford University Press, 2013).
- 15H.R. 4174, Foundations for Evidence-Based Policymaking Act of 2018, , January 14, 2019.
- 16Mitch Landrieu, “,” CNN, December 16, 2014.
- 17“,” Jails to Jobs, February 1, 2018.
- 18“,” U.S. News, January 1, 2019.
- 19“,” Occupational Safety and Health Administration, Federal Register 84 (17) (2019): 00101; and The White House, “,” January 21, 2021.
- 20“,” Consumer Financial Protection Bureau (accessed November 2, 2020); and Christa Gibbs, (Washington, D.C.: Consumer Financial Protection Bureau, 2017), 7–9.
- 21“,” Consumer Financial Protection Bureau, Federal Register 80 (208) (2015): 66128; Consumer Financial Protection Bureau, “” (updated April 16, 2020); and “,” Biden/Harris: A Presidency for All Americans (accessed May 5, 2021).
- 22Matthew J. Salganik, Bit by Bit: Social Research in the Digital Age (Princeton, N.J.: Princeton University Press, 2019), 16, 82–83.
- 23Diana Farrell, Fiona Greig, and Amar Hamoudi, (New York: JPMorgan Chase Institute, 2019). “The JPMorgan Chase Institute is harnessing the scale and scope of one of the world’s leading firms to explain the global economy as it truly exists. Its mission is to help decision-makers–policymakers, businesses, and nonprofit leaders–appreciate the scale, granularity, diversity, and interconnectedness of the global economic system and use better facts, timely data, and thoughtful analysis to make smarter decisions to advance global prosperity. Drawing on JPMorgan Chase’s unique proprietary data, expertise, and market access, the Institute develops analyses and insights on the inner workings of the global economy, frames critical problems, and convenes stakeholders and leading thinkers.”
- 24Some examples in this section are drawn from Beth Simone Noveck, “,” Yale Human Rights and Development Law Journal 19 (1) (2017): 18.
- 25Raj Chetty, Nathaniel Hendren, and Lawrence Katz, “The Effects of Exposure to Better Neighborhoods on Children: New Evidence from the Moving to Opportunity Project,” American Economic Review 106 (4) (2016).
- 26Louisiana Department of Health, “,” June 1, 2016.
- 27Daniel T. O’Brien, The Urban Commons: How Data and Technology Can Rebuild Our Communities (Cambridge, Mass.: Harvard University Press, 2018).
- 28Beth Simone Noveck and Joel Gurin, “Corporations and Transparency: Improving Consumer Markets and Increasing Public Accountability,” in Transparency in Politics and the Media: Accountability and Open Government, ed. Nigel Bowles, James T. Hamilton, and David A. L. Levy (New York: I. B. Tauris, 2013).
- 29Justin Elliott and Lucas Waldron, “” ProPublica, April 22, 2019.
- 30Gideon Mann, “,” Solving Public Problems with Data recorded lecture, 2017.
- 31Jordi Laguarta, Ferran Hueto, and Brian Subirana, “,” IEEE Open Journal of Engineering in Medicine and Biology 1 (2020). See also Andre Esteva, Brett Kuprel, Roberto A. Novoa, et al., “,” Nature 542 (7639) (2017): 115–118.
- 32Scott E. Page, The Model Thinker: What You Need to Know to Make Data Work for You (New York: Basic Books, 2018), 15.
- 33“,” Conference Proceedings, April 30, 2021, Smarter Crowdsourcing, The GovLab.
- 34“Restoring Internet Freedom, A Proposed Rule by the Federal Communications Commission,” Federal Register 82 (105) (2017): 25568; and “Restoring Internet Freedom, A Rule by the Federal Communications Commission,” Federal Register 83 (36) (2018): 7852.
- 35The U.S. Fish and Wildlife Service, “,” June 25, 2007.
- 36Steve Balla, Reeve Bull, Bridget Dooling, Beth Simone Noveck, et al., “,” draft report for the Administrative Conference of the United States, April 2, 2021.
- 37Fabiola Torres López, “,” Global Investigative Journalism Network, December 4, 2017.
- 38Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt, “,” conference paper presented at the International Conference on Learning Representations (ICLR 2019), February 2019; and “,” Google AI Blog, August 2016.
- 39Andrew Zahuranec, Andrew Young, and Stefaan G. Verhulst, (New York: The GovLab, 2019).
- 40Betsy Anne Williams, Catherine F. Brooks, and Yotam Shmargad, “,” Journal of Information Policy 8 (2018): 78–115.
- 41Solon Barocas and Andrew Selbst, “,” The New York Times, August 6, 2014.
- 42“The role of the professional civil servant is enshrined by the law itself, which reinforces the profession’s control over the flow of information into and out of institutions–what Pierre Bourdieu calls the ‘officializing strategy’ of bureaucracy–in ways designed to dissuade citizens from engagement. There is a wealth of administrative law that limits control over speech in the public sector to public management professionals and treats their decisionmaking with legal deference. For example, key information law statutes intentionally limit information sharing and collaboration and preserve the domain of the public servant distinct from and closed to others. The public earned a right to access information held by government relatively late in the twentieth century, and even then, only upon request and with significant limitations. For those of us outside the curtain, the effect is impressive.” Beth Simone Noveck, Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing (Cambridge, Mass.: Harvard University Press, 2015), 49.
- 43Brian Glick, “,” Computer Weekly, August 13, 2015.
- 44Geoff Mulgan, Big Mind: How Collective Intelligence Can Change Our World (Princeton, N.J.: Princeton University Press, 2017).
- 45Jason Webb Yackee and Susan Webb Yackee, “A Bias towards Business? Assessing Interest Group Influence on the U.S. Bureaucracy,” The Journal of Politics 68 (1) (2006): 128–139.
- 46David E. Pozen, “,” University of Pennsylvania Law Review 165 (2016): 1097.
- 47Aaron Smith, “,” in Civic Engagement in the Digital Age (Washington, D.C.: Pew Research Center, 2013). “A key finding of our 2008 research was that Americans with high levels of income and educational attainment are much more likely than the less educated and less well-off to take part in groups or events organized around advancing political or social issues. That tendency is as true today as it was four years ago, as this type of political involvement remains heavily associated with both household income and educational attainment.”
- 48“” (accessed April 29, 2021). See also Gopal Ratnam, “,” Roll Call, April 20, 2021.
- 49Beth Simone Noveck, Rose Harvey, and Anirudh Dinesh, (New York: The GovLab, 2019).
- 50Ratnam, “Tech Tools Help Deepen Citizen Input.”
- 51Jeff Dyer, Hal Gregersen, and Clayton Christensen, The Innovator’s DNA: Mastering the Five Skills of Disruptive Innovators (Brighton, Mass.: Harvard Business Review Press, 2011), 22.
- 52Blue Wooldridge, “Increasing the Productivity of Public-Sector Training,” Public Productivity Review 12 (2) (1988): 205–217.
- 53Michael Gove, “‘,” as reprinted in The New Statesman, June 28, 2020.
- 54For details on the public entrepreneurship innovation skills surveys sponsored by the International City and County Managers Association (ICMA), see the .
- 55Beth Simone Noveck and Rod Glover, The Public Problem Solving Imperative (Carlton, Australia: The Australia and New Zealand School of Government, 2019).
- 56This report builds on earlier work by the OECD, which did the first countrywide study of the pervasiveness of innovation skills in a survey of 150 Chilean public servants, and subsequently elaborated on this work in a report on core governance innovation skills, both in 2017. See Observatory of Public Sector Innovation, (Paris: Organisation for Economic Co-operation and Development, 2017).
- 57Noveck and Glover, The Public Problem Solving Imperative.
- 58“Executive Order 13957 of October 21, 2020: Creating Schedule F in the Excepted Service,” Federal Register 85 (207) (2020).
- 59The TrimTab Conspiracy was the name of the “salon” that David Johnson, Susan Crawford, and I ran between 2003 and 2008 in New York and then in Washington, D.C. It was self-consciously styled as an opportunity to discuss strategies for public problem-solving using new technologies. We met either every two weeks or once a month for these discussions for many years. See Buckminster Fuller, “,” Playboy magazine, February 1972.
- 60Tech Talent Project, “” (accessed November 2, 2020).
- 61The White House, “Strengthening the Federal Workforce, FY 2019.” See also Dan Durak, “,” Partnership for Public Service, March 11, 2019.
- 62The GovLab, (New York: The GovLab, 2016).