Hi EA folks! I’m relatively new around here, but I wanted to share a question I haven’t been able to get out of my head: how might we bring the rigorous measurement of intervention outcomes that’s so wonderfully common in the EA community to the public sector? Here I mean not just philanthropy but also government investment.
In grad school at NYU’s Center for Urban Science and Progress, a unique program that was basically applied data science for city challenges, I vividly remember discussing an NYU Gov Lab estimate that only one dollar out of every hundred in public investment in the United States was backed by any sort of rigorous evidence that the program worked. That’s a mountain-sized evidence gap!
A few friends and I co-founded a public data analytics NGO, ARGO Labs, out of grad school to help tackle that mountain. You can see the full spectrum of projects we worked on here. And you can see our water data work here, which lives on (a rarity in civic / urban / gov tech) and is growing rapidly with the ongoing CA drought. We emphatically did NOT succeed, however, in the larger vision: to transform how government operates, manage public data as a utility, and make the rigorous measurement of the outputs of public investment as regular and routine as budgeting.
CALL FOR COLLABORATION: I’d love to explore a big question: how might we fill that mountain-sized evidence gap in the public sector? I’m happy to be an open book with ARGO’s successes, failures, and muddling through in between. I’d also love to hear other examples from around the world. If you have ideas and/or are interested in exploring, please comment away below!
I’ve been EA adjacent for a while but am new to this forum, so I’m still learning how things work! If you have a better idea of how to collaboratively investigate the public sector evidence gap, I’m all ears. Below are a couple of excerpts from previous writing of mine that provide a bit more context on the need here and where I’m coming from. Thanks for reading!
Contextual reading
“Cities face a wave of change. Global upheaval and digital disruption challenge cities in unprecedented ways. Meanwhile, a Cambrian explosion of promising case studies across the globe highlights how honest and effective use of data and digital services can transform how cities provision basic public services. Yet only 0.64% of cities across America have an open data portal,[1] let alone any deployment of advanced analytics targeted at improving service delivery. Entire municipal sectors like water have largely missed the public technologist movement led by Code for America, 18F and the network of Chief Data Officers across the country.
More deeply, a time traveler from the ’50s would find the operational practices of many municipalities strangely familiar. Nothing equal to the development over a century ago of professional water utilities or universal public schooling — institutions that implemented nearly ubiquitous access to clean water and essentially eradicated illiteracy in America — has emerged for the digital era. The future of government operations remains a frontier.”
That’s a post I wrote in 2018 titled “Introducing ARGO, the world’s first public data utility.” The rest goes into our theory of change and the platform we built. That version of ARGO ultimately did not (obviously, I suppose) succeed in the big vision of transforming how government operates. The water data work is cool, though, and still ongoing :)
Looking forward, here’s a post from 2019 on the need for data collaboratives to realize the potential of public data.
“Like an iceberg, much of the work to meaningfully open up government data lies beneath the surface. Quality technology to ensure appropriate levels of secure data sharing and access is necessary but far from sufficient. Data by itself does nothing. Putting a whiz-bang tool in front of a decision maker similarly does not inevitably lead to impact. Human interpretation and analysis must also play a role.
Data users are the analysts, statisticians, app developers, and others who actually work with the data to generate meaningful insights. Those users ideally come from a diverse community of practice with a healthy mix of organizations and sectors — such as the business community, local news outlets, government staff and the larger social sector. That enables dialogue and deliberation about what the data means for key policy, management and operational decisions.”
The Policy Impacts project at Harvard University might be of interest to you, including their method for evaluating the impacts of public policies: the Marginal Value of Public Funds (MVPF).
The MVPF measures the “bang for the buck” of public spending on a given policy. How? It’s calculated as the ratio of two numbers: the benefits that a policy provides to its recipients (measured as their willingness to pay), divided by the policy’s net cost to the government (including all long-term effects on its budget).
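To make that concrete, here’s a toy worked example (the numbers are made up for illustration, not drawn from the Policy Impacts library). Suppose a job-training program costs the government $10,000 per participant up front, but participants’ higher earnings later return $2,000 in tax revenue, so the net cost is $8,000. If participants value the program at $12,000 (their willingness to pay), then:

\[
\mathrm{MVPF} = \frac{\text{WTP}}{\text{net cost}} = \frac{\$12{,}000}{\$8{,}000} = 1.5
\]

So each net dollar of government spending generates $1.50 of value for recipients. As I understand it, an MVPF above 1 means a policy delivers more in benefits than it costs on net, and policies whose long-run fiscal returns fully cover their up-front cost are assigned an infinite MVPF.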
The Policy Impacts project is also collaboratively building a Policy Impacts Library, a database of MVPF estimates for different public policies derived from rigorous empirical research. The goal is to help policymakers and practitioners better understand and compare the long-term costs and benefits of a wide range of policies.
PS: I find this project super interesting, but I haven’t looked into it in detail or talked to anyone working on it, so there might be obvious weaknesses I’m not aware of.
Thanks! The challenge with these types of benefit-cost analyses is that willingness to pay can often correlate with ability to pay. There can also be categorical-imperative-type concerns that trump this kind of logic. I do think, though, that it’s a super useful initiative. Better to have progressively less wrong measures and use the best available evidence than to shoot from the hip. Cheers!