I’m the co-founder and CEO of PolicyEngine, a tech nonprofit that computes the impacts of public policy (policyengine.org). I’m also the founder and president of the UBI Center, a think tank researching universal basic income policies (ubicenter.org).
I first got into EA in 2012: I worked at Google at the time, and Google.org made a grant to GiveDirectly. I’ve since taken the GWWC pledge and focused my giving on GiveDirectly and GiveWell. I was active in Google’s EA group and also MIT’s when I went there for grad school in 2020.
I’m also the founder of Ventura County YIMBY, and a volunteer California state coordinator for Citizens’ Climate Lobby, a grassroots organization advocating for a national carbon fee-and-dividend policy.
Thanks Benita, really appreciate the field perspective. You’re right that the parameters are a snapshot — the tool takes GiveWell’s November 2025 spreadsheet values as given and doesn’t attempt to model how they change over time. GiveWell updates their spreadsheets periodically as they get new data, and the tool would need to be re-extracted to reflect that.
On within-country variation, this is a real limitation. The model treats each country as a single unit with one set of parameters, but as you note, conditions in Sokoto vs. other parts of Nigeria can be very different. The sensitivity analysis helps show how much the result depends on any single parameter (like counterfactual coverage), but it doesn't capture the kind of correlated shifts you're describing, where multiple parameters move together as systems evolve.
I built this as a side project to make GiveWell's existing estimates more explorable, not to improve on their parameter estimation — that's squarely their domain expertise. But the tool does make it easy to test questions like "what if coverage in Zamfara drops by 10%?", which I think is part of what you're getting at.
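For what it's worth, that style of question is just a one-at-a-time parameter perturbation. Here's a minimal sketch of the mechanics — the parameter names, values, and the toy formula are entirely made up for illustration, not GiveWell's actual model:

```python
def cost_effectiveness(params):
    """Toy model (not GiveWell's): value per dollar scales with the
    coverage shortfall and per-person burden, divided by cost."""
    counterfactual = params["counterfactual_coverage"]
    burden = params["burden_per_person"]
    cost = params["cost_per_person"]
    return (1 - counterfactual) * burden / cost

# Placeholder baseline values, chosen arbitrarily for the example.
baseline = {
    "counterfactual_coverage": 0.30,
    "burden_per_person": 5.0,
    "cost_per_person": 7.0,
}

def perturb(params, key, pct):
    """Return a copy of params with one value shifted by pct (e.g. -0.10)."""
    tweaked = dict(params)
    tweaked[key] = params[key] * (1 + pct)
    return tweaked

base = cost_effectiveness(baseline)
# "What if counterfactual coverage drops by 10%?" — more room for funding.
scenario = cost_effectiveness(perturb(baseline, "counterfactual_coverage", -0.10))
print(f"baseline: {base:.3f}, coverage -10%: {scenario:.3f}")
```

Sweeping `perturb` over each key one at a time gives a simple tornado-style sensitivity readout, which is roughly what the tool's sensitivity analysis does — though, as you point out, it won't surface the correlated, multi-parameter shifts that matter in the field.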