Feedback welcome: www.admonymous.co/mo-putera
I currently work with CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, and general org-boosting to support policies that incentivise innovation and ensure access to antibiotics to help combat AMR.
I was previously an AIM Research Program fellow, and was supported by an FTX Future Fund regrant and later by Open Philanthropy’s affected-grantees program. Before that I spent 6 years doing data analytics, business intelligence, and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I’ve also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia’s giving landscape towards effectiveness, albeit with mixed results.
I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander’s polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler’s personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):
I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].
I admire influential orgs that publicly change their mind in response to external feedback, and GiveWell is, as usual, exemplary here (see also their grant “lookbacks”). From their recently published Progress on Issues We Identified During Top Charities Red Teaming, here’s how external feedback changed their bottom-line grantmaking:
Some self-assessed progress that caught my eye — incomplete list, full one here; these “led to important errors or… worsened the credibility of our research” (0 = no progress made, 10 = completely resolved):
(As an aside, I’ve noticed plenty of claims, on the forum and elsewhere, of cost-effectiveness figures that beat GW’s top charities, and I basically never give them the credence I give GW’s own estimates. That’s because of the kinds of (usually downward) adjustments mentioned above, like target populations receiving interventions from other sources or between programs, and because of GW’s sheer reasoning thoroughness behind those adjustments; seriously, click on any of those “(more)”s.)
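To make the compounding intuition concrete, here’s a toy sketch; it’s mine, not GW’s actual model, and every number and adjustment label below is made up for illustration:

```python
# Toy sketch (all numbers and labels invented by me, not GW's actual model):
# how a few GiveWell-style multiplicative downward adjustments erode a
# headline cost-effectiveness claim.

claimed_multiple = 25.0  # hypothetical claim: "25x as cost-effective as cash"

# fraction of the headline effect that survives each adjustment (invented)
adjustments = {
    "beneficiaries already receiving the intervention elsewhere": 0.80,
    "trial effect sizes likely overstating effects at scale": 0.75,
    "funging with other funders": 0.90,
}

adjusted = claimed_multiple
for reason, multiplier in adjustments.items():
    adjusted *= multiplier
    print(f"after adjusting for {reason}: {adjusted:.1f}x")

# These invented multipliers take 25x down to ~13.5x: each adjustment looks
# modest on its own, but they compound, which is why I discount headline
# claims that haven't gone through this kind of scrutiny.
```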
Some other issues they’d “been aware of at the time of red teaming and had deprioritized but that we thought were worth looking into following red teaming” — again incomplete list, full one here:
I always had the impression GW engaged outside experts a fair bit, so I was pleasantly surprised to learn they thought they weren’t doing enough of it and then actually followed through so seriously. This is an A+ example of organisational commitment to, and follow-through on, self-improvement, so I’d like to quote this section in full:
Some quick reactions:
I like that GW thinks they should allocate more time to expert conversations vs desk research in most cases
I like that GW is improving its own red-teaming process by having experts review its work in parallel
I too am keen to see what CGD finds out re: why GW’s top-recommended programs aren’t funded by other groups you’d expect to fund them
the Zipline exploratory grant is very cool; I raved about it previously
I wouldn’t have expected that the biggest driver of grants made or not made would be failure to sense-check raw data in burden calculations (see the toy sketch at the end of this section); while they’ve done a lot to redress this, there’s still a lot more on the horizon, poised to affect grantmaking for areas like maternal mortality (previously underrated and deserving a second look)
funnily enough, they self-scored 5/10 on “insufficient focus on simplicity in cost-effectiveness models”; as someone who spent all my corporate career pained by working with big messy spreadsheets, and who’s also checked out GW’s CEAs over the years, I think they’re being a bit harsh on themselves here... Ben Kuhn has a great essay about how
I agree, and I think there’s an organisational analogue as well, which GiveWell exemplifies above.
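On the sense-checking point flagged above, here’s a toy sketch of the kind of check I have in mind. This isn’t GW’s actual procedure; the inputs, the “independent ballpark,” and the 2x flagging threshold are all hypothetical, and it just illustrates comparing a dataset’s implied rate against a second source before the number feeds into a cost-effectiveness model:

```python
# Toy sketch (hypothetical numbers throughout, not GW's actual procedure) of
# a raw-data sense check on a disease burden estimate: compare the rate the
# raw inputs imply against an independent ballpark before modelling proceeds.

def implied_rate_per_100k(deaths: float, population: float) -> float:
    """Deaths per 100,000 population implied by the raw inputs."""
    return deaths / population * 100_000

# made-up inputs for some condition in some country
deaths_in_dataset = 12_000
population = 30_000_000
independent_ballpark = 15.0  # per 100k, e.g. from a second source like GBD

rate = implied_rate_per_100k(deaths_in_dataset, population)
ratio = rate / independent_ballpark
print(f"implied rate: {rate:.1f} per 100k ({ratio:.1f}x the ballpark)")

# Flag anything off from the independent estimate by more than ~2x (the
# threshold is arbitrary) for manual review before it enters a CEA.
if not 0.5 <= ratio <= 2.0:
    print("sense check FAILED: investigate the raw burden data")
```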