CE Research Training Program graduate and research contractor at ARMoR under the Global Impact Placements program, working on cost-benefit analyses to help combat AMR. Currently exploring roles involving research distillation and quantitative analysis to improve decision-making (e.g. applied prioritization research); this work was previously supported by an FTX Future Fund regrant and later by Open Philanthropy's affected-grantees program. Previously spent 6 years doing data analytics, business intelligence, and knowledge and project management across various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA. Also collaborating on a local charity evaluation initiative with the moonshot aim of reorienting Malaysia's giving landscape towards effectiveness.
I first learned about effective altruism circa 2014 via A Modest Proposal, a polemic that uses dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit. I related to Tyler's personal story (which, unsurprisingly, also references A Modest Proposal as a life-changing polemic):
I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].
I think one consideration worth surfacing is that GiveWell explicitly notes that
(That page goes into more detail as to why.)
The philosophical underpinning of this (if you want to call it that) is Holden Karnofsky's Sequence thinking vs cluster thinking essay: in short, GW is more cluster-style, while the Happier Lives Institute strikes me as more sequence-style (correct me if I'm wrong). Holden:
And then further down:
Elsewhere, Holden has also written that
If I'm right that HLI leans more towards sequence-style than cluster-style thinking, then you can read this passage as Holden preemptively addressing HLI, years before it existed.
This comment is getting too long already, so I’ll just add one more Holden quote, from Some considerations against more investment in cost-effectiveness estimates:
Note that these are quite old posts, last updated in 2016; it's entirely possible that GiveWell has changed their stance since then, but that isn't my impression (correct me if I'm wrong).
Side note on deworming in particular: in David Roodman's writeups on the GW blog, which you linked to in the main post, GW's "total discount" is actually something like 99%, because it's a product of multiple adjustments, of which replicability is just one. However, I couldn't find any direct reference to this 99% discount in the actual deworming CEAs, so I don't know whether it's actually applied there.
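To make the "product of multiple adjustments" point concrete, here is a minimal sketch of how a few independent multiplicative discounts compound into a ~99% total discount. The factor names and values are entirely hypothetical illustrations, not GiveWell's actual adjustments:

```python
# Hypothetical adjustment factors: each is the fraction of the raw
# estimated effect that survives one adjustment. These numbers are
# invented for illustration only.
adjustments = {
    "replicability": 0.11,      # hypothetical
    "external_validity": 0.25,  # hypothetical
    "other_adjustments": 0.50,  # hypothetical
}

# The retained fraction is the product of all factors.
retained = 1.0
for factor in adjustments.values():
    retained *= factor

total_discount = 1 - retained
print(f"retained: {retained:.4f}, total discount: {total_discount:.1%}")
```

The point is just that no single adjustment needs to be anywhere near 99% for the combined discount to get there: three moderate-looking factors multiply down to a retained fraction of about 1.4%, i.e. a total discount of roughly 98.6%.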
I think the main post's recommendations are great. Having spent hours poring over GW's CEAs as inspiration for my own local charity assessment work, the remark above that "these [CEAs] are hard to follow unless you already know what's going on" keenly resonated with me. But given GW's stance above on CEAs, and the fact that they have only a single person updating their model, I'm not sure this will be all that highly prioritized?