I really like this example! I used it in an interview I gave about EA and thought it went down pretty well. My main concern with using it is that I don’t personally fund direct cash transfers (or think they’re anywhere near the highest-impact thing), so I both think it can misrepresent the movement, and think it’s disingenuous to imply that EA is about robustly good things like this, when I actually care most about things like AI Safety.
As a result, I frame the example like this (if I can have a high-context conversation):
Effectiveness, and identifying the highest-impact interventions, is a cornerstone of EA. I think this is super important, because there’s a really big spread in how much good different interventions do, much more than feels intuitive.
Direct cash transfers are a proof of concept: There’s good evidence that doubling your income increases your wellbeing by the same amount, no matter how wealthy you were to start with. We can roughly think of helping someone as just giving them money, and so increasing their income. The average person in the US has an income about 100x that of the world’s poorest people, and so with the resources you’d need to double the income of an average American, you could help 100x as many of the world’s poorest people!
Contextualise, and emphasise just how weird 100x differences are—these don’t come up in normal life. It’d be like you were considering buying a laptop for $1000, shopped around for a bit, and found one just as good for $10! (Pick an example that I expect to resonate with big expenses the person faces, e.g. a laptop, car, rent, etc.)
Emphasise that this is just a robust example as a proof of concept, and that in practice I think we can do way better—this just makes us confident that spread is out there, and worth looking for. Depending on the audience, maybe explain the idea of hits-based giving, and risk neutrality.
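The arithmetic behind the cash-transfer point can be made concrete with a toy model. This is just my sketch, not anything from the comment above: I assume wellbeing scales with log(income), which is the standard way to capture "doubling your income adds the same wellbeing at any starting level", and the income figures are purely illustrative round numbers.

```python
import math

def wellbeing_gain(income, transfer):
    """Wellbeing gain from a cash transfer, under a log-utility assumption."""
    return math.log((income + transfer) / income)

us_income = 60_000    # illustrative round figure for an average US income
poor_income = 600     # ~100x less, standing in for the world's poorest

# Doubling someone's income costs their current income, so doubling the
# average American's income costs $60,000...
budget = us_income

# ...and that same budget doubles the income of 100 people at $600 each.
n_people = budget // poor_income

# Under log utility, each doubling produces the same wellbeing gain,
# so the budget buys roughly 100x the total wellbeing when spent on
# the poorest rather than on the average American.
print(n_people)  # 100
```

The key design choice is the log-utility assumption: it is what makes "cost to double" the right unit of comparison, and it is why the 100x income gap translates directly into a 100x multiplier on how many equal-sized wellbeing gains the same budget can buy.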
Thanks for sharing! I like the way you phrased it in the interview, I think that’s a nice way to start.