I found this clear and reassuring. Thank you for sharing.
Sanjay
£4bn for the global poor: the UK’s 0.7%
EA and tackling racism
Rhodri Davies on why he’s not an EA
ESG investing isn’t high-impact, but it could be
Why we have over-rated Cool Earth
I read that critique with hope, but ultimately I found it largely unconvincing.
I’m very surprised by the claim that mosquito nets keep their beneficiaries in poverty. Mosquito nets are not even trying to lift people out of poverty, and yet there is some evidence that they help do exactly that to some extent. I really don’t understand how distributing nets could keep people in poverty.
Kalulu says:
if you randomly asked one of the people who themselves live in abject poverty, there is no chance that they will mention one of EA’s supported “effective” charities, as having impacted their lives more than the work of traditional global antipoverty agencies. No. That’s out of question.
To be honest, if you asked someone who had received $1,000 from GiveDirectly whether it impacted their life, I’m pretty confident they would say a hearty yes. Cash transfers also allow the lived experiences of the poor to dictate what happens to the money, which is exactly what Kalulu demands.
GiveWell believes that all of the GiveWell Top Charities outperform GiveDirectly, and I think this is correct, unless you place an unusually low value on saving a life. Again, GiveWell have checked whether they are imposing a Western perspective with this moral weight judgement: they have surveyed people in Africa on exactly this question.
One area where I do agree to some extent: I think it would be good if more people from the populations which benefit from these interventions actually worked at GiveWell. I have certainly had conversations with GiveWell where we discussed the details of models and I invoked lived experience from time spent among the global poor, and I got the impression that GiveWell could have benefited from having more of this perspective in-house.
Overall, though, I still don’t feel we need to galvanise action to improve the situation.
Some might be sceptical of a critique which could be paraphrased as: “EA is getting it wrong because it should be funding NGOs which are run by people who have lived experience of being ultra poor. By the way, I have lived experience of being ultra poor, and I run an NGO.” I don’t think you need to invoke this scepticism in order to find the critique unconvincing.
The BEAHR: Dust off your CVs for the Big EA Hiring Round!
The $100trn opportunity: ESG investing should be a top priority for EA careers
SoGive rates Open-Phil-funded charity NTI “too rich”
No More Pandemics: a grassroots group?
SoGive’s 2023 plans + funding request
Malaria vaccines: how confident are we?
We’re (surprisingly) more positive about tackling bio risks: outcomes of a survey
This is some of the finest writing I’ve seen on AI alignment which both (a) covers technical content, and (b) is accessible to a non-technical audience.
I particularly liked the fact that the content was opinionated; I think it’s easier to engage with content when the author takes a stance rather than just hedging their bets throughout.
David Moss and I recently conducted a study with about 500 participants looking at the extent to which people place moral weight on the far future.
The study found that older people give much less moral weight to the future.
The study included the following questions:
Is it better to save (A) 1 person now or (B) 1/2/1,000/1,000,000 people 500 years from now? (These are four separate questions, asked one after the other, with a different number of people in option (B); the sketch after this list of questions shows the discount rate that indifference at each number would imply.)
How far do you disagree or agree (on a 7-point scale) that:
“Future generations of people, who have not been born yet, are equal in moral importance to people who are already alive”
“We should morally prioritise helping people who are in need now, relative to those who have yet to be born”
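For intuition, here is a back-of-the-envelope sketch (mine, not part of the study) of the constant annual discount rate a respondent would be implying if they were exactly indifferent on one of these trade-offs. The 500-year horizon and the option (B) population sizes come straight from the questions above; everything else is an illustrative assumption.

```python
# Sketch: the constant annual discount rate implied by indifference
# between saving 1 person now and N people 500 years from now.
# Solving (1 + r)**YEARS = N for r gives r = N**(1/YEARS) - 1.
# Illustrative only; this calculation is not from the study itself.

YEARS = 500

for n in (2, 1_000, 1_000_000):  # option (B) sizes; N=1 trivially implies r = 0
    r = n ** (1 / YEARS) - 1
    print(f"N = {n:>9,}: implied discount rate ~ {r:.3%} per year")

# N =         2: implied discount rate ~ 0.139% per year
# N =     1,000: implied discount rate ~ 1.392% per year
# N = 1,000,000: implied discount rate ~ 2.803% per year
```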
[Question] How many people are neartermist and have high P(doom)?
There was a period around 2016-18 when I took this idea very seriously.
This led to probably around 1 year’s worth of effort spent on seeking funds from sources who didn’t understand why tackling EA issues was so important. This was mostly a waste of my time and theirs.
The formula isn’t just:
Impact of taking money from a high-impact funder = impact you achieve minus the impact of what the funder would otherwise have funded
Instead it’s:
Impact of taking money from a high-impact funder = impact you achieve minus the impact of what the funder would otherwise have funded, plus the extra work you get done by not having to spend time seeking funding
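To make the difference concrete, here’s a minimal sketch of that formula; the function name and the impact numbers are my own illustrative assumptions, not real estimates.

```python
# A sketch of the formula above, using made-up units of impact.

def impact_of_taking_funds(impact_you_achieve,
                           impact_of_funders_alternative,
                           impact_of_time_not_spent_fundraising):
    """Net impact of taking money from a high-impact funder:
    what you achieve, minus what the funder would have achieved by
    funding something else, plus the extra work you get done because
    you aren't spending that time seeking funding elsewhere."""
    return (impact_you_achieve
            - impact_of_funders_alternative
            + impact_of_time_not_spent_fundraising)

# The naive version of the formula ignores the fundraising-time term:
print(impact_of_taking_funds(100, 90, 0))   # 10: looks barely worth it
# Adding the extra work you get done by not fundraising can change the answer:
print(impact_of_taking_funds(100, 90, 25))  # 35: clearly worth it
```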
In a post this long, most people are probably going to find at least one thing they don’t like. I’m trying to approach this post as constructively as I can, i.e. “what do I find helpful here?” rather than “how can I most effectively poke holes in this?” I think there’s enough merit in the post that the constructive approach will likely yield something positive for most people as well.