On this particular point:
'message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public'
I can’t find this info on Rethink’s site; is there anything you can link to?
Of the three best-performing messages you’ve linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase ‘ensure a good future’ is a large part of what resonates.
All that said, more info on the tests they ran would obviously update my position.
So people actually seem to quite like messages about unspecified, and not necessarily high-probability, threats to the (albeit nearer-term) future.
This seems correct to me, and I would be excited to see more such messages. However, I wouldn’t interpret this as meaning ‘longtermism and existential risk have similarly-good reactions from the educated general public’; I would read it as risk messaging performing better.
Also, messages ‘about unspecified, and not necessarily high-probability threats’ is not how I would characterize most of the EA-related press I’ve seen recently (NYTimes, BBC, Time, Vox).
(More generally, I mostly see journalists trying to convince their readers that an issue is important by using negative emphasis. Questioning existing practices is important: they might be ineffective, or they might be unsuitable to EA aims (e.g. manipulative, insufficiently truth-seeking, or geared to persuading as many people as possible, which isn’t EA’s objective). But this strategy has enough buy-in in high-stakes, high-interest situations (e.g. US presidential elections) that I think it would be valuable to be clear on when EA deviates from it and why.)
tl;dr: I suspect risk-ish messaging works better. Journalists seem to have a strong preference for it. Most of the EA messaging I’ve seen recently departs from this, and I think it would be great to be very clear on why. I’m aware I’m missing a lot of data, so I’d love to see the data from Rethink that you referenced. Thanks!
Thanks for taking the time to write this up. I have a few reactions after reading it:
EA as a consequence of capitalism
I just want to call out that this in itself isn’t a valid criticism of EA, any more than it would be a valid criticism of the social movements that you favour. But I suspect you agree with this, so let’s move on.
EA as a form of capitalism
I think you’ve made a category error here. I hear your comment that ‘critical theorists have long viewed capital in extra-monetary terms’, but whatever resource we’re talking about, the kind of capitalist system you’re describing is one in which people try to grab as much of that resource as possible for themselves. That’s what the ‘maximization’ is all about.
EA is not trying to use time, money, and labour to maximally hoard resources; it’s using them to try to maximally improve the long-term future/alleviate suffering/avoid extinction risks/etc.
I would expect any social movement you care about to be doing something similar with regard to its own goals. I do hear that you have concerns about focusing too hard on efficiency/optimization, but I don’t agree that this is the property of capitalism that causes harm; rather, it’s capitalism’s lack of a means to incentivise optimizing for public (vs private) goods.
EA as a facilitator of capitalism
I would really like concrete examples if you’re going to make this argument. My impression is that people tend to make this case without providing any, and as a result I’m highly sceptical of the claim.
Can you show me a couple of case studies where an EA-backed aid program plausibly thwarted meaningful political change that would otherwise have occurred in some area of the world? Without that, I don’t think we can have a productive conversation on this point.
I accept that aid is sometimes used to disingenuously manipulate public opinion, but I do not think the correct response to this is to stop trying to help people! (I think this would be true even if most aid were given in bad faith.)
I also think the idea that EA funds help bad actors to better disingenuously manipulate public opinion doesn’t make sense. Most of the public would consider EA funds a pretty weird place to put money, and even if some bad actor could claim they’d saved 100x as many lives with EA’s help, our cognitive biases around large numbers mean this probably wouldn’t play significantly better in PR terms. The extra 99x lives saved, however, would remain saved.
Finally, I am strongly against any line of thinking that implies we should deliberately be more neglectful so people in need get angry and revolt, ultimately making things better in the long run. You don’t go this far in your piece, but I think you get pretty close.
There are many reasons I think this kind of idea is wrongheaded, but for a start I think it’s disrespectful to those who are suffering to act like they somehow need to be ‘prodded’ into realizing things could work better than they do, and doubly so to try to do this by deliberately abstaining from helping them when it’s within your ability to do so. I hope we can agree on that!
Foucault, Critical Theory, etc.
I actually spent some time at university in the post-Kantian philosophy space. There was a point when I really liked it, but now I find it problematically navel-gazey.
For example, claiming a worldview that values ‘joy, community, and non-human life’ would somehow ‘de-reify’ scarcity as something that actually exists in the world seems completely unhelpful to me. Scarcity pretty straightforwardly predates capitalism and feudalism (see starvation), and I think having a joy- and community-based value system comes nowhere close to building the structures that will let us avoid it.
That said, ‘EA-admissible data therefore only captures a small fraction of total ideas’ is correct, and I tend to agree with you that EA should act a lot more like it (post here). I just encourage you not to push this to the point where you’re making statements like ‘objective evidence doesn’t exist’. Even if such statements are in some sense true, they are totally impractical, and so are (rightly, I think) off-putting to most people.
I think you mostly bring critical theory up in the context of deciding what evidence to act on. For all their flaws, including, as you say, inevitably falling short of being fully inclusive, focused and quantitative approaches have yielded some pretty amazing results.
What pushed me over into the EA consensus here is Philip Tetlock’s Superforecasting. There’s something about being able to consistently predict the future better than everyone else that I find pretty convincing, and Tetlock’s own background is in the social sciences rather than STEM. I really recommend reading it.
I agree it is very important that people get to work on projects in areas without pre-researched interventions or randomized controlled trials they can use to argue their ideas will work, because I think your observations about diversity and bias in who gets those things funded are correct. I just don’t think there’s anything wrong with an ecosystem that decides to focus on areas where the RCTs already exist.
Finally, I care about a method of change’s track record, and I’m not particularly convinced by critical theory’s record in this area (you might want to look into Martin Heidegger’s politics). I want to take its insight that listening to diverse viewpoints is atypical and underleveraged, use that to update very hard on the views of people with different experiences and backgrounds to my own, and then get to work.
In the volunteering that I have done, no other part of the critical theory you cite throughout your piece has proven particularly useful. It’s worth noting that unions and social movements predate critical theory!
General comment on the idea that EA is opposed to social change
EA is not opposed to social movements. I donate to the Sunrise Movement on the recommendation of https://www.givinggreen.earth/, and there is also https://www.socialchangelab.org/.
I regularly hear criticism along the lines of ‘EA by its very structure cannot question the dynamics of power, it can only work within the existing political system’, and I think this is straightforwardly false.
Political system change certainly isn’t a focus of EA from what I’ve seen, but that is mostly because EA folks tend to like numbers and statistics, which can’t be leveraged in quite such interesting ways when working with grassroots organizations. To be fair, the typically elite background of EAs probably also makes grassroots organizing unappealing on some aesthetic level, which seems more problematic.
That said, this says something about the personal preferences of the EA community, but it does not render EA opposed to other communities doing grassroots work. In specific cases where EA gets in the way of another community, the two should of course communicate and try to resolve the issue, but generally I think the best solution is pretty clearly to live and let live.
Some people like doing good with statistics, some people like doing good with organizing, those preferences lend themselves to different cause areas, and I am very grateful to both groups of people.
Tropical rainforest example
I want to hear more about this, as based on what you’ve written it sounds like a great cause to prioritize. (I acknowledge you’re worried that EA cause-prioritizing the Amazon would lead to commodifying the Amazon, but hopefully I’ve explained above why I disagree with that.)