This is cool, thanks for posting :) How do you think this generalises to a situation where labor is the key resource rather than money?
I’m a bit more interested in the question ‘how much longtermist labor should be directed towards capacity-building vs. “direct” work (e.g. technical AIS research)?’ than the question ‘how much longtermist money should be directed towards spending now vs. investing to save later?’
I think this is mainly because longtermism, x-risk, and AIS seem to be bumping up against the labor constraint much more than the money constraint. (Or put another way, I think OpenPhil doesn’t pick their savings rate based on their timelines, but based on whether they can find good projects. As individuals, our resource allocation problem is to either try to give OpenPhil marginally better direct projects to fund or marginally better capacity-building projects to fund.)
[Also aware that you were just building this model to test whether the claim about AI timelines affecting the savings rate makes sense, and you weren’t trying to capture labor-related dynamics.]
Also this: https://longtermrisk.org/the-future-of-growth-near-zero-growth-rates/
This seems relevant: https://www.overcomingbias.com/2009/09/limits-to-growth.html
I haven’t read it, but the title of this paper from Andreas at GPI at least fits what you’re asking: “Staking our future: deontic long-termism and the non-identity problem”
Is The YouTube Algorithm Radicalizing You? It’s Complicated.
Recently, there’s been significant interest among the EA community in investigating short-term social and political risks of AI systems. I’d like to recommend this video (and Jordan Harrod’s channel as a whole) as a starting point for understanding the empirical evidence on these issues.
I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.
But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.
I’m not sure I understand your distinction – are you saying that while it would be objectionable to conclude that saving lives in rich countries is “substantially more important”, it is not objectionable to merely present an argument in favour of this conclusion?
Yep that is what I’m saying. I think I don’t agree but thanks for explaining :)
Can you say a bit more about why the quote is objectionable? I can see why the conclusion ‘saving a life in a rich country is substantially more important than saving a life in a poor country’ would be objectionable. But it seems Beckstead is saying something more like ‘here is an argument for saving lives in rich countries being relatively more important than saving lives in poor countries’ (because he says ‘other things being equal’).
There are also more applied AI/tech-focused economics questions that seem important for longtermists (e.g. if GPI stuff seems too abstract for you)
Agree with Marisa that you’d be well suited to do an AMA
Also not CS and you may already know it: this EAG talk is about wild animal welfare research using economics techniques. Both authors of the paper discussed are economists, not biologists.
Thanks for your comment, it makes a good point. My comment was hastily written and I think the argument of mine that you’re referring to is weak, but not as weak as you suggest.
At some points the author is specifically critiquing longtermism the philosophy (not what actual longtermists think and do) eg. when talking about genocide. It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it’d be better if the switch was made clear.
There are many longtermists that don’t hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn’t think we’re at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).
I’m also not sure that lots of longtermists (even of the Bostrom/hinge-of-history type) would agree that the quoted claim accurately represents their views:
our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.”
But, I do agree that some longtermists do think
there are likely to be very transformative events soon eg. within 50 years
in the long run, if they go well, these events will massively improve the human condition
And there’s some criticisms you can make of that kind of ideology that are similar to the criticisms the author makes.
from ‘Things CEA is not doing’ forum post https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing
We are not actively focusing on:
Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)
I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.
In general, I think criticisms of longtermism from people who ‘get’ longtermism are incredibly valuable to longtermists.
One reason is that if the criticisms carry entirely, you’ll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn’t have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas so as not to put off potential sympathisers.
In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism.
Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!
I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else.
The thesis of the book (for people reading this comment, and to check my understanding)
“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”
“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”
Utilitarianism (Edit: I think Tyle has added a better reading of this section below)
This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermism. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism.
In particular, there seems to be a sense of derision at any philosophy where ‘the ends justify the means’. I didn’t really feel like this was argued for (please correct me if I’m wrong!)
I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that longtermism overweights consequences in the longterm future compared to other consequences, but is right to focus on consequences generally
I would have preferred if these parts of the book were clear about exactly what the argument was
I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)
“A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (pg.24 of the book)
Longtermism does not say our current world is replete with suffering and death
Longtermism does not say the world will be transformed soon
Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
Therefore, longtermism does not meet the stated definition of a millennialist movement
Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism
Some things are bigger than other things
That doesn’t mean that the smaller things aren’t bad or good or important- they are just smaller than the bigger things
If you can make a good big thing happen or make a good small thing happen you can make more good by making the big thing happen
That doesn’t mean the small thing is not important, but it is smaller than the big thing
I feel confused
The book quotes this section from Beckstead’s Thesis:
Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
The book goes on to say:
In a phrase, they support white supremacist ideology. To be clear, I am using this term in a technical scholarly sense. It denotes actions or policies that reinforce “racial subordination and maintaining a normalized White privilege.” As the legal scholar Frances Lee Ansley wrote in 1997, the concept encompasses “a political, economic and cultural system in which whites overwhelmingly control power and material resources,” in which “conscious and unconscious ideas of white superiority and entitlement are widespread, and relations of white dominance and non-white subordination are daily reenacted across a broad array of institutions and social settings.”
On this definition, the claims of Mogensen and Beckstead are clearly white supremacist: African nations, for example, are poorer than Sweden, so according to the reasoning above we should transfer resources from the former to the latter. You can fill in the blanks. Furthermore, since these claims derive from the central tenets of Bostromian longtermism itself, the very same accusation applies to longtermism as well. Once again, our top four global priorities, according to Bostrom, must be to reduce existential risk, with the fifth being to minimize “astronomical waste” by colonizing space as soon as possible. Since poor people are the least well-positioned to achieve these aims, it makes perfect sense that longtermists should ignore them. Hence, the more longtermists there are, the worse we might expect the plight of the poor to become.
I’m pretty sure the book isn’t using ‘white supremacist’ in the normal sense of the phrase. For that reason, I’m confused about this, and would appreciate answers to these questions:
The Beckstead quote ends ‘other things being equal’. Doesn’t that imply that the claim is not ‘overall, it’s better to save lives in rich countries than poor countries’ but ‘here is an argument that pushes in favour of saving lives in rich countries over poor countries’?
Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
What if the poor people are white and the rich people are not white?
Why do rich-nation government health services not meet this definition of white supremacy?
I’d also have preferred if it was clear how this version of white supremacy interfaces with the normal usage of the phrase
Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)
The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would decrease existential risk
The book says that this does not avoid the issue though and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
It seems to me that this argument is basically saying that because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (or at least would be terrible from a common-sense perspective). Therefore, consequentialism is dangerous. I think I must be misunderstanding this argument as it seems obviously wrong as stated here. I would have preferred if the argument here was clearer
I’d be keen to hear your thoughts about the (small) field of AI forecasting and its trajectory. Feel free to say whatever’s easiest or most interesting. Here are some optional prompts:
Do you think the field is progressing ‘well’, however you define ‘well’?
What skills/types of people do you think AI forecasting needs?
What does progress look like in the field? Eg. does it mean producing a more detailed report, getting a narrower credible interval, getting better at making near-term AI predictions...(relatedly, how do we know if we’re making progress?)
Can you make any super rough predictions like ‘by this date I expect we’ll be this good at AI forecasting’?
Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits-based giving seems clearly related to Charity Entrepreneurship’s work. What other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I’m guessing the hinge of history hypothesis is irrelevant to your thinking?)
My guess is that few EAs care emotionally about cost-effectiveness and that they care emotionally about helping others a lot. Given limited resources, that means they have to be cost-effective. Imagine a mother with a limited supply of food to share between her children. She doesn’t care emotionally about rationing food, but she’ll pay a lot of attention to how best to do the rationing.
I do think there are things in the vicinity of careful reasoning/thinking clearly/having accurate beliefs that are core to many EAs identities. I think those can be developed naturally to some extent, and don’t seem like complete prerequisites to being an EA
Thanks for writing this and contributing to the conversation :)
Relatedly, an “efficient market for ideas” hypothesis would suggest that if movement building really was important, neglected, and tractable, then other more experienced and influential EAs would have already raised its salience.
I do think the salience of movement building has been raised elsewhere eg:
80,000 Hours do have a problem profile on it and consider it one of the most pressing problems to work on
The work around patient philanthropy has analogues to movement building (see Nuno Sempere’s in-progress paper extending this thinking to movement growth explicitly)
A bunch of other places eg. I really like this piece on movement growth
Having said that, I share the feeling that movement building seems underrated. Given how impactful it seems, I would expect more EAs to want to use their careers to work on movement building.
One resolution to this apparent conflict is that the fraction of people who can be good at movement building long-term might be smaller than it first seems. For lots of the interventions that you suggest, strong social skills and a strong understanding of EA concepts seem important, as well as some general executional or project management ability. Though movement builders don’t necessarily have to be excellent in any of these domains, they have to be at least pretty good at all of them. They also have to be interested enough in all of them to do movement building. This narrows down the pool of people who can work in movement building.
Another possible reason is that within the EA community movement building careers are generally seen as less prestigious than more ‘direct’ kinds of work and social incentives play a large role in career choice. For example, some people would be more impressed by someone doing technical AI safety research than by someone building talent pipelines into AI safety, even if the second one has more impact.
Also, as Aaron says, a lot of direct work has helpful movement building effects.
I also agree with Aaron that looking at funding is a bit complicated with movement building, partly because movement building is probably cheaper than other things, but also that it can be hard to tease apart what’s movement building and what’s not.