Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.
One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.
Some highlights:
I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.
On the science side, I would be enthusiastic about seeing more work on, e.g., models of catastrophic biorisk infection, macroeconomic analyses of how artificial intelligence might affect society, and expansions of IPCC models that include permafrost methane release feedback loops.
On the humanities side, I would want to see, for example, more work on historical, psychological, and anthropological evidence about long-term effects, and on successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys of how existential risks are perceived, and of what the citizens of the world believe we ought to do about them.
An easier initial step here may be to specify dystopias that most value theories would say we should avoid, rather […]
I like this practical approach. I think this is probably enough to pin down bad outcomes we can use to guide policy, and I would be enthusiastic about seeing more perspectives in the field.
Furthermore, EV and decision theories more widely are affected by Pascal’s Mugging as well as what has been called fanaticism. We know of no pragmatic and consistent response to those challenges yet.
I do not either. One key thing brought forward in the paper, and one of my takeaway lessons, is that the study of existential risks and longtermism ought to focus on issues which are “ethically robust”—that is, issues that are plausible priorities under many worldviews.
Now, I do believe that most of the current focus areas of these fields (including AI risk, biorisk, nuclear war, climate change and others) pass this litmus test. But it is something to be mindful of when engaging with work in the area. For example, I believe that arguments in favour of destroying the Earth to prevent S-risks would currently fail this test.
Do those who study the future of humanity have good grounds to ignore the visions, desires, and values of the very people whose future they are trying to protect? Choosing which risks to take must be a democratic endeavour.
I do broadly agree. We need to ensure that longtermist policymaking, with its consequences, is properly explained to the electorate, and that the electorate is empowered to collectively decide which risks to take and which ones to ignore.
EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes.
These suggestions strike me as reasonable. I’ve bolded the ones that currently seem most actionable to me.
There are some issues with this paper that made me a bit uneasy.
I am going to highlight some examples.
First, the focus on the “techno-utopian approach” (TUA), space expansionism, total utilitarianism, etc. seems undue. I do agree that most authors nowadays seem to endorse versions of this cluster of beliefs. This is no doubt a symptom of a lack of diversity. Yet I think that most modern work on longtermism does not rely on these assumptions.
You have identified, as a key example of a consequence of the TUA, that proposals to stop AI development are scarce, while there are, e.g., some proposals to stop biorisks. For what it’s worth, I have regularly seen proposals in the community to stop and regulate AI development.
Making a full case against this framing would take a lot of energy to ensure I was correctly representing the authors, so I am afraid I will drop this thread. I think Avital’s response to Torres captures a big part of what I would bring up.
Second, it hurts my soul that you make a (good) case against giving undue importance to info hazards, yet criticize Bostrom for talking about pro tanto reasons in favour of totalitarianism (which, to be clear, I am against, all things considered; but that should not prevent us from discussing pro tanto reasons).
Third, I think the paper correctly argues that some foundational terms in the field (“existential risk”, “existential hazard”, “extinction risk”, etc.) are insufficiently defined. Indeed, it was hard to follow this very article because of the confused vocabulary the field has come to use. However, I am unconvinced that pinning down a terminology is critical to solving the key problems in the field. I expect others will disagree with this assessment.
Again, thank you for writing the paper and for engaging. Making criticisms of EA easier to raise and more visible is vital for the health of the community. I was pleased to see that your previous criticism was well received, at least in terms of forum upvotes, and I hope this piece will be similarly impactful and change trends in Effective Altruism for the better.
My biggest takeaways from this paper:
We need to work towards an “ethically robust” field of existential risk and longtermism. That is, one that focuses on avoiding dystopias according to most commonly held worldviews.
The current state of affairs is that we are nowhere near having enough cognitive diversity in the field to cover all mainstream perspectives. This is exacerbated by the lack of feedback loops between the world at large and scholars working on existential risk.
“I have regularly seen proposals in the community to stop and regulate AI development”—Are there any public ones you can signpost to or are these all private proposals?