Here’s a Q&A which answers some of the questions raised by reviewers of early drafts. (I had planned to post it quickly, but your comments came in so soon! Some of those comments hopefully find a reply here.)
“Do you not think we should work on x-risk?”
Of course we should work on x-risk.
“Do you think the authors you critique have prevented alternative frameworks from being applied to x-risk?”
No. It’s not really them we’re criticising, if anyone at all. Everyone should be allowed to put out their ideas.
But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.
“Do you hate longtermism?”
No. We are both longtermists (probs just not the techno utopian kind).
“You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option”
It doesn’t matter whether Nick Bostrom merely speculates about global surveillance or actually wants to implement it. With respect to what we discuss (the justification of extreme actions), what matters is how readers perceive his work and who those readers are.
There’s some hedging in the article but…
He published in a policy journal, with an opening ‘policy implication’ box.
He published an outreach article about it in Aeon, which also ends with the sentence: “If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
In public-facing interviews, such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. It was not framed as one hypothetical, possible, future solution to a philosophical thought experiment.
The VWH was also published as a German book (why, I don’t know…).
Seriously, if we’re not allowed to criticise those choices, what are we allowed to criticise?
“Do you think longtermism is by nature techno-utopian?”
In theory, no. Intergenerational justice is an old idea. Clearly there are versions of longtermism that do not have to rely on the current set of assumptions. Longtermist thinking is a good idea.
In practice, most longtermists tend to operate firmly under the TUA. This is seen in the visions they present of the future, the value placed on continued technological and economic growth, etc.
“Who is your target audience?”
Junior researchers who want to do something new and exciting in x-risk and
External academics who have thus far felt repelled by the TUA framing of x-risk and might want to come into the field and bring their own perspective
Anyone who really loves the TUA and wants to expose themselves to a different view
Anyone who doubted the existing approaches but could not quite put a finger on why
Our audience is not: philosophers working on x-risk who are thinking about these issues day and night and who are well aware of some of the problems we raise.
“Do you think we should abandon the TUA entirely?”
No. Those who feel personally compelled to work on the TUA, or who have built expertise in it, are obviously free to continue working on it.
We just shouldn’t pressure everyone else to do that too.
“Why didn’t you cite paper X?”
Sorry, we probably missed it. We’re covering an enormous amount in this paper.
“Why didn’t you cite blogpost X? ”
We constrained our literature search to papers that have the ambition to get through academic peer review. We also don’t read as many blog posts. That said, we appreciate that some people have raised concerns similar to ours on Twitter and on blogs. We don’t think this renders a more formal listing of the concerns useless.
“You argue we need to solve problem X, but Y has already written a paper on X!”
Great! Then we support Y having written that paper! We invite more people to do what Y did. Do you think this was enough and the problem is now solved? Do you think there are no valuable alternative papers left to be written, such that it’s ridiculous for us to have said we need more work on X?
“Why is your language so harsh? Or: Your language should have been more harsh!”
Believe it or not, we got both perspectives: for some people the paper beats around the bush too much, for others it feels like a hostile attack. We could not please them all.
Maybe ask yourself what makes you as a reader fall into one of these categories?
It’s been interesting to re-read the discussion of this post in light of the new knowledge that Emile P Torres was originally a co-author. For example, Cremer instructs reviewers to ask why they might have felt like the paper was a hostile attack. Well, I can certainly see why readers could have had this perception if they read it after Emile had already started publicly insinuating that various longtermists are sympathetic to white supremacy or are plagiarists.
Cremer also says some reviewers asked, “Do you hate longtermism?”
The answer she gives above is “No. We are both longtermists (probs just not the techno utopian kind)”, but it seems like the answer would in fact have been “Two of us do not, but one of the authors does hate longtermism and has publicly called it incredibly dangerous.”
With regard to harshness, I think part of the reason you get different responses is because you’re writing in the genre of the academic paper. Since authors have to write in a particular formal style, it’s ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it’s not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.
For example:
Under the TUA, an existential risk is understood as one with the potential to cause human extinction directly or lead us to fail to reach our future potential, expected value, or technological maturity. This means that what is classified as a prioritised “risk” depends on a threat model that involves considerable speculation about the mechanisms which can result in the death of all humans, their respective likelihoods, and a speculative and morally loaded assessment of what might constitute our inability to reach our potential.
[...]
A risk perception that depends so strongly on speculation and yet-to-be-verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination. If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.
As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it’s easy to read into it some amount of value judgment around longtermism and longtermists.
Just noting that I strongly endorse both this format for responding to questions, and the specific responses.