Indeed Lukas, I guess what I’m saying is: given what I know about EA, I would not entrust it with the ring.
CarlaZoeC
OK, an incomplete and quick response to the comments below (sorry for typos). Thanks to the kind person who alerted me to this discussion going on (I still don’t spend my time on your forum, so please do just PM me if you think I should respond to something)
1.
- regarding blaming Will or benefitting from the media attention
- I don’t think Will is at fault alone, that would be ridiculous. I do think it would have been easy for him to make sure something was done, if only because he can delegate more easily than others (see below)
- my tweets are a reaction to his tweets in which he says he believes he was wrong to deprioritise measures
- given that he only says this after FTX collapsed, I’m saying it’s annoying that this had to happen before people think that institutional incentive-setting needs to be further prioritised
- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position
2.
- I’m rather sick of hearing from EAs that I’m arguing in bad faith. If I wanted to play nasty it wouldn’t be hard (for anyone) to find attack lines, e.g. I have not spoken about my experience of sexual misconduct in EA, and I continue to refuse to name names in respect to specific actions I criticise or continue to get passed information about, because I want to make sure the debate is not about individuals but about incentives/structures
- a note on me exploiting the moment of FTX to get media attention
- really?
- please join me in speaking with the public or with journalists; you’ll see it’s no fun at all doing it. I have a lot of things I’d rather be doing. Many people will be able to confirm that I’ve tried to convince them to speak out too but I failed, likely because
- it’s pretty risky, because you end up having rather little control over how your quotes will be used, so you just hope to work with someone who cares, but every journalist has a pre-conception of course. It’s also pretty time-consuming with very little impact, and then you have to deal with forum debates like this one. But hey, if anyone wants to join me: I encourage anyone who wants to speak to the press to message me and I’ll put you in touch.
- the reason I do it is because I think EA will ‘work’, just not in the way that many good people in it intend it to work
3.
- I indeed agree that these measures are not ‘proven’ to be good because of FTX
- I think they were a good idea before FTX and they continue to be good ideas
- they are not ‘my’ ideas; they are absolutely standard measures against misconduct in big bureaucracies
- I don’t want anyone to ‘implement my recommendations’ just because they’re apparently mine (they are not). They are a far bigger project than a single person should handle, and my hope was that the EA community would be full of people who’d maybe take them as inspiration and do something with them in their local context; it would then be their implementation.
- I like the responses I had on Twitter saying that FTX was in fact the first to do re-granting
- I agree and I thought that was great!
- in fact they were interested in funding a bunch of projects I care a lot about, including a whole section on ‘epistemics’! I’m not sure it was done for the right reasons (maybe the incentive to spend money fast was also at play), and the re-granting was done without any academic rigor, data collection or metrics about how well it works (as far as I know), but I was still happy to see it
- I don’t see how this invalidates the claim that re-granting is a good idea though
4.
- those who only want to know whether my recommendations would have prevented this specific debacle are missing the point. Someone may have blown the whistle, some transparency may have helped raise alarms, fewer people may have accepted the money, distributed funding may have meant more risk-averse people would have had a say about whether or not to accept the money. Risk reduction is about reduction, not bringing it down to 0. So, do those measures, depending on how they’re set up, reduce risk? Yes, I can see how they would. E.g. is it true that there were messages on some Slack for leaders which warned against SBF, or is it true that several organisations decided (but don’t disclose why) against taking FTX funding (https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity)? I don’t know enough about the people involved to say what each would have needed to be incentivised to be more public about their concerns. But do you not think it would have been useful knowledge to have available, e.g. for those EA members who got individual grants and made plans with those grants?
even if institutional measures would not have prevented the FTX case, they are likely to catch a whole host of other risks in the future.
5.
- The big mistake that I am making is to not be an EA but to comment on EA. It makes me vulnerable to the attack of “your propositions are not concrete enough to fix our problems, so you must be doing it to get attention?” I am not here trying to fix your problems.
- I actually do think that outsiders are permitted to ask you to fix problems, because your stated ambition is to do risk analysis for all of us, not just for effective altruism but, depending on what kind of EA you are, for a whole category of sentient beings, including categories as large as ‘humanity’ or ‘future beings’. That means that even if I don’t want to wear your brand, I can demand that you answer the questions of who gets to be in the positions that influence funding and why. And if it’s not transparent, why is it not transparent? Is there a good reason for why it is not transparent? If I am your moral patient, you should tell me why your current organisational structures are more solid and more epistemically trustworthy than alternative ones.
6.
- I don’t say anywhere that ‘every procedure ought to be fully democratised’ or ‘every organisation has to have its own whistleblower protection scheme’, do I?
- *clearly* these are broad arguments, geared towards starting a discussion across EA and within EA institutions, that need to be translated into concrete proposals, adjustments and assessments that meet each contextual need
- there’s no need to dismiss the question of what procedures actually lead to the best epistemic outcomes by arguing that ‘democratising everything’ would bring bureaucracy (of course it would and no one is arguing for that anyway)
- for all the analyses of my tweets, please also look at the top page of the list of recommendations for reforms; it says something like “clearly this needs to be more detailed to be relevant but I’ll only put in my free time if I have reason to believe it will be worth my time”. There was no interest by Will and his team to follow up with any of it, so I left it at that (I had sent another email after the meeting with some more concrete steps necessary to at least get data, do some prototyping and research to test some of my claims about decentralised funding, and in which I offered that I could provide advice and help out but that they should employ someone else to actually lead the project). Will said he was busy and would forward it to his team. I said ‘please reach out if you have any more questions’ and never heard from anyone again. It won’t be hard to come up with concrete experiments/ideas for a specific context/organisation/task/team, but I’m not sure why it would be productive for me to do that publicly rather than at the request of a specific organisation/team. If you’re an EA who cares about EA having those measures in place, please come up with those implementation details for your community yourself.
7.
- I’d be very happy to discuss details of actually implementing some of these proposals for particular contexts in which I believe it makes sense to try them. I’d be very happy to consult for organisations that are trying to make steps in those directions. I’d be very happy to engage with and see a theoretical discussion about the actual state of the research.
But none of the discussions that I’ve seen so far are actually at the level of detail that would match the forefront of the experimental data and scholarly work that I’ve seen. Do you think scholars of democratic theory have not yet thought about a response to the typical ‘but most people are stupid’? Everyone who dismisses decentralised reasoning as a viable and epistemically valuable approach should at least engage with the arguments by political scientists (I’ve cited a bunch in previous publications and on Twitter; here again, e.g. Landemore, Hong & Page are a good start) who spent years on these questions (i.e. not me), and then argue on their level to bring the debate forward if they still think they can.
8.
Jan, you seem particularly unhappy with me, reach out if you like, I’m happy to have a chat or answer some more questions.
The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about the initial drafting with Torres. At that stage, the paper was completely different and was a paper between Kemp and me. None of the critiques or conversations about it involved concerns about Torres, co-authoring with Torres or arguments by Torres, except in so far as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.
There was a very early draft by Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major re-writing of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don’t recall that we were forced to take Torres off the collaboration (I’m not sure who knew about the conversations about collaborations we had): we decided to part because we wanted to move the content and tone in a very different direction, because Torres had (to our surprise) unilaterally published major parts of the initial draft as a mini-book already, and because we thought that this collaboration was going to be very difficult. I recall video calls in which we discussed the matter with Torres, decided to take out sections that were initially supplied by Torres, and cite Torres’ mini-book wherever we deemed it necessary to refer to it. The degree to which the Democratising Risk paper is influenced by Torres is seen in our in-text citations: we don’t hide the fact that we find some of the arguments noteworthy! Torres agreed with those plans.
At the time it seemed to me that Torres and I were trying to achieve fundamentally different goals: I wanted to start a critical discussion within EA, and Torres was ready by that stage to inoculate others against EA and longtermism. It was clear to me that the tone and style of argumentation of the initial drafts had little chance of being taken seriously in EA. My own opinion is that many arguments made by Torres are not rigorous enough to sway me, but that they often contain an initial source of contention that is worth spending time developing further to see whether it has substance. Torres and I agree in so far as we surely both think there are several worthy critiques of EA and longtermism that should be considered, but I think we differ greatly in our credences in the plausibility of different critiques, in how we wanted to treat and present critiques, and in who we wanted to discuss them with.
The emotional and contextual embedding of an argument matters greatly to its perception. I thought EAs, like most people, were not protected from assessing arguments emotionally, and while I don’t follow EA dramas closely (someone also kindly alerted me to this one unfolding), by early 2021 I had gotten the memo that Torres had become an emotional signal for EAs to discount much of what the name was attached to. At the time I thought it would not do the arguments justice to let them be discounted because of an associated name that many in EA seem to have an emotional reaction against, and the question of reception did become one factor in why we thought it best not to continue the co-authorship with Torres. One can of course manage the perception of a paper via co-authorship, and we considered collaborating with respected EAs to give it more credibility, but we decided against name-dropping the people who invested in the piece via long conversations and commentary to boost it, just as we decided not to advertise that there are obvious overlaps with some of Torres’ critiques. There is nothing to hide in my view: one can read Torres’ work and Democratising Risk (and in fact many other people’s critiques) and see similarities; this should probably strengthen one’s belief that there’s something in that ballpark of arguments that many people feel we should take seriously?
Apart from the fact that it really is an entirely different paper (what you saw is version 26 or something, and I think about 30 people have commented on it; I’m not sure it’s meaningful to speak about V1 and V20 as being the same paper. And what you see is all there is: all the citations of Torres do indeed point to writing by Torres, but they are easily found and you’ll see that it is not a disproportionate influence), we did indeed hope to avoid the exact scenario we find ourselves in now! The paper is at risk of being evaluated in light of any connection to Torres rather than on its own terms, and my trustworthiness in reporting on EAs’ treatment of critiques is being questioned because I cared about the presentation and reception of the arguments in this paper? A huge amount of work went into adjusting the tone of the paper to EAs (irrespective of Torres, this was a point of contention between Luke and me too), to ensure the arguments would get a fair hearing, and we had to balance this against non-EA outsiders who thought we were not forceful enough.
I think we succeeded in this balance, since both sides still tell us we didn’t do quite enough (the tone still seems harsh to EAs and too timid to outsiders), but both EAs and outsiders do engage with the paper and the arguments, and I do think it is true that there is a greater awareness about (self-)censorship risk and critiques being valuable. Having published, EAs have so far been kind towards me. This is great! I do hope it’ll stay this way. Contrary to popular belief, it’s not sexy to be seen as the critic. It doesn’t feel great to be told a paper will damage an institution, to have others insinuate that I plug my own papers under pseudonyms in forum comments or that I had malicious intentions in being open about the experience, and it’s annoying to be placed into boxes with other authors who you might strongly disagree with. While I understand that those who don’t know me must take any piece of evidence they can get to evaluate the trustworthiness of my claims, I find it a little concerning that anyone should be willing to infer and evaluate character from minor interactions. Shouldn’t we rather say: given that we can’t fully verify her experience, can we think about why such an experience would be bad for the project of EA and what safeguards we have in place so that those experiences don’t happen? My hope was that I could serve as a positive example to others who feel the need to voice whatever opinion (“see, it’s not so bad!”), so I thank anyone on here who is trying to ease the exhaustion that inevitably comes with navigating criticism in a community. The experience so far has made me think that EAs care very much that all arguments (including those they disagree with) are heard. Even if you don’t think I’m trustworthy and earnest in my concerns, please do continue to keep the benefit of the doubt in mind towards your perceived critics. I think we all agree they are valuable to have among us, and if you care about EA, do keep the process of assessing trustworthiness amicable, if not for me then for future critics who do a better job than I.
I agree this is clearly a terrible argument, and I’d hope my proposition for distributed decision-making would never be dragged into such an argumentative mess. Throwaway151, I’m happy to have a call to discuss the many doubts and questions you have.
Torres did indeed provide comments on a draft, as did many others; we were very liberal in sharing it before it went out. I would have to dig deep to know whether we accepted Torres’ comments on any later drafts, but I’m very sure there was no major rewriting in response to Torres’ comments, and we certainly saw no responsibility to do so: commentary is not authorship.
Hi Lukas, I’m sorry I didn’t get back to you; I think that should be considered bad form. Tbh I cannot recall why I didn’t, I just remember having been on many calls about this (realising this approach wasn’t scalable) and simply wanting to take a break from this paper after many months of it taking emotional effort (and I am indeed rarely on FB and must have forgotten to reply). I would have hoped for you to ping me via email if it was important to you! I’m still happy to have a call to answer your questions.
Jumping in here briefly because someone alerted me to this post mentioning my name: I did not comment, I was not even aware of your forum post, John (sorry, I don’t tend to read the EA forum). I don’t tend to advertise previous works of mine in other people’s comment sections, and if I’d comment anywhere it would certainly be under my own name.
Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding.
Thanks for stating this publicly here, Will!
Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case then most of the key texts in x-risk would all be poorly argued.
Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis.
Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.
As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed. We do not argue against the TUA, but highlight assumptions that may be incorrect or smuggle in values. Interestingly, it’s hard to see how you can believe the piece is both a polemic and also not directly critiquing the TUA sufficiently. Those two criticisms are in tension.
If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research).
You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?
Of course there are arguments for it, some of which are discussed in the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.
I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.
Then it wouldn’t be a critique of the TUA. It would be a piece on differential tech development or hazard-centrism.
This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He both said that we should zoom in and focus on one section and said that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and only share the commonality of making sure we write a paper that isn’t a critique.
We see many remaining problems in x-risk. This paper is an attempt to list those issues and point out their weaknesses and areas for improvement. It should be read similarly to a research agenda. The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential tech development etc. We highlight that the existing evidence basis and arguments for them are weak.
We do not position ourselves in many cases, because we believe they require further detailed work and deliberation. In that sense I agree with you that we’re covering too much, but only if the goal was to present clear positions on all these points. Since this was not the goal, I think it’s fine to list many remaining questions and point out that they indeed are still questions that require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let’s get on with the science.
Seán also called the paper a polemic several times (by definition: a strong, hostile, critical written attack). This is not necessarily an insult (Orwell’s Animal Farm is considered a polemic against totalitarianism), but I’m guessing it’s not meant in that way.
We are somewhat disappointed that one of the most upvoted responses on the forum to our piece is so vague and unhelpful. We would expect a community that has such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.
Finally, the ‘speaking abstractly’ about funding. It is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawing funding. Again, this does not signal anything positive about the epistemics, or just sheer civility, of the community.
fwiw I was not offended at all.
This is a great comment, thank you for writing it. I agree—I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite.
I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening.
I would caution against centering the discussion only around the question of whether or not OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over the funding and research positions of more junior researchers in x-risk, thought this was the case and acted on that assumption. I think it may well be the case that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?
To clarify: all the reviewers we approached gave critical feedback, and we incorporated it and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was ‘lacking in rigor’ or that it wasn’t ‘loving enough’, were EAs who were emotionally affected by reading the paper. My point here is that in an idealised, objective review process it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it’s not surprising that a mixture of power, community and research can produce biased scholarship.
Very happy to have a private chat Aaron!
It doesn’t matter whether Nick Bostrom speculates about or wants to implement surveillance globally. With respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
There’s some hedging in the article but…
He published in a policy journal, with an opening ‘policy implication’ box
He published an outreach article about it in Aeon, which also ends with the sentence: “If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
In public facing interviews such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment.
The VWH was also published as a German book (why I don’t know…)
Very happy to have a private chat and tell you about our experience then.
Here’s a Q&A which answers some of the questions raised by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here.)
“Do you not think we should work on x-risk?”
Of course we should work on x-risk
“Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?”
No. It’s not really them we’re criticising, if at all. Everyone should be allowed to put out their ideas.
But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.
“Do you hate longtermism?”
No. We are both longtermists (probs just not the techno-utopian kind).
“You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option”
It doesn’t matter whether Nick Bostrom speculates about or wants to implement surveillance globally. With respect to what we talk about (justification of extreme actions), what matters is how readers perceive his work and who the readers are.
There’s some hedging in the article but…
He published in a policy journal, with an opening ‘policy implication’ box
He published an outreach article about it in Aeon, which also ends with the sentence: “If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
In public facing interviews such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment.
The VWH was also published as a German book (why I don’t know…)
Seriously if we’re not allowed to criticise those choices, what are we allowed to criticise?
“Do you think longtermism is by nature techno-utopian?”
In theory, no. Intergenerational justice is an old idea. Clearly there are versions of longtermism that do not have to rely on the current set of assumptions. Longtermist thinking is a good idea.
In practice, most longtermists tend to operate firmly under the TUA. This is seen in the visions they present of the future, the value placed on continued technological and economic growth, etc.
“Who is your target audience?”
Junior researchers who want to do something new and exciting in x-risk, and
External academics who have thus far felt repelled by the TUA framing of the x-risk and might want to come into the field and bring in their own perspective
Anyone who really loves the TUA and wants to expose themselves to a different view
Anyone who doubted the existing approaches but could not quite put a finger on why
Our audience is not: philosophers working on x-risk who are thinking about these issues day and night and who are well aware of some of the problems we raise.
“Do you think we should abandon the TUA entirely?”
No. Especially those who feel personally compelled to work on the TUA, or who have built expertise in it, are obviously free to work on it.
We just shouldn’t pressure everyone else to do that too.
“Why didn’t you cite paper X?”
Sorry, we probably missed it. We’re covering an enormous amount in this paper.
“Why didn’t you cite blogpost X? ”
We constrained our lit search to papers that have the ambition to get through academic peer review. We also don’t read that many blog posts. That said, we appreciate that some people have raised similar concerns to ours on Twitter and on blogs. We don’t think this renders a more formal listing of the concerns useless.
“You claim we need to solve problem X, but Y has already written a paper on X!”
Great! Then we support Y having written that paper! We invite more people to do what Y did. Do you think this was enough and the problem is now solved? Do you think there are no valuable alternative papers to be written, such that it’s ridiculous to have said we need more work on X?
“Why is your language so harsh? Or: Your language should have been more harsh!”
Believe it or not we got both perspectives—for some people the paper is beating around the bush too much, for others it feels like a hostile attack. We could not please them all.
Maybe ask yourself what makes you as a reader fall into one of these categories?
The paper never spoke about getting rid of experts or replacing experts with citizens. So no.
Many countries now run citizen assemblies on climate change, which I’m sure you’re aware of. They do not aim to replace the role of IPCC.
EA or the field of existential risk cannot be equated with the IPCC.
To your second point: no, this does not follow at all. Democracy as a procedure is not to be equated with (and thus limited to) governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the paper’s last section, which focuses on democracy.
Democratising Risk—or how EA deals with critics
That sounds cool. Happy to see that some of this work is going on, and glad to hear that you’re specifically thinking about tail-risk climate change too. Looking at fungi as a food source is obviously only one of the dimensions of use I describe as relevant here, and in ALLFED’s case, cost of production is surely only one relevant dimension from a longtermist perspective. In general, I’m happy to see that some of your interventions do seem to consider fixing existing vulnerabilities as much as treating the symptoms of a catastrophe. I’ll go through the report you have online (2019 is the most recent one?) to check who you’re already in contact with and whether I can recommend any other experts it might be useful for you to reach out to.
On a separate note, and because it’s not in the Q&A on your website: are you indeed fully funded by EA orgs (BERI, EA Lottery, as per the report)? I found it surprising that, given your admirable attempts to connect with the relevant ecosystem of organisations, you would not have funding from other sources. Is this because you didn’t try, or because it seems no one except EAs wants to grant money for the work you’re trying to do?
Thank you for taking the time to write this up, it is encouraging—I also had never thought to check my karma …