If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf.
Ah right. I suppose the unilateralist’s curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor then the curse doesn’t really apply. One wrinkle might be considering the unilateralist’s curse with regard to different actors through time (i.e., erring on the side of caution with the expectation that other actors in the future will gain access to the information and might release it), though coordination in that case might be more challenging.
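To make that concrete, here is a minimal toy simulation of the multiple-actor case (my own sketch with made-up numbers, not anything from the original unilateralist’s curse write-up): each actor observes the true value of releasing the information plus independent noise, and releases if their own estimate comes out positive.

```python
# Toy unilateralist's curse model (illustrative assumptions only): N actors each
# see the true value of release plus Gaussian noise, and any one of them can
# release unilaterally if their personal estimate is positive.
import random

def release_probability(n_actors, true_value, noise_sd=1.0, trials=10_000):
    """Estimate the chance that at least one actor releases."""
    released = 0
    for _ in range(trials):
        estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_actors)]
        if any(e > 0 for e in estimates):  # one optimistic actor is enough
            released += 1
    return released / trials

# Release is genuinely harmful here (true value -0.5), yet the chance that
# someone releases anyway grows quickly with the number of independent actors.
for n in (1, 2, 5, 10):
    print(n, round(release_probability(n, true_value=-0.5), 3))
```

It is only a single-shot model, of course, and it ignores the actors-through-time wrinkle above, but it shows why a lone actor doesn’t face the curse while a crowd of independent actors does.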
What the researcher can do is try to build consensus and lobby for a collective decision-making body on the internal climate heating (ICH) problem, planning to release the information only when they are satisfied that a solution will be in place in time to fix the problem when it occurs.
Thanks, this concrete example definitely helps.
I think I am also objecting to the expected payoff being thought of as a fixed quantity. You can either learn more about the world to alter your knowledge of the payoff, or try to introduce things/institutions into the world to alter the expected payoff itself. Building useful institutions may rely on releasing some knowledge; that is where things become more hairy.
This makes sense. “Release because the expected benefit is above the expected risk” versus “don’t release because the reverse is true” is a bit of a false dichotomy, and you’re right that we should be thinking more about options that could maximize the benefit while minimizing the risk when faced with info hazards.
Also, as the unilateralist’s curse suggests, discussing the information with other people who could then undertake the release themselves sometimes increases the expectation of a bad outcome. How should consensus be reached in those situations?
This can certainly be a problem, and is a reason not to go too public when discussing it. Probably it’s best to discuss privately with a number of other trusted individuals first, who also understand the unilateralist’s curse, and ideally who don’t have the means/authority to release the information themselves (e.g., if you have a written-up blog post you’re thinking of posting that might contain info hazards, then maybe you could discuss it in vague terms with other individuals first, without sharing the entire post with them?).
Ah right. I suppose the unilateralist’s curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor then the curse doesn’t really apply. One wrinkle might be considering the unilateralist’s curse with regard to different actors through time (i.e., erring on the side of caution with the expectation that other actors in the future will gain access to the information and might release it), though coordination in that case might be more challenging.
Interesting idea. This may be worth trying to develop more fully?
Probably it’s best to discuss privately with a number of other trusted individuals first, who also understand the unilateralist’s curse,
I’m still coming at this from the lens of “actionable advice for people not in EA”. It might be that the person doesn’t know many other trusted individuals; what should the advice be then? It would probably also be worth giving advice on how to have the conversation. The original article gives some advice on what happens if consensus can’t be reached (voting and the like).
As I understand it, you shouldn’t wait for consensus, or else you get the unilateralist’s curse in reverse: someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster (this seems very possible if you consider crucial considerations flipping signs, rather than just random noise in beliefs about desirability).
Would you suggest discussion and a vote (assuming no other courses of action can be agreed upon)? Do you see a need to correct for status quo bias in any way?
This seems very important to get right. I’ll think about this some more.
Interesting idea. This may be worth trying to develop more fully?
Yeah. I’ll have to think about it more.
I’m still coming at this from the lens of “actionable advice for people not in EA”. It might be that the person doesn’t know many other trusted individuals; what should the advice be then?
Yeah, for people outside EA I think structures could be set up such that reaching consensus (or at least a majority vote) becomes a standard policy or an established norm. E.g., if a journal is considering a manuscript with potential info hazards, then perhaps it should be standard policy for this manuscript to be referred to some sort of special group consisting of journal editors from a number of different journals to deliberate. I don’t think people need to be taught the mathematical modeling behind the unilateralist’s curse for these kinds of policies to be set up, as I think people have an intuitive notion of “it only takes one person/group with bad judgment to fuck up the world; decisions this important really need to be discussed in a larger group.”
One important distinction is that people who are facing info hazards will be in very different situations when they are within EA vs. when they are out of EA. For people within EA, I think it is much more likely to be the case that a random individual has an idea that they’d like to share in a blog post or something, which may have info hazard-y content. In these situations the advice “talk to a few trusted individuals first” seems to be appropriate.
For people outside of EA, I think those who are in possession of info hazard-y content are much more likely to be embedded in some sort of larger institution (e.g., a research scientist or a journal editor looking to publish something), where perhaps the best leverage is setting up certain policies, rather than trying to teach everyone the unilateralist’s curse.
As I understand it, you shouldn’t wait for consensus, or else you get the unilateralist’s curse in reverse: someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster.
You’re right, strict consensus is the wrong prescription; a vote is probably better. I wonder if there’s mathematical modeling you could do to determine what fraction of votes is optimal, in order to minimize the harms of both the standard unilateralist’s curse and the curse in reverse. Is it a majority vote? A two-thirds vote? I suspect this will depend on what the “true sign” of releasing the potentially dangerous info is likely to be; the more likely it is to be negative, the higher the bar you should be expected to clear before releasing.
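For what it’s worth, here is a rough sketch of the kind of modeling I have in mind (all the numbers, names, and parameters below are made up for illustration): n voters each get a noisy estimate of the true value of release, the group releases only if at least some threshold of them judge it positive, and we compare the expected outcome of a simple majority against a two-thirds rule for a harmful versus a beneficial true value.

```python
# Toy voting-threshold model (illustrative assumptions only): the group releases
# the information iff at least `threshold` of `n_voters` noisy estimates are positive.
import random

def expected_value_of_policy(threshold, n_voters, true_value, noise_sd=1.0, trials=10_000):
    """Average realised value: true_value if the group releases, 0 otherwise."""
    total = 0.0
    for _ in range(trials):
        yes_votes = sum(
            1 for _ in range(n_voters) if true_value + random.gauss(0, noise_sd) > 0
        )
        if yes_votes >= threshold:
            total += true_value  # releasing yields the true value, good or bad
    return total / trials

# With 9 voters, compare a simple majority (5 votes) against a two-thirds rule
# (6 votes) when release is mildly harmful vs. mildly beneficial.
for true_value in (-0.3, 0.3):
    for threshold in (5, 6):
        ev = expected_value_of_policy(threshold, n_voters=9, true_value=true_value)
        print(f"true value {true_value:+}, threshold {threshold}: EV ~ {ev:.3f}")
```

In this toy setup the stricter threshold comes out ahead when the true value is negative and behind when it is positive, which matches the intuition that the bar should scale with how likely release is to be harmful.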
For people outside of EA, I think those who are in possession of info hazard-y content are much more likely to be embedded in some sort of larger institution (e.g., a research scientist or a journal editor looking to publish something), where perhaps the best leverage is setting up certain policies, rather than trying to teach everyone the unilateralist’s curse.
There is a growing movement of makers and citizen scientists who are working on new technologies. It might be worth targeting them somewhat (although again probably without the math). I think the approaches for EA/non-EA seem sensible.
You’re right, strict consensus is the wrong prescription; a vote is probably better. I wonder if there’s mathematical modeling you could do to determine what fraction of votes is optimal, in order to minimize the harms of both the standard unilateralist’s curse and the curse in reverse. Is it a majority vote? A two-thirds vote? I suspect this will depend on what the “true sign” of releasing the potentially dangerous info is likely to be; the more likely it is to be negative, the higher the bar you should be expected to clear before releasing.
I also like to weigh the downside of not releasing the information. If you don’t release it, you are making everyone make marginally worse decisions in the meantime (assuming you think someone will release it anyway later). In the nuclear fusion example, you would think that everyone currently building new nuclear fission stations is wasting their time, that people training to manage coal plants should be training for something else, etc.
I also have another consideration, which is possibly more controversial. I think we need some bias towards action, because it seems like we can’t go on as we are for too much longer (another 1,000 years might be pushing it). The level of resources and coordination directed at global problems under the status quo seems insufficient, so the default outcome is a bad one.
With this consideration in mind, going back to the fusion pioneers: they might try to find people to tell so that they could increase the bus factor (the number of people who would have to die for the knowledge to be lost). They wouldn’t want the knowledge to get lost (as it would be needed in the long term), and they would want to make sure that whoever they told understood the importance and potential downsides of the technology.
Edit: Knowing the sign of an intervention is hard, even after the fact. Consider the invention and spread of knowledge about nuclear chain reactions: without it we would probably be burning a lot more fossil fuels, but with it we have the associated existential risk. And if that risk never materialises, it may even have been a spur towards greater coordination and peace.
I’ll try to formalise these thoughts at some point, but my capacity for this kind of work will be limited for a while.
One more problem with the idea that I should consult my friends before publishing a text is “friend bias”: people who are my friends tend to react more positively to the same text than those who are not. I personally had a situation where my friends told me that my text was good and not info-hazardous, but when I presented it to people who didn’t know me, their reaction was the opposite.
Sometimes, when I work on a complex problem, I feel as if I have become one of the best specialists in it. Sure, I know three other people who are able to understand my logic, but one of them is dead, another is not replying to my emails, and the third has his own vision, affected by some obvious flaw. So none of them could give me good advice about the information hazard.