Introducing spirit hazards
This post contextualizes info hazards within their normative and power/attention dynamics environment: the greater a group's willingness and power to harm (spirit hazard), the greater the risk of sharing a piece of information with it. I suggest that notifications about the existence of hazardous info be shared only with relevant stakeholders and in a way that highlights responsibility. Attention-constrained decisionmakers' interest can be gained through risk topics that cannot be used to harm or through sincere support. At the end of this piece, I ask a few questions about spirit hazards in and beyond EA.
I am thankful to Rían O Mahoney, Stanislav Fořt, and Owen Cotton-Barratt who inspired this post. All errors are mine.
Epistemic status: intended to inspire discussion rather than to provide precise definitions.
Spirit hazard is the risk arising from the normalization of a harmful attitude toward a piece of information within a group. It is the product of the group's power to harm with the information (the info hazard?) and the probability that it does so. Here, the group's power to harm is the product of the information's maximum harm per unit resource and the amount of those resources available to the group's decisionmakers.
$$\text{spirit hazard} = \text{power}_{\text{harm}} \times P(\text{harm})$$
$$\text{power}_{\text{harm}} = \frac{\text{harm}_{\max}}{\text{resource}} \times \text{resources}$$
For example, if a group can cause 1,000 DALYs with information about making a landmine, and is 10% likely to use it, the spirit hazard is 100 DALYs (1,000 × 10%). In this calculation, assume that the information about a landmine can maximally cause 10,000 DALYs with $100,000 and that the group's decisionmakers can use $10,000.
Spirit hazard is mitigated by decreasing a group's power or probability to harm with certain information. The power to harm can be lowered by reducing the resources available to decisionmakers, and the probability of harm by building safety understanding among them.
For instance, if only $1,000 is available to the decisionmakers in the above example, then the group can cause only 100 DALYs, and the spirit hazard falls to 10 DALYs. If, in addition, safety understanding among decisionmakers rises and lowers the probability of harm to 1%, then the spirit hazard is only 1 DALY.
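As a minimal sketch (the function and variable names below are mine, purely for illustration), the two worked examples above can be reproduced in a few lines of Python:

```python
def power_to_harm(max_harm_dalys, max_harm_cost_usd, resources_usd):
    """Harm (in DALYs) the group could cause: maximum harm per dollar
    times the dollars its decisionmakers can use."""
    return (max_harm_dalys / max_harm_cost_usd) * resources_usd


def spirit_hazard(power_harm_dalys, p_harm):
    """Expected harm: the group's power to harm times the probability
    that it uses the information to harm."""
    return power_harm_dalys * p_harm


# Baseline: landmine info can cause at most 10,000 DALYs with $100,000;
# the group's decisionmakers can use $10,000 and are 10% likely to harm.
baseline_power = power_to_harm(10_000, 100_000, 10_000)
print(spirit_hazard(baseline_power, 0.10))  # 100.0 DALYs

# Mitigated: resources reduced to $1,000 and probability of harm to 1%.
mitigated_power = power_to_harm(10_000, 100_000, 1_000)
print(spirit_hazard(mitigated_power, 0.01))  # 1.0 DALY
```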
Within EA, biosecurity and AI safety could be subject to the greatest spirit hazard because of the information's high maximum harm per unit resource. Within these two cause areas, extensive checks should ensure that EA does not increase either of the remaining subcomponents of spirit hazard: decisionmakers' resources or their willingness to harm.
EA can empower decisionmakers by providing them with funding, skills, networks, and other resources, and can motivate them to cause harm by advancing narratives that encourage it. Since supported decisionmakers can prevent or reduce harm, their motivation determines the sign of the effects of this support.
I suggest that risky topics be shared only with relevant decisionmakers and in a way that normalizes responsibility and rational decisionmaking. For example, information on biosecurity treaty provisions or budgetary recommendations should be shared only with the few community members who inform these decisions. Or, for instance, AI safety should continue to be discussed in a way that focuses on the conundrum of aligning AI with universal moral values rather than on the great harm it could cause.
If someone seeks to captivate people's attention with powerfully benevolent narratives, they can focus on topics that cannot be used to harm (such as protection from low-probability natural phenomena) or offer effective support for the target audience's positive-impact objectives. Memes about protection from 'external' risks can spread as quickly as memes about risks that humans can amplify. Skilled assistance, especially pro bono, can build a reputation within networks.
Instead of a summary, I would like to ask:
Should spirit hazards be considered in EA? If so, how?
What are the optimal ways of mitigating spirit hazards in EA while retaining its fundamental focus?
Should EA seek to reduce spirit hazards in areas that it does not focus on?
How did hazardous info become popular in EA? How can this be different outside of EA?
What are some scenarios in which the existence of hazardous topics is introduced to a community member and negative outcomes of varying extent occur? How can these scenarios be prevented?