Types of downside risks of longtermism-relevant policy, field-building, and comms work [quick notes]
I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I’ll probably occasionally update that but not this.
“Here’s my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-scale communications.
Locking in bad policies
Information hazards (primarily attention hazards)
Advancing some risky R&D areas (e.g., some AI hardware things, some biotech) via things other than infohazards
e.g., via providing better resources for upskilling in some areas, or via making some areas seem more exciting
Polarizing / making partisan some important policies, ideas, or communities
Making a bad first impression in some communities / poisoning the well
Causing some sticky yet suboptimal framings or memes to become prominent
Ways they could be suboptimal: inaccurate, misleading, focusing attention on the wrong things, unappealing
By “sticky” I mean that, once these framings/memes are prominent, it’s hard to change that
Drawing more attention/players to some topics, and thereby making it less the case that we’re operating in a niche field and can have an outsized influence
See also https://www.overcomingbias.com/2019/03/tug-sideways.html
This is partly about actors with unusually bad/selfish intentions or high recklessness, but also about anyone without unusually good intentions, epistemics, etc.
Feel free to let me know if you’re not sure what I mean by any of these, or if you think it’d be worthwhile for us to chat more about them.
Also bear in mind the unilateralist’s curse.
None of this means people shouldn’t do policy stuff or large-scale communications. Some policy work should definitely be happening already, and more should happen over time. These are just things to be aware of so you can avoid doing net-negative things and so you can tweak net-positive things to be even more positive by patching their downsides.
See also Hard-to-reverse decisions destroy option value and Adding important nuances to “preserve option value” arguments”
Sometime after writing this, I saw that Asya Bergal wrote an overlapping list of downsides here:
“I do think projects interacting with policymakers have substantial room for downside, including:
Pushing policies that are harmful
Making key issues partisan
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project”