Preferences for the long-term future [an abandoned research idea]
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] Iām unsure how useful this idea is. But twice this week I felt itād be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with better chances of securing highly positive futures (not just avoiding existential catastrophes). It could also help us avoid negative futures that may not appear negative when superficially considered in advance. Finally, such positive visions of the future could facilitate cooperation and mitigate potential risks from competition (see Dafoe, 2018 on āAI Ideal Governanceā). Researchers have begun outlining particular possible futures, arguing for or against them, and surveying peopleās preferences for them. Itād be valuable to conduct similar projects (via online surveys) that address several limitations of prior efforts.
First, these projects should provide relatively detailed portrayals of the potential futures under consideration. This could be done using summaries of scenarios richly imagined in existing sources (e.g., Tegmarkās Life 3.0, Hansonās Age of Em) or generated during the āworld-buildingā efforts to be conducted at the Augmented Intelligence Summit. This could address peopleās apparent tendency to be repelled by descriptions of futures that simplistically maximise things they claim to intrinsically value while stripping away things they donāt. It could also allow for quantitative and qualitative feedback on these scenarios and various elements of them. People may find it easier to critique and build upon presented scenarios than to imagine ideal scenarios from scratch.
Second, these projects should include large, representative, cross-national samples. Existing research has typically included only small samples, which often differ greatly from the general population. Such samples limit how fully we can realise the three above-mentioned benefits of efforts to understand what futures we actually want.
Third, experimental manipulations could be embedded within the surveys to explore the impact of different framings, different information, and different arguments, partly to reveal how fragile peopleās preferences are.
It would also be useful to similarly survey medium-term-relevant preferences (e.g., regarding institutions for managing adaptations to increasing AI capabilities; Dafoe, 2018).
One concern with this idea is that the long-term future may be so radically unfamiliar and unpredictable that any information regarding peopleās present preferences for it would be irrelevant to scenarios that are actually plausible. Another concern is that present preferences may not be worth following anyway, as they may reflect intuitions that make sense in our current environment but wouldnāt in radically different future environments. They may also not be worth following if issues like framing effects and scope neglect loom especially large when people evaluate such unfamiliar and astronomical options.
[1] I wrote this application when I was very new to EA and I was somewhat grasping at straws to come up with longtermism-relevant research ideas that would make use of my psychology degree.