Interesting. Besides the broad altruism of the public, it probably also depends on:
how much one expects to have to coordinate with other actors/institutions to reduce x-risks
how much one actually needs especially altruistic values to be aligned on issues like reducing x-risk from AI & pandemics
how much “epistemic conscientiousness” / the desire to improve forecasting skills correlates with trustworthiness
I’m probably more like 80-90% confident that this is generally net positive.
Some quick thoughts:
The suits who hear forecasts that AGI (or other stuff) is powerful and doom-inducing might just hear that it’s POWERFUL and doom-inducing, whereas the message we really want to get across (to the extent we want to get messages across at all) is that it’s powerful and DOOM-INDUCING.
Altruistic actors may be more inclined to steer the world towards some plausible conceptions of utopia. In contrast, even if we avert doom, less altruistic actors might still be inclined to preserve existing hierarchies and the like, which could be many orders of magnitude away from optimality.
Also happy to chat further in person.