In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering.
Thanks, this is pretty horrifying.
Do the kinds of s-risks EAF has in mind mostly involve artificial sentience in order to reach astronomical scale?
Are you primarily concerned with the creation of autonomous, self-sustaining (self-replicating) suffering processes, or are you also very concerned about an agent that already has, or creates, individuals who are capable of suffering and who require resources from the agent to keep running, despite the costs (of running, or the extra costs of sentience specifically)?
My guess is that the latter is much more limited in potential scale.
EDIT: Ah, of course, they can just run suffering algorithms on existing general-purpose computing hardware.
Do the kinds of s-risks EAF has in mind mostly involve artificial sentience in order to reach astronomical scale?
Yes, see here. That said, we also place some credence in other “unknown unknowns” that we might prevent through broad interventions (like promoting compassion and cooperation).
Are you primarily concerned with the creation of autonomous, self-sustaining (self-replicating) suffering processes, or are you also very concerned about an agent that already has, or creates, individuals who are capable of suffering and who require resources from the agent to keep running, despite the costs (of running, or the extra costs of sentience specifically)?
My guess is that the latter is much more limited in potential scale.
Both could be concerning. I find it hard to think about future technological capabilities and agents in sufficient detail. So rather than thinking about specific scenarios, we’d like to reduce s-risks through (hopefully) more robust levers such as making the future less multipolar and differentially researching peaceful bargaining mechanisms.