Thank you for posting this question and encouraging people to talk openly about this topic!
Here are some of the AI-related questions that I’ve thought about from time to time:
On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions? Open Phil gives tens of millions of dollars to AI safety and biosecurity every year, but other x-risks and longtermist areas, like s-risks, seem comparatively unexplored and neglected.
What would make an artificial entity (like a computer program) sentient? What would count as a painful experience for said entity? Can we learn about this by studying the neuroscience of animal sentience?
In expectation, will there be more sentient artificial beings than sentient biological beings (including animals) over the long-term future? (brought up as an objection to this)
Is “intelligence” (commonly defined as the cognitive ability to make and execute plans to achieve goals) really enough to make an AI system more powerful than humans (individuals, groups, or all of humanity combined)?
Should we expect AI development to move toward AGI, narrowly superhuman AIs, CAIS, or something else?
What benefits and risks should we expect in a CAIS scenario?
On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions?
To the extent that this question overlaps with Mauricio’s question 1.2 (i.e. a bunch of people seem to argue for “AI stuff is important” but believe / act as if “AI stuff is overwhelmingly important”—what are the arguments for the latter view?), you might find his answer helpful.
other x-risks and longtermist areas seem rather unexplored and neglected, like s-risks
Only a partial answer, but it's worth noting that I think the most plausible source of s-risk is messing up on AI stuff.
Some discussion of this question here: https://www.alignmentforum.org/posts/eGihD5jnD6LFzgDZA/agi-safety-from-first-principles-control