Thank you for posting this question and encouraging people to talk openly about this topic!
Here are some of the AI-related questions that I've thought about from time to time:
On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions? Open Phil gives tens of millions of dollars to AI safety and biosecurity every year, but other x-risks and longtermist areas seem rather unexplored and neglected, like s-risks.
What would make an artificial entity (like a computer program) sentient? What would count as a painful experience for said entity? Can we learn about this by studying the neuroscience of animal sentience?
In expectation, will there be more sentient artificial beings than sentient biological beings (including animals) over the long-term future? (brought up as an objection to this)
Is "intelligence" (commonly defined as the cognitive ability to make and execute plans to achieve goals) really enough to make an AI system more powerful than humans (individuals, groups, or all of humanity combined)?
Should we expect AI development to move toward AGI, narrowly superhuman AIs, comprehensive AI services (CAIS), or something else?
What benefits and risks should we expect in a CAIS scenario?
On the margin, should donors prioritize AI safety above other existential risks and broad longtermist interventions?
To the extent that this question overlaps with Mauricio's question 1.2 (i.e. "A bunch of people seem to argue for 'AI stuff is important' but believe / act as if 'AI stuff is overwhelmingly important'; what are arguments for the latter view?"), you might find his answer helpful.
other x-risks and longtermist areas seem rather unexplored and neglected, like s-risks
Only a partial answer, but worth noting that I think the most plausible source of s-risk is messing up on AI stuff
Is "intelligence" (commonly defined as the cognitive ability to make and execute plans to achieve goals) really enough to make an AI system more powerful than humans (individuals, groups, or all of humanity combined)?
Some discussion of this question here: https://www.alignmentforum.org/posts/eGihD5jnD6LFzgDZA/agi-safety-from-first-principles-control