The debate on this subject has been ongoing among individuals within or adjacent to the EA/LessWrong communities (see the posts other comments have linked, and other links that are sure to follow). However, these debates are often highly insular, taking place primarily between people who share core assumptions, including:
AGI being an existential risk with a high probability of occurring
Extinction via AGI having a significant probability of occurring within our lifetimes (next 10-50 years)
Other extinction risks (e.g., pandemics or nuclear war) being unlikely to manifest before AGI and curtail AI development to the point that AGI risk is no longer relevant on any near-term timeline
AGI being a deadlier existential risk than other existential risks (e.g., pandemics or nuclear war)
AI alignment research being neglected and/or tractable
Current work on improving the fairness and transparency of AI models not being particularly useful for solving AI alignment
Many other AI researchers, along with individuals from relevant adjacent disciplines, would disagree with all or most of these assumptions. Debate between that group and the people within the EA/LessWrong community who broadly accept the above assumptions is sorely lacking, save for some mud-flinging on Twitter between AI ethicists and AI alignment researchers.