[Question] Why might AI be an x-risk? Succinct explanations please
As part of a presentation I’ll be giving soon, I’ll be spending a bit of time explaining why AI might be an x-risk.
Can anyone point me to existing succinct explanations of this? Ideally they would be as convincing as possible, brief (I won't be spending long on this), and (of course) demonstrate good epistemics.
The audience will be actuaries interested in ESG investing.
If someone fancies writing a brief explanation as an answer, feel free, but I was mainly expecting links to content that already exists, since I'm sure there's loads of it.