If I understand correctly, you put 0.01% on artificial sentience in the future. That seems overconfident to me—why are you so certain it won’t happen?
Yes, that's correct, and I agree with you. To be honest, the main reasons were limited knowledge and a desire for simplicity. Assigning any high probability to artificial sentience would make it the most important cause area (which, given my mindset, it might be).
But I'm currently trying to figure out which of the following I think is most impactful to work on: AI alignment, WAW, or AI sentience. This post was only about the first two.
With all that said, I do think AI sentience is much less likely than many EAs believe (though that still doesn't justify "0.01%"). Note that these are just initial thoughts based on limited information. Anyway, here's my reasoning:
While I agree that it might be theoretically possible and could cause suffering on an astronomical scale, I don't understand why we would create it, intentionally or unintentionally. Intentionally: I see no reason a sentient AI would perform any better than a non-sentient one. Unintentionally: I can imagine that some unknown future technology might make it possible, but no matter how complex we make AI with current technology, it will just become a more "intelligent" binary system.
Even if we did create it, it would only be relevant as an s-risk if we failed to notice and fix it.
That said, I think the probability of me changing my mind is high.