Yes, that's correct, and I agree with you. To be honest, the main reasons were limited knowledge and a desire for simplification. Putting any high number on the likelihood of "artificial sentience" would make it the most important cause area (which, given my mindset, it might be). But I'm currently trying to figure out which of the following I think is most impactful to work on: AI alignment, WAW, or AI sentience. This post was only about the first two.
With all of that said, I do think AI sentience is a lot less likely than many EAs think (which still doesn't justify "0.01%"). But note that these are just initial thoughts based on limited information. Anyway, here's my reasoning:
While I agree that it might be theoretically possible and could cause suffering on an astronomical scale, I don't understand why we would create it, whether intentionally or unintentionally. Intentionally, I don't see any reason why a sentient AI would perform better than a non-sentient one. Unintentionally, I can imagine it might become possible with some unknown future technology. But no matter how complex we make AI with our current technology, it will just become a more "intelligent" binary system.
Even if we do create it, it would only be relevant as an s-risk if we fail to notice it and fix it.
However, I think the probability that I'll change my mind on this is high.