When I bring this up with EAs who are focused on AI safety, many of them suggest that we only need to get AI safety right and then the AI can solve the question of what consciousness is.
I find this somewhat frustrating. Obviously there’s a range of views in the EA community on this issue, but I think the most plausible argument for focusing on AI safety is that there is a low but non-negligible chance of a huge impact. If that’s true, then “getting AI safety right” leaves a lot unaddressed, because in most scenarios — the ones where that huge impact doesn’t materialize — getting AI safety right is only a small part of the picture. In general, I think we need to find a way to hold two thoughts at the same time: that AI safety is critical, and that there’s a very significant chance that other things matter too.
If that’s true, then “getting AI safety right” leaves a lot of things unaddressed, because in most scenarios “getting AI safety” right is only a small portion of the picture.
I didn’t understand this. Could you explain more?