A few random thoughts I have on this:
I’ve tried speaking to a few non-EA people (very few, countable on one hand), and I kind of agree: they think you’ve watched way too much sci-fi when you bring up AI safety, but they don’t dismiss it as too far-fetched. One conversation in particular made me realize that a reason might be that a lot of people simply think they cannot do much about it. ‘Leave it to the experts’ or ‘I don’t know anything about AI and ML’ seem to be common thoughts among non-EA people on the issue, which prevents them from actively trying to reduce the risk, if the issue makes it onto their list of important problems at all.

There’s also the fact that AI safety isn’t yet a major field, which leads to misconceptions such as needing a compsci PhD and a lot of technical math/CS knowledge to work in AI safety, when there actually exist roles that do not require such expertise. This quite obviously prevents people from changing their careers to work in AI safety, but even more, it discourages them from reading about it at all (this might also be why distillation of AI alignment work is in such high demand), even though we see people read about international conflicts, nuclear risk, and climate change much more frequently. (I’m not sure of the difference in scale, but I can personally vouch for this, since I had never heard of AI alignment before joining the EA community.)
I hadn’t thought of the fact that people may think they have no power and so just kind of... don’t think about it. I suppose more work needs to be done to show people that they can actually work on it.