I think it’s extremely relevant.
To be honest, I think that if someone without a technical background wanted to contribute, looking into these things would be one of the best default opportunities, because:
1) The points you mention are blind spots of the AI alignment community, because its typical member doesn’t really care about all this political stuff. Questions about values, and about “How is it that those who are 1000x more powerful than others magically don’t start ruling the entire world with their aligned AI?”, seem especially relevant to me.
2) I think that the fact that AI safety arguments are still too conceptual is a big weakness of the field. Making “how it will happen” more concrete (what the concrete problems will be) is a great way both to give ourselves clearer scenarios and to increase the number of people taking these risks seriously.
To be clear, when I say that you should work on AI, I absolutely include people whose views differ significantly from those of the AI alignment field. For instance, I really like the fact that Jacy is thinking about this with animals in mind (I think animal-focused people should do that more), and is uncertain about the value of the long-term future if it’s driven by human values.