Quick thoughts re: your reasons for working on it or not:
1a) It seems like many people are not seeing them coming (e.g. the AI safety community seems surprisingly unreceptive, and has made many predictable mistakes by ignoring structural causes of risk, such as being overly optimistic about companies prioritizing safety over competitiveness).
1b) Seeing them coming is predictably insufficient to stop them from happening, because they are the result of social dilemmas.
1c) The structure of the argument appears to be the fallacious “if it is a real problem, other people will address it, so we don’t need to” (cf. https://www.explainxkcd.com/wiki/index.php/2278:_Scientific_Briefing).
2) Interesting. Seems potentially cruxy.
3) I guess we might agree here… Combined with (1), your argument seems to be: “won’t be neglected (1) and isn’t tractable (3)”, whereas mine would be: “currently neglected, could require a lot of work to become tractable, but important enough to warrant that effort”.
The main upshots I see are:
- higher P(doom) due to failure stories that are easier for many people to swallow --> greater potential for public awareness and political will if messaging includes them.
- more attention needed to questions of social organization post-AGI.