Maybe: “We should give outsized attention to risks that manifest unexpectedly early, since we’re the only people who can.”
(I think this is borderline major? The earliest occurrence I know of was 2015, but it’s sufficiently simple that I wouldn’t be surprised if it was discovered independently several times, some of them earlier.)
FWIW, I also haven’t seen that idea mentioned before your 2015 paper. And I think there’s a good chance I would’ve seen it if the idea was decently widely discussed in EA before then, as I looked into this and related matters a bit for my recent post Crucial questions about optimal timing of work and donations.
(The relevant section is “What “windows of opportunity” might there be? When might those windows open and close? How important are they?”)
This is a bit of a nitpick: Perhaps you mean the more general point you mentioned above rather than the specific claim about AI risk, but you published this report back in 2014, and I vaguely remember hearing a lot of discussion of those kinds of arguments in 2014 already.