Isn’t somewhere between 2028 and 2031, then, really “things go roughly as expected,” and 2027 “things go faster than expected if every AI improvement rolls out without roadblocks”? I feel like if you’re going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent. Not the biggest deal though, I suppose.
“I feel like if you’re going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent.”
I see and respect that position, but you can imagine someone saying the opposite: “I feel like if you’re going to put something out there in the public sphere as a leader in AI, it’s probably prudent to warn people of significant risks that may arrive much sooner than people expect, even if you think it’s less than 50% likely to happen on that timeline.”
The concreteness is fine and makes sense, for sure.