Thanks Toby, interesting one on the communication. For policy makers I think that communication style can work OK, less so with my friends haha.
I’m still confused by why they picked 2027 even in 2025. Back when they made it, Daniel’s median forecast was 2028 and Eli’s 2031. Surely you then pick 2029 or 2030 for your scenario? Picking the “most likely year for it to happen” still feels a bit disingenuous to me.
I’m not sure why picking the mode feels disingenuous to you; it feels fine to me as long as it’s between roughly the 15th and 85th percentiles and you are transparent about it.
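For what it’s worth, the mode-vs-median distinction matters here precisely because timelines distributions are right-skewed: the single most likely year lands earlier than the median while still sitting inside a wide percentile band. A toy sketch (the lognormal parameters below are made up for illustration, not anyone’s actual forecast):

```python
import random
import statistics

# Toy illustration only: the lognormal parameters are invented,
# not the AI 2027 authors' actual forecast distribution.
random.seed(0)

# Right-skewed "year of AGI" samples anchored at 2025.
samples = [2025 + random.lognormvariate(1.1, 0.5) for _ in range(100_000)]
years = [int(s) for s in samples]  # bucket into calendar years

mode_year = statistics.mode(years)        # single most likely year
median_year = statistics.median(samples)  # 50th percentile
qs = statistics.quantiles(samples, n=100)
p15, p85 = qs[14], qs[84]                 # 15th and 85th percentiles

print(f"mode year: {mode_year}")
print(f"median: {median_year:.1f}")
print(f"15th-85th percentile band: {p15:.1f} to {p85:.1f}")
```

With a right-skewed distribution like this, the bucketed mode comes out roughly a year earlier than the median yet still falls well inside the 15th–85th percentile band, which is the transparency condition being described.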
The causal history of why it was 2027 is that this was Daniel’s median when we started writing it, and it would have been a lot of work to rewrite our near-final draft to make it 2028 after Daniel changed his view. The reason it was based off of Daniel’s view and not other authors’ is that AI 2027 was ultimately supposed to represent his view rather than an amalgam that includes others’ views. Giving a single person final say seems better than design by committee. That said, we had few strong disagreements.
The other authors considered 2027 plausible enough and close enough to a modal scenario that they (including me) felt happy to help with the project.
Edit: Daniel discusses his perspective here
Edit 2: It probably would have been reasonable for me to push for the timelines to be in between our views rather than Daniel’s. I didn’t really consider it because I thought 2027 was plausible enough and Daniel was leading the project. I think I also gave some weight to Linch’s point about it being important to communicate that things could get crazy very soon, but I’m not sure if this was cruxy. However, memetic fitness wasn’t an explicit consideration.
Thanks that’s helpful
From what Daniel said I thought his median was 2028 when he started to write it? But that’s perhaps a bit nitpicky.
I think there might be a wider EA/Rationalist comms issue here when communicating with the general public. Communicating projects like this isn’t just about whether it “feels” fine—I think it’s important to think about how it might come across and the future implications. To the general public, this scenario even in 2030 still feels mega-soon and sci-fi. The problem is that if we go past 2027 now, many people will say “those tech-bro idiots, they’re always wrong” and might miss the point of the thing.
If anything, I think picking a more conservative, tail-end timeline (2028-2030) would have been better here, to keep it relevant for longer.
I agree not the biggest deal though.
To be clear, I agree that we should make comms decisions based on what we think the effects will be, I wasn’t using “feel” intending to mean otherwise.
I think they were optimizing for a combination of concreteness (so there’s an exact story to point to, where the 2027 story is “things go roughly as they expected” whereas 2028 and 2031 were pricing in different types of individually unexpected delays[1]), and for memetic value.
Compare: My best estimate is that this project will take me 6 months. However, if you ask me to write it out step-by-step, it’d take me 4 months. The 6 months include buffers for various delays, some expected and some unexpected.
I think for project time estimations as part of a larger plan, the 6-month reply is more useful. But for someone following along on my thinking process, or a manager/colleague/direct report trying to help me optimize, the 4-month step-by-step report might be easier to follow along and/or more useful to critique or improve.
The concreteness point makes sense, for sure.
Isn’t somewhere between 2028 and 2031 then really “things go roughly as expected”, and 2027 “things go faster than expected, if every AI improvement rolls out without roadblocks”? I feel like if you’re going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent. Not the biggest deal though I suppose.
“I feel like if you’re going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent.”
I see and respect that position, but you can imagine someone saying the opposite: “I feel like if you’re going to put something out there in the public sphere as a leader in AI, it’s probably prudent to warn people of significant risks that may happen much sooner than people expect, even if you think it’s less than 50% likely to happen then.”