I’m still confused by why they picked 2027 even in 2025. Back when they made it, Daniel’s median forecast was 2028 and Eli’s 2031. Surely you then pick 2029 or 2030 for your scenario? Picking the “most likely year for it to happen” still feels a bit disingenuous to me.
I’m not sure why picking the mode feels disingenuous to you; it seems fine to me as long as it’s between roughly the 15th and 85th percentiles and you are transparent about it.
The causal history of why it was 2027 is that this was Daniel’s median when we started writing it, and it would have been a lot of work to rewrite our near-final draft to make it 2028 after Daniel changed his view. The reason it was based on Daniel’s view and not the other authors’ is that AI 2027 was ultimately supposed to represent his view rather than an amalgam that includes others’ views. Giving a single person final say seems better than design by committee. That said, we had few strong disagreements.
The other authors considered 2027 plausible enough and close enough to a modal scenario that they (including me) felt happy to help with the project.
Edit: Daniel discusses his perspective here
Edit 2: It probably would have been reasonable for me to push for the timelines to be in between our views rather than Daniel’s. I didn’t really consider it because I thought 2027 was plausible enough and Daniel was leading the project. I think I also gave some weight to Linch’s point about it being important to communicate that things could get crazy very soon, but I’m not sure if this was cruxy. However, memetic fitness wasn’t an (explicit) consideration.
Thanks that’s helpful
From what Daniel said I thought his median was 2028 when he started to write it? But that’s perhaps a bit nitpicky.
I think there might be a wider EA/Rationalist comms issue here when communicating with the general public. Communicating projects like this isn’t just about whether it “feels” fine; I think it’s important to consider how it might come across and the future implications. To the general public, this scenario even in 2030 still feels mega-soon and sci-fi. The problem is that if we go past 2027 now, many people will say “those tech-bro idiots, they’re always wrong” and might miss the point of the thing.
If anything, I think picking a more conservative, tail-end timeline (2028–2030) would have been better here, to keep it relevant for longer.
I agree not the biggest deal though.
To be clear, I agree that we should make comms decisions based on what we think the effects will be; I wasn’t using “feel” to suggest otherwise.