Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft.
Thanks Will!
My dad just sent me a video of the Yom Kippur sermon this year (relevant portion starting roughly here) at the congregation I grew up in. It was inspired by longtermism and specifically your writing on it, which is pretty cool. This updates me emotionally toward your broad strategy here, though I’m not sure how much I should update rationally.
Hi Will, really hope you can find time to engage. I think the points discussed are pretty cruxy for overall EA strategy!
A 3% chance of AI takeover, and a 33% chance of TAI, by 2100, seems like it would put you in contention for winning your own FTX AI Worldview Prize[1], which called for arguments that P(misalignment x-risk | AGI by 2070) < 7% (assuming ~2 percentage points of the 9% [3%/33%] conditional risk fall in the 2070-2100 window).
[1] If you were eligible.
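For concreteness, a quick back-of-the-envelope sketch of that arithmetic; note the 2-percentage-point allocation to the 2070-2100 window is an assumption in the comment above, not a figure from the prize announcement:

```python
# Back-of-the-envelope check of the conditional risk figures above.
# Inputs are the probabilities quoted in the comment, not official prize terms.
p_takeover_by_2100 = 0.03   # P(AI takeover by 2100)
p_tai_by_2100 = 0.33        # P(TAI by 2100)

# Conditional risk given TAI: 0.03 / 0.33, roughly 9%
p_takeover_given_tai = p_takeover_by_2100 / p_tai_by_2100
print(f"P(takeover | TAI by 2100) ~ {p_takeover_given_tai:.0%}")

# Assumption: ~2 percentage points of that 9% fall in the 2070-2100 window,
# leaving ~7% before 2070, right at the prize threshold.
risk_2070_to_2100 = 0.02
p_pre_2070 = p_takeover_given_tai - risk_2070_to_2100
print(f"P(misalignment x-risk | AGI by 2070) ~ {p_pre_2070:.0%}")
```

Running this prints ~9% and ~7%, which is where the "in contention" claim comes from.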