Could you elaborate on the expected value of the future point? Specifically, it’s unclear to me how it should affect your credence in AI risk or your AI timelines.
Yeah, the idea is that the lower the expected value of the future, the less bad it is if AI causes existential catastrophes that don’t involve lots of suffering. So my wording was sloppy here; lower EV of the future perhaps decreases the importance of (existential-catastrophe-preventing) AI risk work, but not my credence in the risk itself.
Understood, thanks!