See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I’m no longer part of EA, because of overreaches I saw during my time in the community: core people leading technocratic projects with ruinous downside risks, a philosophy centred on influencing consequences over enabling collective choice-making, and a culture bent on proselytising while not listening deeply enough to integrate other perspectives.
Frankly, because I’d want to profit from it.
Odds of 1:7 imply a 12.5% chance of a crash, and I think the chance is much higher. Elsewhere I posted a guess of 40% for this year, though I did not have precise crash criteria in mind there, and I would lower that percentage if the outcome were judged by a few concrete measures rather than my sense of “that looks like a crash”.
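For reference, here’s the conversion from an odds ratio to the implied probability — a quick sketch, with the function name being my own label:

```python
def implied_probability(for_odds: float, against_odds: float) -> float:
    """Convert an odds ratio (for:against) to the implied probability."""
    return for_odds / (for_odds + against_odds)

# 1:7 odds on a crash imply a 1/8 = 12.5% probability
print(implied_probability(1, 7))  # → 0.125
```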
That 12.5% is far outside the consensus on this Metaculus page, though I notice their criteria for a “bust or winter” are much stricter than where I’d set the threshold for a crash. Still, that makes me wonder whether I should have selected an even lower odds ratio. Regardless, this month I’m prepared to take this bet.