See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I’m no longer part of EA, because of overreaches I saw during my time in the community: core people leading technocratic projects with ruinous downside risks, a philosophy centred on influencing consequences rather than enabling collective choice-making, and a culture bent on proselytising both while not listening deeply enough to integrate other perspectives.
I adjusted my guesstimate of winning down to a quarter.
I now guess it’s more like a 1/8 chance (meaning that, from my perspective, Marcus will win this bet in expectation). It is pretty hard to imagine so many paying customers going away, particularly given that revenues have been growing over the last year.
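As a rough illustration of what "win in expectation" means here (the bet's actual stakes aren't restated in this comment, so the payout ratio $k$ below is hypothetical): suppose winning pays me $k$ units and losing costs me 1 unit. Then

$$\mathbb{E}[\text{payoff}] = p \cdot k - (1 - p), \quad \text{which is positive only when } p > \frac{1}{k+1}.$$

At $p = 1/8$ that would require better than 7:1 odds in my favour; at anything close to even stakes, Marcus wins in expectation whenever $p < 1/2$.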
Marcus has thought this one through carefully, and I’m naturally sticking to the commitment. If we end up seeing a crash down the line, I invite all of you to consider with me how to make maximum use of that opportunity!
I still think a crash is fairly likely, but also that even if there is a large slump in investment across the industry, most customers could end up continuing to pay for their subscriptions.
The main problem I see is that OpenAI and Anthropic are losing money on the products they are selling, which are facing commodification (i.e. downward pressure on prices). But unless investments run dry soon, they can continue for some years and eventually find ways to lock in customers (e.g. through personalisation) and to monetise them (e.g. through personalised ads).