thanks for this post! I’m curious—can you explain this more?
My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn’t build it first), or that we should shut it all down entirely.
By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both.
As someone who does not accept these premises, this is somewhat frustrating to watch.
Maybe something like this: https://www.lesswrong.com/posts/KYzHzqtfnTKmJXNXg/the-toxoplasma-of-agi-doom-and-capabilities
Thanks, I was thinking about linking the same thing.
AFAIK the official MIRI solution to AI risk is to win the race to AGI but do it aligned.
Part of the MIRI theory is that winning the AGI race will give you the power to stop anyone else from building AGI. If you believe that, then it’s easy to believe that there is a race, and that you sure don’t want to lose.
Sorry for the delay in response.
Here I look at it from a purely memetic perspective: you can imagine the thinking as a self-interested memeplex. Note that I'm not claiming this is the most useful perspective, or that it should be the main perspective to take.
Basically, from this perspective:
* The more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried: ChatGPT and GPT-4 likely did more to normalize and spread worry about AI doom than all previous AI safety outreach combined.
The more the AI race is a clear reality that people agree on, the more attentional power and brainpower you will get.
* But it also works from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful within our lifetimes (powerful enough to commit omnicide, take over the world, etc.) and that their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward "being in this game". Subjectively, it's much better if you, the risk-aware, pro-humanity player, are at the front. Elon Musk's safety concerns leading to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists up to that point.
Empirically, the more people buy into the claim that single powerful AI systems are incredibly dangerous, the more attention goes toward work on such systems.
Both memeplexes share a decent number of maps, which tend to work as blueprints or self-fulfilling prophecies for what to aim for.