Here I look at it from a purely memetic perspective—you can think of AI doom as a self-interested memeplex. Note I’m not claiming this is the most useful perspective, or that it should be the main perspective to take.
Basically, from this perspective:
* the more people think about the AI race, the easier it is to imagine AI doom. The specific artifacts produced by the AI race also make people more worried—ChatGPT and GPT-4 likely did more to normalize and spread worry about AI doom than all previous AI safety outreach combined.
The more the AI race is a clear reality that people agree on, the more attention and brainpower the doom memeplex will attract.
* but it also works from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful in our lifetimes—powerful enough to commit omnicide, take over the world, etc.—and that their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward “being in this game”. Subjectively, it’s much better if you—the risk-aware, pro-humanity player—are at the front. The safety concerns of Elon Musk that led to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists up to that point.
Empirically, the more people buy into “single powerful AI systems are incredibly dangerous”, the more attention flows toward work on such systems.
Both memeplexes share a decent number of maps, which tend to work as blueprints, or self-fulfilling prophecies, for what to aim for.