Intergenerational trauma impeding cooperative existential safety efforts

Epistemic status: personal judgements based on conversations with ~100 people aged 30+ who were worried about AI risk “before it was cool”, and observing their effects on a generation of worried youth, at a variety of EA-adjacent and rationality-community-adjacent events.

Summary: There appears to be something like inter-generational trauma among people who think about AI x-risk — including some of the AI-focussed parts of the EA and rationality communities — which is

  • preventing the formation of valuable high-trust relationships with newcomers that could otherwise be helpful to humanity collectively making better decisions about AI, and

  • feeding the formation of small pockets of people with a highly adversarial stance towards the rest of the world (and each other).

[This post is also available on LessWrong.]

Part 1 — The trauma of being ignored

You — or some of your close friends or colleagues — may have had the experience of fearing AI would eventually pose an existential risk to humanity, and trying to raise this as a concern to mainstream intellectuals and institutions, but being ignored or even scoffed at just for raising it. That sucked. It was not silly to think AI could be a risk to humanity. It can.

I, and around 100 people I know, have had this experience.

Experiences like this can easily lead to an attitude like “Screw those mainstream institutions, they don’t know anything and I can’t trust them.”

At least 30 people I’ve known personally have adopted that attitude in a big way, and I estimate many more have done the same. In the remainder of this post, I’d like to point out some ways this attitude can turn out to be a mistake.

Part 2 — Forgetting that humanity changes

Basically, as AI progresses, it becomes easier and easier to make the case that it could pose a risk to humanity’s existence. When people didn’t listen about AI risks in the past, that happened under certain circumstances, with certain AI capabilities at the forefront and certain public discourse surrounding them. Those circumstances have changed, and will continue to change. It may not be getting easier as fast as one would ideally like, but it is getting easier. Like the stock market, it may be hard to predict how and when things will change, but they will.

If one forgets this, one can easily adopt a stance like “mainstream institutions will never care” or “the authorities are useless”. I think these stances are often exaggerations of the truth, and if one adopts them, one loses out on the opportunity to engage productively with the rest of humanity as things change.

Part 3 — Reflections on the Fundamental Attribution Error (FAE)

The Fundamental Attribution Error (https://en.wikipedia.org/wiki/Fundamental_attribution_error) is a cognitive bias whereby you too often attribute someone else’s behavior to a fundamental (unchanging) aspect of their personality, rather than considering how their behavior might be circumstantial and likely to change. With a moment’s reflection, one can see how the FAE can lead to

  • trusting too much — assuming someone would never act against your interests because they didn’t the first few times, and also

  • trusting too little — assuming someone will never do anything good for you because they were harmful in the past.

The second reaction could be useful for getting out of abusive relationships. The risk of being mistreated over and over by someone usually outweighs the cost of finding new people to interact with. So, in personal relationships, it can be healthy to just think “screw this” and move on from someone when they don’t make a good first (or tenth) impression.

Part 4 — The FAE applied to humanity

If one has had the experience of being dismissed or ignored for expressing a bunch of reasonable arguments about AI risk, it would be easy to assume that humanity (collectively) can never be trusted to take such arguments seriously. But,

  1. Humanity has changed greatly over the course of history, arguably more than any individual has changed, so it’s suspect to assume that humanity, collectively, can never be rallied to take a reasonable action about AI.

  2. One does not have the opportunity to move on and find a different humanity to relate to. “Screw this humanity who ignores me, I’ll just imagine a different humanity and relate to that one instead” is not an effective strategy for dealing with the world.

Part 5 — What, if anything, to do about this

If the above didn’t resonate with you, now might be a good place to stop reading :) Maybe this post isn’t good advice for you to consider after all.

But if it did resonate, and you’re wondering what you may be able to do differently as a result, here are some ideas:

  • Try saying something nice and civilized about AI risk that you used to say 5-10 years ago, but which wasn’t well received. Don’t escalate it to something more offensive or aggressive; just try saying the same thing again. Someone who didn’t care before might take interest today. That would be progress: a sign that humanity is changing, and adapting somewhat to the circumstances presented by AI development.

  • Try Googling a few AI-related topics that no one talked about 5-10 years ago, and see whether more people are talking about them today. Swap in synonyms for your keywords. (Maybe keep a list of the search terms you’ve tried so you don’t go in circles; if you really find nothing, you can share the list and write an interesting LessWrong post speculating about why there are no results.)

  • Ask yourself if you or your friends feel betrayed by the world ignoring your concerns about AI. See if you have a “screw them” feeling about it, and if that feeling might be motivating some of your discussions about AI.

  • If someone older tells you “There is nothing you can do to address AI risk, just give up”, maybe don’t give up. Try to understand their experiences, and ask yourself seriously if those experiences could turn out differently for you.