Reasons for my negative feelings towards the AI risk discussion

I have been a member of my local EA chapter since it was founded by a group of my friends. At the beginning, I participated semi-regularly in meetings and events, but I have long since stopped participating, even though my friends have changed their entire lives to align with the EA movement. I admit that a major reason I have been alienated is the belief, held by some of my friends, in AI as an existential risk. Sometimes I think that they have lost the core idea of EA (making the most effective change in the world possible) to incoherent science fiction stories. Their actions pattern-match to a cult, making me think of Scientology more than a charity.

I recognize that some might find this opinion insulting. I want to make clear that it's not my intention to insult. I'm fully aware that I might be wrong and that their focus on AI might be perfectly justified. However, I have had these thoughts and I want to be honest about them. Based on my discussions with people, I believe many others have similar feelings, and this might affect the public image of EA, so it's important to discuss them.

Issues I have with the idea of AI risk

In this section I will outline the main issues I have with the concept of AI risk.

My intuition about AI conflicts with AI risk scenarios

I have some experience in AI: I have worked on NLP models in various projects, both at university and at my workplace, a language technology company. AI at work is very different from AI in the context of EA: the former hardly even works, while the latter is an incorporeal being independent of humans. Ada-Maaria Hyvärinen recently wrote a great post about similar feelings, which I think describes them excellently.

With this background, it's natural that when I heard about the idea of AI as an existential risk, I was very sceptical. I have since been closely following the development of AI and noticed that while every now and then a new model comes out and does something incredible that no one could have imagined, none of the new models are progressing towards the level of agency and awareness that an ASI would require.

Based on my experience and the studies I have read, there is no existential threat posed by current AI systems, nor does it seem that such scenarios will become likely in the near future.

“AI is an existential risk” is not a falsifiable statement

When I discuss AI with people and reveal that I don't believe it poses a significant risk, they often demand that I prove my position. When I explain that the current technology doesn't have the potential for these risks, they counter with the statement “It's only a matter of time before the necessary technology is developed.”

The problem with this statement is, of course, that it's not possible for me to prove that something won't exist in the future. I can only say that it doesn't exist now and doesn't seem likely to exist in the near future. We know that it's physically possible for ASIs to exist, so, in theory, one could be developed tomorrow.

However, is it rational to pour money into AI safety research based on this? AI is just one of the many possible dangers of the future. We cannot really know which of them are relevant and which are not. The principles of EA say that we should focus on areas that are neglected and have effective interventions. AI safety is not neglected: many universities and companies that develop AI systems already do safety research and ethical review. There also aren't effective interventions: since ASIs do not exist, it's impossible to show that the research done now has any effect on future technology, which might be based on entirely different principles than the systems being studied today. So while dangerously advanced AIs are not impossible, the uncertainty around them prevents us from doing anything that is known to be effective.

“AI is an existential risk” resembles non-falsifiable statements made by religions and conspiracy theories. I cannot disprove the existence of god, and in the same way I cannot disprove the future existence of an ASI. But I also cannot choose which god to believe in based on that knowledge, and I cannot know whether my interventions will actually reduce AI risk.

Lack of proper scientific study

What would change my opinion on this matter is proper scientific research on the topic. It's surprising how few peer-reviewed studies exist. This lack of academic involvement takes away a lot of the EA community's credibility.

When I recently asked an active EA member who works on AI safety research why their company doesn't publish its research scientifically, I got the following explanations:

  1. There are no suitable journals

  2. Peer-review is a too slow process

  3. The research is already conducted and evaluated by experts

  4. The scientific community would not understand the research

  5. It’s easier to conduct research with a small group

  6. It would be dangerous to publish the results

  7. Credibility is not important

These explanations, especially points 4–6, again sound cult-like: as if AI risk were secret knowledge that only the enlightened understand and only high-level members may even discuss. Even if these are the opinions of just a small group of EA people, most others still accept the lack of scientific study. I think it's a harmful attitude.

Among the most cited studies are the AI expert surveys by Grace et al. In the latest survey, 20% of respondents gave a probability of 0% to extinction due to AI, and another 20% gave a risk greater than 25% (the median being 5%). Since the question does not limit the time period of the extinction, and thus allows speculation about very far-future events, it's not useful for predicting near-future events that we could reliably influence with effective interventions. Those surveys aside, there is very little research on evaluating existential risks.

It seems that most other cited works are highly speculative, with no widespread acceptance in academia. In fact, in my experience, most researchers I have met at the university are hostile towards the concept of AI risk. I remember that when I first started working on my Bachelor's thesis, the teacher in one of the first lectures explained how absurd the fear of AI was. This was repeated throughout the courses I took. See, for example, this web course material provided by my university.

It seems strange not to care about the credibility of these claims in the eyes of the wider academic community. Some people view AI risk as a kind of alternative medicine: pseudo-scientific fiction, a way to scare people with an imagined illness and make them pay for an ineffective treatment, laughed at by all real scientists. Why should I trust my EA friends on this when the researchers I respect tell me to stay as far away from them as possible?

Conclusions

I have outlined the most important reasons for my negative feelings towards the AI risk scene. First, it doesn't seem likely that these risks will materialize in the near future. Second, the discussion about these risks often revolves around speculative and non-falsifiable statements reminiscent of claims made by religions and conspiracy theories. Third, the lack of scientific study, and of interest in it, is bothersome and erodes the credibility of the claims.

I think it's sad that EA is so involved with AI risk (and long-termism in general), since I believe in many of its core ideas, like effective charities. This cognitive dissonance between the aspects of EA that I perceive as rational and those I perceive as irrational alienates me from the whole movement. I think it would be beneficial to separate the near-termist and long-termist branches as clearly different ideologies with different basic beliefs, instead of labeling them both under the EA umbrella.