Why the Orthogonality Thesis’s veracity is not the point:

2022 edit: this was my first post (as a newcomer to EA/AI Safety), and I strongly recommend reading this better-framed post of my old point rather than this clumsy one.

When the topic of AGI possibly changing the world comes up, there are three usual opinions:

  • The first, and probably the most widespread, view is that humanity, or natural life, holds a monopoly on general intelligence.

  • The second says that a real general intelligence would not unfairly harm us, because its goals would be at least as intelligent as ours, even wiser.

  • The third, and least widely held, is called the Orthogonality Thesis. It is the idea that any level of intelligence is compatible with any objective, including objectives that look very stupid from a human point of view, like maximizing the number of paperclips in the universe.

In EA circles, the classic response to the first opinion is, on the one hand, to emphasize that a growing body of evidence suggests AGI is not so unrealistic, and on the other, to debunk the implicit biases underlying such views. (In Bayesian terms, this approach tries to improve both the interlocutor's likelihood and their prior.)

The next key argument is that if you agree AGI is credible, you should worry about its consequences, because, by definition, AGI could have a major impact on humanity, even if you assign it only a 5% probability.
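To make the expected-value logic behind this explicit (a minimal sketch; the 5% figure is the one above, and the size of the impact is deliberately left abstract):

$$\mathbb{E}[\text{impact}] = P(\text{AGI}) \times (\text{impact if AGI arrives}) \approx 0.05 \times (\text{something civilization-scale}),$$

which remains very large as long as the impact term is large; a small probability does not make the expected stakes small.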

Thus this major impact (also known as the technological Singularity) seems to force a choice between the second and the third view, that is, to deny or embrace the Orthogonality Thesis. This is why I'm writing this post: we don't have to pick a side, and we shouldn't.

It’s not THAT important to be right

Let's imagine the future under each of these two hypotheses, assuming AGI becomes real:

  • If the Orthogonality Thesis is false, AGI would be a blessing for the world, because its objective would be at least as wise as ours. In this case, the highest EA priority is to protect this wonderful outcome from a prior collapse of humanity, that is, to reduce existential risks other than AGI (environmental crises, pandemics, and so on).

  • If the Orthogonality Thesis is true, the default scenario is that the AGI's objective would be unaligned with ours, and not wise at all. Because of instrumental convergence (the fact that some intermediate objectives, such as seeking power or increasing one's own cognitive capacities, are useful for almost any final objective), this would very probably lead to the fall of humanity, and possibly of life itself, viewed as both useless and a threat to the AI's objective. In this hypothesis, the highest EA priority is to design AGI in a way that is robustly beneficial.

It is important to note that in both cases, the question of existential risks remains relevant (just as important and just as likely), so this part of the fight is identical.

The difference is whether preventing existential risks is sufficient to ensure the emergence of a truly beneficial AGI. So, unless we are very confident that the Orthogonality Thesis is false, we cannot ignore the scenario of a dystopian AGI.

As you may have noticed, the same argument that tells us not to ignore the AGI-emergence scenario also tells us not to ignore the plausibility of the Orthogonality Thesis, even without granting it much credit.

Communicating about it

It seems to me that the Orthogonality Thesis is popular among EA people, and that its popularity is growing. That's good news: the more widely the issue is recognized, the more likely it is to be solved. One explanation could be that, from a computer-science or mathematical point of view, the Orthogonality Thesis looks like a triviality, because we can plug in any objective function we want.
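To illustrate why it can look trivial from that angle, here is a minimal, purely illustrative sketch (the function names and toy objectives are mine, not anyone's actual AGI design): the same generic search procedure optimizes whatever objective function it is handed, wise or stupid alike.

```python
import random

def hill_climb(objective, state, neighbors, steps=1000):
    """Greedy search: move to a neighboring state whenever it scores
    higher under `objective`. The search procedure itself is entirely
    agnostic about what `objective` rewards."""
    for _ in range(steps):
        candidate = random.choice(neighbors(state))
        if objective(candidate) > objective(state):
            state = candidate
    return state

# Two interchangeable objectives over the same toy state (a dict of counts):
def human_flourishing(state):   # stand-in for a "wise" goal
    return state["wellbeing"]

def paperclip_count(state):     # stand-in for a "stupid" goal
    return state["paperclips"]

def neighbors(state):
    # Toy dynamics: each possible move adds one unit to some resource.
    return [dict(state, **{k: state[k] + 1}) for k in state]

start = {"wellbeing": 0, "paperclips": 0}
print(hill_climb(human_flourishing, start, neighbors, steps=50))
print(hill_climb(paperclip_count, start, neighbors, steps=50))
```

Nothing in `hill_climb` cares what the objective rewards; in that narrow formal sense, the optimization machinery and the goal really do vary independently.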

The real problem is not being too confident in the Orthogonality Thesis, because that at least leads to the right mindset about it; the problem is displaying too much confidence in it. I personally don't put a very high, or very low, probability on the Orthogonality Thesis. Many people could hold a similar opinion and still agree on the high importance of preventing the problem.

For such people, considering the Orthogonality Thesis unlikely could lead them to refuse to think about it at all, which is a loss for EA, especially given its need to recruit. And this loss is not even justified, because one does not need to be fully convinced by the Orthogonality Thesis to see why it matters.

Without a habit of epistemic prudence, these people could form a negative image of the defenders of the Orthogonality Thesis and of the EA movement in general. I think this is particularly true for a non-scientific audience.

Conclusion

True or not, the Orthogonality Thesis is a useful mindset and tool for discussing AGI. However, presenting only this thesis, as a literal orthogonality asserted with high confidence, could lose the support of newcomers and harm EA unity.

Besides, the question "Is the Orthogonality Thesis true?" does not necessarily have a 0-or-1 answer. Maybe there is truth on both sides, and high intelligence is positively correlated with beneficial goals, but only asymptotically. Maybe not. The point is: it's not the point.
