Google’s ethics is alarming

I published this video on my YouTube channel yesterday.

In short, Google is clearly investing massively in AI performance, and is currently also going after employees who raise concerns about the ethics of its algorithms. This is a huge deal, as the company is developing the most sophisticated AIs ever built (at least 6 times bigger than OpenAI’s), and these algorithms will likely be deployed at a global scale, without internal safety or ethics testing beforehand, and without any possibility of external audit.

This is a worst-case scenario in terms of AI safety governance. The AI race is creating huge pressure for performance over safety. More importantly, there is currently nowhere near enough social or legal pressure to slow down the race and to promote safety and ethics instead. This issue seems to be vastly neglected, even by EAs. Yet we are talking here about the ethics of the world’s most advanced AI company, with massive global-scale consequences, as Google’s algorithms have repeatedly been linked to serious national security concerns (especially radicalization, as in the case of the Capitol riots), epistemic crises, and public health harms.

You’ll find more information and resources in the video script, in this EA forum post and in this other EA forum post. Also, with colleagues, I am maintaining this Tournesol wiki to provide a global view of the problem, to list resources, and to document our Tournesol project, which aims to solve the ethics of recommendation algorithms and was discussed in this LW post.