The Power of Intelligence—The Animation

This video is an animation of The Power of Intelligence, by Eliezer Yudkowsky.

The Sorting Pebbles Into Correct Heaps video, coupled with this video, makes a very basic case for the importance of AGI alignment.

Here’s most of the pinned comment under the video, which also appears in the description:

The script used for this video is an essay published by Eliezer Yudkowsky in 2007.

Now, a few points:

Sorting Pebbles Into Correct Heaps was about the orthogonality thesis. A consequence of the orthogonality thesis is that powerful artificial intelligence will not necessarily share human values.

This new video is about just how powerful and dangerous intelligence is. These two insights put together are cause for concern.

If humanity doesn’t solve the problem of aligning AIs to human values, there’s a high chance we won’t survive the creation of artificial general intelligence. This issue is known as “The Alignment Problem”. Some of you may be familiar with the paperclips scenario: an AGI created to maximize the number of paperclips uses up all the resources on Earth, and eventually outer space, to produce paperclips. Humanity dies early in this process. But, given the current state of research, even a simple goal such as “maximize paperclips” is already too difficult for us to program reliably into an AI. We simply don’t know how to aim AIs reliably at goals. If tomorrow a paperclip company manages to program a superintelligence, that superintelligence likely won’t maximize paperclips. We have no idea what it would do. It would be an alien mind pursuing alien goals. Knowing this, solving the alignment problem for human values in general, with all their complexity, appears to be a truly daunting task. But we must rise to the challenge, or things could go very wrong for us.

You can read The Power of Intelligence and many other essays by Eliezer Yudkowsky on this website: https://www.readthesequences.com/
