Great discussion! I appreciate your post; it helped me form a more nuanced view of AI risk rather than subscribing to full-on doomerism.
I would, however, like to comment on your statement—“this is human stupidity NOT AI super intelligence. And this is the real risk of AI!”
I agree with this assessment. Moreover, it seems to me that this “human stupidity” problem, our inability to design sufficiently good goals for AI, is exactly what the alignment field is trying to solve.
It is true that no computer program has its own will. And there is no reason to believe that some future superintelligent program will suddenly stop following its programming instructions. However, given that our current models optimize for goals that only loosely capture what we actually want (as in the example below), we need to develop smart solutions for encoding our “true intentions” correctly into these models.
I think it’s best explained with an example: GPT-based chatbots are simply trained to predict the next word in a sentence, and it is not clear at a technical level how to modify such a simple and specific goal of next-word prediction so that it also covers broad, complex instructions like “don’t agree with someone suicidal”. Current alignment methods like RLHF help to some extent, but no existing method guarantees, for example, that a model will never agree with someone’s suicidal thoughts. Such a lack of guarantees and control in our current training algorithms, and therefore in our models, is problematic. And it seems to me this is the problem that alignment research tries to solve.
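To make that concrete, here is a rough sketch (my own illustration in PyTorch, assuming a hypothetical `model` that maps token IDs to next-token logits; not code from any actual GPT codebase) of essentially the whole pre-training objective. Note that nothing resembling “don’t agree with someone suicidal” appears anywhere in it:

```python
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # token_ids: (batch, seq_len) tensor of integer token IDs from training text.
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)  # assumed shape: (batch, seq_len - 1, vocab_size)
    # The entire training signal: how well did the model predict each next token?
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```

Anything beyond “predict the next token”, such as the instruction above, has to be grafted on after the fact (e.g. through RLHF), which is why the guarantees end up being weak.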
The idea of ‘alignment’ presupposes that you cannot control the computer and that it has its own will, so you need to ‘align’ it, i.e. incentivise it. But this isn’t the case; we can control these systems.
It’s true that machine learning AIs can create their own instructions and perform tasks; however, we still maintain overall control. We can constrain both inputs and outputs. We can nest the ‘intelligent’ machine learning part of the system within constraints that prevent unwanted outcomes (sketched below). For instance, ask an AI a question about feeling suicidal right now and you’ll probably get an answer that was written by a human. That’s what I got the last time I checked, and the conversation was abruptly ended.
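For illustration, that kind of constraint layer could look something like the following sketch (the keyword list, canned reply, and function names are hypothetical; this is my own illustration, not how any real chatbot is actually built):

```python
# Hypothetical sketch of nesting a learned model inside hard-coded constraints.
CRISIS_KEYWORDS = {"suicidal", "kill myself", "end my life"}
HUMAN_WRITTEN_REPLY = ("If you are having thoughts of suicide, please contact a "
                       "crisis line or someone you trust. This chat will now end.")

def guarded_chat(model_reply, user_message: str) -> str:
    # model_reply: the 'intelligent' machine learning part, e.g. a chatbot call.
    if any(keyword in user_message.lower() for keyword in CRISIS_KEYWORDS):
        # Bypass the model entirely and return a pre-written human response.
        return HUMAN_WRITTEN_REPLY
    return model_reply(user_message)
```

The point is that the constraint sits outside the learned model, so its behaviour is fixed and predictable regardless of what the model itself would have said.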