Well if you have it, I’ll take it. In the general scenario, a very powerful benevolent AI is left to do whatever it thinks is best. If the AI decides that freedom is one of humans top values, it will try to make the world better while optimizing human freedom. Giving humans more freedom in practice than the typical government is not a particularly high bar. Of course, plenty of people might want the AI micromanaging every detail of their life, the AI will do a really good job of it. But I would think ideally freedom should be there for those who want it.
Its also worth noting there is a fairly common belief that we are on a path to probable doom, and any AI that offered anything better than paperclips is work taking. So, even if your AI was much too controlling and humans would prefer a less controlling one, many EA would say ” best AI we are going to get”.
You know, as far as seeing ourselves on a path to doom, I don’t see why development of a superintelligent rogue AI isn’t treated like development of a superweapon.
If some Silicon Valley company were developing the next battlefield nuke, they’d either be a Pentagon contractor or get raided by the FBI.
But here they are making something with ability to quickly learn how to enter and take control of all computer systems, and possibly electromechanical systems, everywhere, as well as us, actually, and we’ve got charitable organizations worrying about getting someone into AI companies to get them thinking about safety a little.
It’s not well understood how to make an AGI safe, so obviously developing them should be taboo, if you care about existential risk.
An effort similar to what scientists do about nukes seems intuitive, keeping the doomsday clock, trying to stop nuclear proliferation, encouraging disarmament, etc.
The potential danger is easy to see. It never occurred to me before coming across EA discussions that the development of conscious AI would be a reason for anything but terror and panic. That’s why I asked my questions, actually.
I guess I have one last question for the forum on this topic.
You know, as far as seeing ourselves on a path to doom, I don’t see why development of a superintelligent rogue AI isn’t treated like development of a superweapon.
Because distinguishing it from benign, not superintelligent AI is really hard.
So you are the FBI, you have a big computer running some code. You can’t tell if its a rouge superintelligence or the next DALL-E by looking at the outputs. A rouge superintelligence will trick you until its too late. Once its run at all on a computer that isn’t in a sandboxed bunker its probably too late. So you have to notice people writing code, and read that code before its run. There are many smart people writing code all the time. That code is often illegible spaghetti. Maybe the person writing the code will know, or at least suspect, that it might be a rouge superintelligence. Maybe not.
Lots of computer scientists are in practice rushing to develop self driving cars, the next GPT. All sorts of AI services. The economic incentive is strong.
Well if you have it, I’ll take it. In the general scenario, a very powerful benevolent AI is left to do whatever it thinks is best. If the AI decides that freedom is one of humans top values, it will try to make the world better while optimizing human freedom. Giving humans more freedom in practice than the typical government is not a particularly high bar. Of course, plenty of people might want the AI micromanaging every detail of their life, the AI will do a really good job of it. But I would think ideally freedom should be there for those who want it.
Its also worth noting there is a fairly common belief that we are on a path to probable doom, and any AI that offered anything better than paperclips is work taking. So, even if your AI was much too controlling and humans would prefer a less controlling one, many EA would say ” best AI we are going to get”.
You know, as far as seeing ourselves on a path to doom, I don’t see why development of a superintelligent rogue AI isn’t treated like development of a superweapon.
If some Silicon Valley company were developing the next battlefield nuke, they’d either be a Pentagon contractor or get raided by the FBI.
But here they are making something with ability to quickly learn how to enter and take control of all computer systems, and possibly electromechanical systems, everywhere, as well as us, actually, and we’ve got charitable organizations worrying about getting someone into AI companies to get them thinking about safety a little.
It’s not well understood how to make an AGI safe, so obviously developing them should be taboo, if you care about existential risk.
An effort similar to what scientists do about nukes seems intuitive, keeping the doomsday clock, trying to stop nuclear proliferation, encouraging disarmament, etc.
The potential danger is easy to see. It never occurred to me before coming across EA discussions that the development of conscious AI would be a reason for anything but terror and panic. That’s why I asked my questions, actually.
I guess I have one last question for the forum on this topic.
Because distinguishing it from benign, not superintelligent AI is really hard.
So you are the FBI, you have a big computer running some code. You can’t tell if its a rouge superintelligence or the next DALL-E by looking at the outputs. A rouge superintelligence will trick you until its too late. Once its run at all on a computer that isn’t in a sandboxed bunker its probably too late. So you have to notice people writing code, and read that code before its run. There are many smart people writing code all the time. That code is often illegible spaghetti. Maybe the person writing the code will know, or at least suspect, that it might be a rouge superintelligence. Maybe not.
Lots of computer scientists are in practice rushing to develop self driving cars, the next GPT. All sorts of AI services. The economic incentive is strong.