Mark Solms thinks he understands how to make artificial consciousness (I think everything he says on the topic is wrong), and his book Hidden Spring has an interesting discussion (in chapter 12) on the “oh jeez now what” question. I mostly disagree with what he says about that too, but I find it to be an interesting case study of someone grappling with the question.
In short, he suggests turning off the sentient machine, then filing a patent on making conscious machines and assigning that patent to a nonprofit (maybe the Future of Life Institute), and then
organise a symposium in which leading scientists and philosophers and other stakeholders are invited to consider the implications, and to make recommendations concerning the way forward, including whether and when and under what conditions the sentient machine should be switched on again – and possibly developed further. Hopefully this will lead to the drawing up of a set of broader guidelines and constraints upon the future development, exploitation and proliferation of sentient AI in general.
He also offers a strongly-worded defense of his decision to figure out how consciousness works and publish it, on the grounds that if he didn’t, someone else would.
Thanks for this book suggestion; it does seem like an interesting case study.
I’m quite sceptical that any one person could reverse-engineer consciousness, and I don’t buy that it’s good reasoning to go ahead with publication simply because someone else might. I’ll have to look into Solms and return to this.
May I ask, what is your position on creating artificial consciousness? Do you see digital suffering as a risk? If so, should we be careful to avoid creating AC?
I think the word “we” is hiding a lot of complexity here—like saying “should we decommission all the world’s nuclear weapons?” Well, that sounds nice, but how exactly? If I could wave a magic wand and nobody ever builds conscious AIs, I would think seriously about it, although I don’t know what I would decide—it depends on details I think. Back in the real world, I think that we’re eventually going to get conscious AIs whether that’s a good idea or not. There are surely interventions that will buy time until that happens, but preventing it forever and ever seems infeasible to me. Scientific knowledge tends to get out and accumulate, sooner or later, IMO. “Forever” is a very very long time.
The last time I wrote about my opinions is here.
Yes, I do see digital suffering as a risk. The main way I think about that is: I think eventually AIs will be in charge, so the goal is to wind up with AIs that tend to be nice to other AIs. This challenge is somewhat related to the challenge of winding up with AIs that are nice to humans. So preventing digital suffering winds up closely entangled with the alignment problem, which is my area of research. That’s not in itself a reason for optimism, of course.
We might also get a “singleton” world where there is effectively one and only one powerful AI in the world (or many copies of the same AI pursuing the same goals) which would alleviate some or maybe all of that concern. I currently think an eventual “singleton” world is very likely, although I seem to be very much in the minority on that.