Thanks for responding! I think I now understand better what you're getting at, though I'm still a bit unsure about how much work each of these beliefs is doing:
1. We shouldn't build AGI.
2. We can't build AGI (because there's no coherent reward function we can give it, since many of the tasks it'd have to do have fuzzy success criteria).
3. We won't build AGI (because the incentives mean narrow AI will be far more useful).
Could you clarify whether you agree with these and how important you think each point is? Or is it something else entirely that's key?
I think we could try to build AGI, but I am skeptical it could be anything useful or helpful (a broad alignment problem) because of vague or inapt success criteria, and because of the lack of embodiment of AGI (so it won't get beaten up by the world generally or have emotional/affective learning). Because of these problems, I think we shouldn't try (1).
Further, I am trying this line of argument out to see if it will encourage (3) (not building AGI), because these concerns cast doubt on the value of AGI to us (and thus the incentives to build it).
This takes on additional potency if we embrace the shift to thinking about "should" and not just "can" in scientific and technological development generally. So that brings us to the question I think we should be asking, which is how to encourage a properly responsible approach to AI, rather than how to shift credences on the Future Fund's propositions above.
Does that make sense?
Hmm, I guess I don't think lack of emotional/affective states is a problem for making useful AGIs. Obviously those are part of how humans learn, but it seems like a machine can learn with any reward function; it just needs some way of mapping a world state to a value.
Re success criteria, you could for example train an AI to improve a company's profit in a simulated environment. That task requires a broad set of capacities, including high-level ones like planning/strategising. If you do this for many things humans care about, you'll get a more general system, as with DeepMind's Gato. But of course I'm speculating.
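To make the "reward function as a mapping from world state to value" point concrete, here's a deliberately toy sketch (nothing Gato-like): a simulated "profit" environment just exposes a scalar reward, and a trivial learner finds the best action from that signal alone. Everything here (the environment, the numbers, the names) is invented purely for illustration; the same learner could in principle be pointed at any other simulated task that exposes such a mapping.

```python
import random

# Toy illustration: a "reward function" is just a mapping from world state to a
# scalar value. Anything with such a mapping can, in principle, drive learning.

class SimulatedProfitEnv:
    """Crude stand-in for 'improve a company's profit in a simulated environment'."""
    def step(self, price):
        demand = max(0, 10 - price)   # higher price -> lower demand
        profit = price * demand       # world state -> scalar value
        return profit

def learn_best_action(env, actions, episodes=500, epsilon=0.2):
    """Trivial bandit-style learner: estimate each action's value from reward alone."""
    value_estimates = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(actions)                                # explore
        else:
            a = max(actions, key=lambda x: value_estimates[x])        # exploit
        reward = env.step(a)
        counts[a] += 1
        value_estimates[a] += (reward - value_estimates[a]) / counts[a]
    return max(actions, key=lambda x: value_estimates[x])

if __name__ == "__main__":
    best_price = learn_best_action(SimulatedProfitEnv(), actions=list(range(10)))
    print("learned price:", best_price)   # should settle near 5, the profit-maximising price
```

Obviously a real system would need far richer state and far harder credit assignment; the point is only that no emotional machinery is needed for a reward signal to teach something.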
I suppose if you don't think there's any value for us in AGI, and if you don't think there are sufficient incentives for us to build it, there's no need to encourage not building it? Or is your concern more that we're wasting energy and resources trying to build it, or even thinking about it?
The first proposition ("Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI") seems directly linked to whether or not we should build AGI. If AGI carries a serious risk of catastrophe, we obviously shouldn't build it. So to me it looks like the Future Fund is already thinking about the "should" question?
There are two ways to plausibly embody AGI:
1. As supervisor of a dumb robot body: the AGI remotely controls the robot body, processing a portion or all of the robot's sensor data.
2. As host of the AGI: the AGI's hardware is resident in the robot body.
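Roughly, the difference is just where the control loop runs. A toy sketch of the first option, with every interface name invented for illustration (the second option would simply run the same supervisor on the robot's own hardware instead of over a network hop):

```python
from dataclasses import dataclass
from typing import List

# Sketch of option 1: the "intelligence" runs remotely; the robot body only
# forwards sensor readings and executes whatever command comes back.
# All names here are hypothetical.

@dataclass
class SensorFrame:
    camera: bytes
    bump_sensors: List[float]   # e.g. contact/pressure readings

class RemoteSupervisor:
    """Stands in for the off-board AGI: consumes sensor data, returns a command."""
    def decide(self, frame: SensorFrame) -> str:
        if any(reading > 0.5 for reading in frame.bump_sensors):
            return "back_off"   # a physical-'pain' signal driving avoidance
        return "continue"

class DumbRobotBody:
    """Forwards some or all of its sensor data and blindly executes commands."""
    def __init__(self, supervisor: RemoteSupervisor):
        self.supervisor = supervisor

    def tick(self, frame: SensorFrame) -> str:
        command = self.supervisor.decide(frame)   # network hop in the remote case
        return command                            # actuation would happen here

robot = DumbRobotBody(RemoteSupervisor())
print(robot.tick(SensorFrame(camera=b"", bump_sensors=[0.9, 0.1])))   # -> back_off
```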
I can plausibly see such sensors for physical pain but not for emotional pain. Emotional pain is the far more potent teacher of what is valuable and what is not, what is important and what is not. Intelligence needs direction of this sort for learning.
So, can you build embodied AGI with emotional responses built in, responses that last the way emotions do and so can teach the way emotions do? Building empathy (for both happiness and suffering) and the pain of disapproval into AGI would be crucial.
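I don't think emotions reduce to a scalar penalty, but as the crudest possible illustration of what "building in the pain of disapproval" could mean, here is a toy reward that mixes task success with a lingering disapproval signal; every term and number here is made up, and the key feature it tries to capture is only that emotional pain persists rather than vanishing after one step.

```python
# Toy sketch only: no claim that emotions reduce to this. It just shows a
# composite reward where social disapproval lingers and decays slowly,
# alongside an 'empathy' term that shares in others' wellbeing.

class DisapprovalMemory:
    """Disapproval decays slowly rather than vanishing after one step."""
    def __init__(self, decay=0.95):
        self.level = 0.0
        self.decay = decay

    def register(self, disapproval: float) -> None:
        self.level = max(self.level, disapproval)

    def step(self) -> float:
        current = self.level
        self.level *= self.decay
        return current

def reward(task_success: float, others_wellbeing: float, memory: DisapprovalMemory) -> float:
    empathy = others_wellbeing      # shares in others' happiness and suffering
    social_pain = memory.step()     # lingering pain of past disapproval
    return task_success + empathy - social_pain

memory = DisapprovalMemory()
memory.register(disapproval=1.0)    # someone disapproves at the first step
for t in range(3):
    print(t, round(reward(task_success=1.0, others_wellbeing=0.5, memory=memory), 3))
# the disapproval penalty fades gradually over successive steps
```

Whether anything like this could stand in for the real, felt weight of emotional pain is exactly the question I'm raising.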