This suggests affective empathy may not be strongly predictive of utilitarian motivations.
I can believe that if the population you are trying to predict for is just humans, almost all of whom have at least some affective empathy. But I’d feel pretty surprised if this were true in whatever distribution over unaligned AIs we’re imagining. In particular, I think if there’s no particular reason to expect affective empathy in unaligned AIs, then your prior on it being present should be near-zero (simply because there are lots of specific, complicated claims one could make about unaligned AIs, most of which will be false). And I’d be surprised if “zero vs. non-zero affective empathy” were not predictive of utilitarian motivations.
I definitely agree that AIs might feel pleasure and pain, though I’m less confident in it than you seem to be. It just seems like AI cognition could be very different from human cognition. For example, I would guess that pain/pleasure are important for learning in humans, but it seems like this is probably not true for AI systems in the current paradigm. (For gradient descent, the learning and the cognition happen separately—the AI cognition doesn’t even get the loss/reward equivalent as an input so cannot “experience” it. For in-context learning, it seems very unclear what the pain/pleasure equivalent would be.)
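To make the gradient-descent point above concrete, here is a toy sketch (my own illustration, not anything from the discussion): the loss is computed and applied entirely outside the model’s forward pass, so nothing analogous to “experiencing” the loss ever appears among the model’s inputs.

```python
# Toy sketch (illustrative only, not a real training stack): in gradient
# descent, the model's "cognition" (the forward pass) and the "learning"
# (loss computation + weight update) are separate steps, and the loss value
# is never given to the model as an input.

def forward(w, x):
    # The model's cognition: it only ever sees x and its own weight.
    return w * x

def train_step(w, x, target, lr=0.1):
    pred = forward(w, x)
    loss = (pred - target) ** 2      # computed outside the model
    grad = 2 * (pred - target) * x   # d(loss)/dw
    return w - lr * grad             # the update happens *to* the model

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=3.0)

print(round(w, 2))  # → 3.0 (converges to the target)
```

The model function never receives `loss` or `grad` as arguments; they only shape its weights from the outside, which is the asymmetry being pointed at.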
this line of argument would overlook the real possibility that unaligned AIs could [...] have an even stronger tendency towards utilitarian motivations.
I agree this is possible. But ultimately I’m not seeing any particularly strong reasons to expect this (and I feel like your arguments are mostly saying “nothing rules it out”). Whereas I do think there’s a strong reason to expect weaker tendencies: AIs will be different, and on average, being different implies having fewer of the properties that humans have. So, aggregating these, I end up concluding that unaligned AIs will be less utilitarian in expectation.
(You make a bunch of arguments for why AIs might not be as different as we expect. I agree that if you haven’t thought about those arguments before you should probably reduce your expectation of how different AIs will be. But I still think they will be quite different.)
this line of argument would overlook the real possibility that unaligned AIs could be more conscious than humans,
I don’t see why it matters if AIs are more conscious than humans? I thought the relevant question we’re debating is whether they are more likely to be utilitarians. Maybe the argument is that if they are more conscious-in-the-sense-of-feeling-pleasure-and-pain they are more likely to be utilitarians? If so I might buy that but feel like it’s a weak effect.
As a consequence it’s hard for me to see a strong reason for preferring humans over AIs if you cared about pyramid-maximization.
Sure, but a big difference is that no human cares about pyramid-maximization, whereas some humans are utilitarians?
(Maybe some humans do care about pyramid-maximization? I’d need to learn more about those humans before I could have any guess about whether to prefer humans over AIs.)
Consciousness seems to be a fairly convergent function of intelligence
I would say “fairly convergent function of biologically evolved intelligence”. Evolution faced lots of constraints we don’t have in AI design. For example, cognition and learning had to be colocated in space and time (i.e. done in a single brain), whereas for AIs these can be (and are) separated. Seems very plausible that consciousness-in-the-sense-of-feeling-pleasure-and-pain is a solution needed under the former constraint but not the latter. (Maybe I’m at 20% chance that something in this vicinity is right, though that is a very made-up number.)
Here are a few (long, but high-level) comments I have before responding to a few specific points that I still disagree with:
I agree there are some weak reasons to think that humans are likely to be more utilitarian on average than unaligned AIs, for basically the reasons you talk about in your comment (I won’t express individual agreement with all the points you gave that I agree with, but you should know that I agree with many of them).
However, I do not yet see any strong reasons supporting your view. (The main argument seems to be: AIs will be different than us. You label this argument as strong but I think it is weak.) More generally, I think that if you’re making hugely consequential decisions on the basis of relatively weak intuitions (which is what I believe many effective altruists do in this context), you should be very cautious. The lack of robust evidence for your position seems sufficient, in my opinion, for the main thesis of my original post to hold. (I think I was pretty careful in my language not to overstate the main claims.)
I suspect you may have an intuition that unaligned AIs will be very alien-like in certain crucial respects, but I predict this intuition will ultimately prove to be mistaken. In contrast, I think the fact that these AIs will be trained on human-generated data and deliberately shaped by humans to fulfill human-like functions and to be human-compatible should be given substantial weight. These factors make it quite likely, in my view, that the resulting AI systems will exhibit utilitarian tendencies to a significant degree, even if they do not share the preferences of either their users or their creators (for instance, I would guess that GPT-4 is already more utilitarian than the average human, in a meaningful sense).
There is a strong selection pressure for AIs to display outward behaviors that are not overly alien-like. Indeed, the pressure seems to be for AIs to be inhumanly altruistic and kind in their actions. I am not persuaded by the idea that it’s probable for AIs to be entirely human-compatible on the surface while being completely alien underneath, even if we assume they do not share human preferences (e.g., the “shoggoth” meme).
I disagree with the characterization that my argument relies primarily on the notion that “you can’t rule out” the possibility of AIs being even more utilitarian than humans. In my previous comment, I pointed out that AIs could potentially have a higher density of moral value per unit of matter, and I believe there are straightforward reasons to expect this to be the case, as AIs could be optimized very efficiently in terms of physical space. This is not merely a “you can’t rule it out” type of argument, in my view.
Similarly, in the post, I pointed out that humans have many anti-utilitarian intuitions and it seems very plausible that AIs would not share (or would share fewer of) these intuitions. To give another example (although it was not prominent in the post), in a footnote I alluded to the idea that AIs might care more about reproduction than humans (who, by comparison, seem to want small population sizes with high per-capita incomes, rather than the large population sizes with low per-capita incomes that utilitarianism would recommend). This too does not seem like a mere “you cannot rule it out” argument to me, although I agree it is not the type of knockdown argument you’d expect if my thesis were stated far more strongly than it actually was.
I think you may be giving humans too much credit for being slightly utilitarian. To the extent that there are indeed many humans who are genuinely obsessed with actively furthering utilitarian objectives, I agree that your argument would have more force. However, this is not really what we observe in the real world. At the very least it’s exaggerated; even within EA, I think genuine devotion to utilitarian objectives is somewhat rare.
I suspect there is a broader phenomenon at play here, whereby people (often those in the EA community) attribute a wide range of positive qualities to humans (such as the idea that our values converge upon reflection, or the idea that humans will get inherently kinder as they get wealthier) which, in my opinion, do not actually reflect the realities of the world we live in. These ideas seem (to me) almost entirely disconnected from any empirical analysis of actual human behavior, and they sometimes appear to be more closely related to what the person making the claim wishes were true in some kind of idealized, abstract sense (though I admit this sounds highly uncharitable).
My hypothesis is that this tendency can perhaps be explained by a deeply ingrained intuition that treats the species boundary of “humans” as very special, in the sense that virtually all moral value is seen as originating from within this boundary, sharply distinguishing it from anything outside, and leading to an inherent suspicion of non-human entities. This would explain, for example, why there is so much focus on “human values” (and comparatively little on drawing the relevant “X values” boundary along different lines), and why many people seem to believe that human emulations would be clearly preferable to de novo AI. I do not really share this intuition myself.
I can believe that if the population you are trying to predict for is just humans, almost all of whom have at least some affective empathy. But I’d feel pretty surprised if this were true in whatever distribution over unaligned AIs we’re imagining.
My basic thoughts here are: on the one hand we have real world data points which can perhaps relevantly inform the degree to which affective empathy actually predicts utilitarianism, and on the other hand we have an intuition that it should be predictive across beings of very different types. I think the real world data points should epistemically count for more than the intuitions? More generally, I think it is hard to argue about what might be true if real world data counts for less than intuitions.
Maybe the argument is that if they are more conscious-in-the-sense-of-feeling-pleasure-and-pain they are more likely to be utilitarians? If so I might buy that but feel like it’s a weak effect.
Isn’t this the effect you alluded to, when you named reasons why some humans are utilitarians?
In contrast, I think the fact that these AIs will be trained on human-generated data and deliberately shaped by humans to fulfill human-like functions and to be human-compatible should be given substantial weight.
… This seems to be saying that because we are aligning AI, they will be more utilitarian. But I thought we were discussing unaligned AI?
I agree that the fact we are aligning AI should make one more optimistic. Could you define what you mean by “unaligned AI”? It seems quite plausible that I will agree with your position, and think it amounts to something like “we were pretty successful with alignment”.
The lack of robust evidence for your position seems sufficient, in my opinion, for the main thesis of my original post to hold.
I agree with theses like “it tentatively appears that the normative value of alignment work is very uncertain, and plausibly approximately neutral, from a total utilitarian perspective”, and would go further and say that alignment work is plausibly negative from a total utilitarian perspective.
I disagree with the implied theses in statements like “I’m not very sympathetic to pausing or slowing down AI as a policy proposal.”
If you wrote a post that just said “look, we’re super uncertain about things, here’s your reminder that there are worlds in which alignment work is negative”, I’d be on board with it. But it feels like a motte-and-bailey to write a post that is clearly trying to cause the reader to feel a particular way about some policy, and then retreat to “well my main thesis was very weak and unobjectionable”.
Some more minor comments:
You label this argument as strong but I think it is weak
Well, I can believe it’s weak in some absolute sense. My claim is that it’s much stronger than all of the arguments you make put together.
There is a strong selection pressure for AIs to display outward behaviors that are not overly alien-like. Indeed, the pressure seems to be for AIs to be inhumanly altruistic and kind in their actions.
This is a pretty good example of something I’d call different! You even use the adjective “inhumanly”!
To the extent your argument is that this is strong evidence that the AIs will continue to be altruistic and kind, I think I disagree, though I’ve now learned that you are imagining lots of alignment work happening when making the unaligned AIs, so maybe I’d agree depending on the specific scenario you’re imagining.
I disagree with the characterization that my argument relies primarily on the notion that “you can’t rule out” the possibility of AIs being even more utilitarian than humans.
Sorry, I was being sloppy there. My actual claim is that your arguments either:
Don’t seem to bear on the question of whether AIs are more utilitarian than humans, OR
Don’t seem more compelling than the reversed versions of those arguments.
I pointed out that AIs could potentially have a higher density of moral value per unit of matter, and I believe there are straightforward reasons to expect this to be the case, as AIs could be optimized very efficiently in terms of physical space. This is not merely a “you can’t rule it out” type of argument, in my view.
I agree that there’s a positive reason to expect AIs to have a higher density of moral value per unit of matter. I don’t see how this has any (predictable) bearing on whether AIs will be more utilitarian than humans.
Similarly, in the post, I pointed out that humans have many anti-utilitarian intuitions and it seems very plausible that AIs would not share (or share fewer of) these intuitions.
Applying the reversal test:
Humans have utilitarian intuitions too, and it seems very plausible that AIs would not share (or share fewer of) these intuitions.
I don’t especially see why one of these is stronger than the other.
(And if the AI doesn’t share any of the utilitarian intuitions, it doesn’t matter at all if it also doesn’t share the anti-utilitarian intuitions; either way it still won’t be a utilitarian.)
To give another example [...] AIs might care more about reproduction than humans (who by comparison, seem to want to have small population sizes with high per-capita incomes, rather than large population sizes with low per capita incomes as utilitarianism would recommend)
Applying the reversal test:
AIs might care less about reproduction than humans (a large majority of whom will reproduce at least once in their life).
Personally I find the reversed version more compelling.
I think you may be giving humans too much credit for being slightly utilitarian. [...] people (often those in the EA community) attribute a wide range of positive qualities to humans [...]
Fwiw my reasoning here mostly doesn’t depend on facts about humans other than binary questions like “do humans ever display property X”, since by and large my argument is “there is quite a strong chance that unaligned AIs do not have property X at all”.
Though again this might change depending on what exactly you mean by “unaligned AI”.
(I don’t necessarily disagree with your hypotheses as applied to the broader world—they sound plausible, though it feels somewhat in conflict with the fact that EAs care about AI consciousness a decent bit—I just disagree with them as applied to me in this particular comment thread.)
I think the real world data points should epistemically count for more than the intuitions?
I don’t buy it. The “real world data points” procedure here seems to be: take two high-level concepts (e.g. affective empathy, proclivity towards utilitarianism), draw a line between them, extrapolate way way out of distribution. I think this procedure would have a terrible track record when applied without the benefit of hindsight.
I expect my arguments based on intuitions would also have a pretty bad track record, but I do think they’d outperform the procedure above.
More generally, I think it is hard to argue about what might be true if real world data counts for less than intuitions.
Yup, this is an unfortunate fact about domains where you don’t get useful real world data. That doesn’t mean you should start using useless real world data.
Isn’t this the effect you alluded to, when you named reasons why some humans are utilitarians?
Yes, but I think the relevance is mostly whether or not the being feels pleasure or pain at all, rather than the magnitude with which it feels it. (Probably the magnitude matters somewhat, but not very much.)
Among humans I would weakly predict the opposite effect, that people with less pleasure-pain salience are more likely to be utilitarian (mostly due to a predicted anticorrelation with logical thinking / decoupling / systemizing nature).
Just a quick reply (I might reply more in-depth later but this is possibly the most important point):
I agree that the fact we are aligning AI should make one more optimistic. Could you define what you mean by “unaligned AI”? It seems quite plausible that I will agree with your position, and think it amounts to something like “we were pretty successful with alignment”.
In my post I talked about the “default” alternative to doing lots of alignment research. Do you think that if AI alignment researchers quit tomorrow, engineers would stop doing RLHF etc. to their models? That they wouldn’t train their AIs to exhibit human-like behaviors, or to be human-compatible?
It’s possible my language was misleading by giving an image of what unaligned AI looks like that isn’t actually a realistic “default” in any scenario. But when I talk about unaligned AI, I’m simply talking about AI that doesn’t share the preferences of humans (either its creator or the user). Crucially, humans are routinely misaligned in this sense. For example, employees don’t share the exact preferences of their employer (otherwise they’d have no need for a significant wage). Yet employees are still typically docile, human-compatible, and assimilated to the overall culture.
This is largely the picture I think we should imagine when we think about the “default” unaligned alternative, rather than imagining that humans will create something far more alien, far less docile, and therefore something with far less economic value.
(As an aside, I thought this distinction wasn’t worth making because I thought most readers would have already strongly internalized the idea that RLHF isn’t “real alignment work”. I suspect I was mistaken, and probably confused a ton of people.)
I disagree with the implied theses in statements like “I’m not very sympathetic to pausing or slowing down AI as a policy proposal.”
This overlooks my arguments in section 3, which were absolutely critical to forming my opinion here. My argument here can be summarized as follows:
The utilitarian arguments for technical alignment research seem weak, because AIs are likely to be conscious like us, and also share human moral concepts.
By contrast, technical alignment research seems clearly valuable if you care about humans who currently exist, since AIs will presumably be directly aligned to them.
However, pausing AI for alignment reasons seems pretty bad for humans who currently exist (under plausible models of the tradeoff).
I have sympathies to both utilitarianism and the view that current humans matter. The weak considerations favoring pausing AI on the utilitarian side don’t outweigh the relatively much stronger and clearer arguments against pausing for currently existing humans.
The last bullet point is a statement about my values. It is not a thesis independent of my values. I feel this was pretty explicit in the post.
If you wrote a post that just said “look, we’re super uncertain about things, here’s your reminder that there are worlds in which alignment work is negative”, I’d be on board with it. But it feels like a motte-and-bailey to write a post that is clearly trying to cause the reader to feel a particular way about some policy, and then retreat to “well my main thesis was very weak and unobjectionable”.
I’m not just saying “there are worlds in which alignment work is negative”. I’m saying that it’s fairly plausible. I’d say greater than 30% probability. Maybe higher than 40%. This seems perfectly sufficient to establish the position, which I argued explicitly, that the alternative position is “fairly weak”.
It would be different if I was saying “look out, there’s a 10% chance you could be wrong”. I’d agree that claim would be way less interesting.
I don’t think what I said resembles a motte-and-bailey, and I suspect you just misunderstood me.
[ETA:
Well, I can believe it’s weak in some absolute sense. My claim is that it’s much stronger than all of the arguments you make put together.
Part of me feels like this statement is an acknowledgement that you fundamentally agree with me. You think the argument in favor of unaligned AIs being less utilitarian than humans is weak? Wasn’t that my thesis? If you started at a prior of 50%, and then moved to 65% because of a weak argument, and then moved back to 60% because of my argument, then isn’t that completely consistent with essentially every single thing I said? OK, you felt I was saying the probability is like 50%. But 60% really isn’t far off, and it’s consistent with what I wrote (I mentioned “weak reasons” in the post). Perhaps like 80% of the reason why you disagree here is because you think my thesis was something else.
More generally I get the sense that you keep misinterpreting me as saying things that are different or stronger than what I intended. That’s reasonable given that this is a complicated and extremely nuanced topic. I’ve tried to express areas of agreement when possible, both in the post and in reply to you. But maybe you have background reasons to expect me to argue a very strong thesis about utilitarianism. As a personal statement, I’d encourage you to try to read me as saying something closer to the literal meaning of what I’m saying, rather than trying to infer what I actually believe underneath the surface.]
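The 50% → 65% → 60% arithmetic in the ETA above can be made precise by expressing each step as a Bayes factor (the ratio of posterior odds to prior odds). This is just a sketch of the hypothetical numbers from the comment, not a claim about anyone’s actual credences:

```python
# Hypothetical numbers from the comment: prior 50%, a "weak" argument
# moves it to 65%, and a counter-argument moves it back to 60%.
# Bayes factors near 1 correspond to weak evidence.

def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

prior = 0.5
after_first = 0.65   # update from the "AIs will be different" argument
after_second = 0.60  # update back from the counter-arguments

# Bayes factor implied by each step: posterior odds / prior odds.
bf1 = to_odds(after_first) / to_odds(prior)
bf2 = to_odds(after_second) / to_odds(after_first)

print(round(bf1, 3))  # → 1.857: a modest update toward the claim
print(round(bf2, 3))  # → 0.808: an even smaller update back
```

On this framing, both updates are small, which is consistent with describing all the arguments involved as weak while still disagreeing about which side of 50% the final number lands on.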
I have lots of other disagreements with the rest of what you wrote, although I probably won’t get around to addressing them. I mostly think we just disagree on some basic intuitions about how alien-like default unaligned AIs will actually be in the relevant senses. I also disagree with your reversal tests, because I think they’re not actually symmetric, and I think you’re omitting the best arguments for thinking that they’re asymmetric.
I can believe that if the population you are trying to predict for is just humans, almost all of whom have at least some affective empathy. But I’d feel pretty surprised if this were true in whatever distribution over unaligned AIs we’re imagining. In particular, I think if there’s no particular reason to expect affective empathy in unaligned AIs, then your prior on it being present should be near-zero (simply because there are lots of specific claims about unaligned AIs about that complicated most of which will be false). And I’d be surprised if “zero vs non-zero affective empathy” was not predictive of utilitarian motivations.
I definitely agree that AIs might feel pleasure and pain, though I’m less confident in it than you seem to be. It just seems like AI cognition could be very different from human cognition. For example, I would guess that pain/pleasure are important for learning in humans, but it seems like this is probably not true for AI systems in the current paradigm. (For gradient descent, the learning and the cognition happen separately—the AI cognition doesn’t even get the loss/reward equivalent as an input so cannot “experience” it. For in-context learning, it seems very unclear what the pain/pleasure equivalent would be.)
I agree this is possible. But ultimately I’m not seeing any particularly strong reasons to expect this (and I feel like your arguments are mostly saying “nothing rules it out”). Whereas I do think there’s a strong reason to expect weaker tendencies: AIs will be different, and on average different implies fewer properties that humans have. So aggregating these I end up concluding that unaligned AIs will be less utilitarian in expectation.
(You make a bunch of arguments for why AIs might not be as different as we expect. I agree that if you haven’t thought about those arguments before you should probably reduce your expectation of how different AIs will be. But I still think they will be quite different.)
I don’t see why it matters if AIs are more conscious than humans? I thought the relevant question we’re debating is whether they are more likely to be utilitarians. Maybe the argument is that if they are more conscious-in-the-sense-of-feeling-pleasure-and-pain they are more likely to be utilitarians? If so I might buy that but feel like it’s a weak effect.
Sure, but a big difference is that no human cares about pyramid-maximization, whereas some humans are utilitarians?
(Maybe some humans do care about pyramid-maximization? I’d need to learn more about those humans before I could have any guess about whether to prefer humans over AIs.)
I would say “fairly convergent function of biologically evolved intelligence”. Evolution faced lots of constraints we don’t have in AI design. For example, cognition and learning had to be colocated in space and time (i.e. done in a single brain), whereas for AIs these can be (and are) separated. Seems very plausible that consciousness-in-the-sense-of-feeling-pleasure-and-pain is a solution needed under the former constraint but not the latter. (Maybe I’m at 20% chance that something in this vicinity is right, though that is a very made-up number.)
Here are a few (long, but high-level) comments I have before responding to a few specific points that I still disagree with:
I agree there are some weak reasons to think that humans are likely to be more utilitarian on average than unaligned AIs, for basically the reasons you talk about in your comment (I won’t express individual agreement with all the points you gave that I agree with, but you should know that I agree with many of them).
However, I do not yet see any strong reasons supporting your view. (The main argument seems to be: AIs will be different than us. You label this argument as strong but I think it is weak.) More generally, I think that if you’re making hugely consequential decisions on the basis of relatively weak intuitions (which is what I believe many effective altruists do in this context), you should be very cautious. The lack of robust evidence for your position seems sufficient, in my opinion, for the main thesis of my original post to hold. (I think I was pretty careful in my language not to overstate the main claims.)
I suspect you may have an intuition that unaligned AIs will be very alien-like in certain crucial respects, but I predict this intuition will ultimately prove to be mistaken. In contrast, I think the fact that these AIs will be trained on human-generated data and deliberately shaped by humans to fulfill human-like functions and to be human-compatible should be given substantial weight. These factors make it quite likely, in my view, that the resulting AI systems will exhibit utilitarian tendencies to a significant degree, even if they do not share the preferences of either their users or their creators (for instance, I would guess that GPT-4 is already more utilitarian than the average human, in a meaningful sense).
There is a strong selection pressure for AIs to display outward behaviors that are not overly alien-like. Indeed, the pressure seems to be for AIs to be inhumanly altruistic and kind in their actions. I am not persuaded by the idea that it’s probable for AIs to be entirely human-compatible on the surface while being completely alien underneath, even if we assume they do not share human preferences (e.g., the “shoggoth” meme).
I disagree with the characterization that my argument relies primarily on the notion that “you can’t rule out” the possibility of AIs being even more utilitarian than humans. In my previous comment, I pointed out that AIs could potentially have a higher density of moral value per unit of matter, and I believe there are straightforward reasons to expect this to be the case, as AIs could be optimized very efficiently in terms of physical space. This is not merely a “you can’t rule it out” type of argument, in my view.
Similarly, in the post, I pointed out that humans have many anti-utilitarian intuitions and it seems very plausible that AIs would not share (or share fewer of) these intuitions. To give another example (although it was not prominent in the post), in a footnote I alluded to the idea that AIs might care more about reproduction than humans (who by comparison, seem to want to have small population sizes with high per-capita incomes, rather than large population sizes with low per capita incomes as utilitarianism would recommend). This too does not seem like a mere “you cannot rule it out” argument to me, although I agree it is not the type of knockdown argument you’d expect if my thesis were stated way stronger than it actually was.
I think you may be giving humans too much credit for being slightly utilitarian. To the extent that there are indeed many humans who are genuinely obsessed with actively furthering utilitarian objectives, I agree that your argument would have more force. However, I think that this is not really what we actually observe in the real world to a large degree. I think it’s exaggerated at least; even within EA I think that’s somewhat rare.
I suspect there is a broader phenomenon at play here, whereby people (often those in the EA community) attribute a wide range of positive qualities to humans (such as the idea that our values converge upon reflection, or the idea that humans will get inherently kinder as they get wealthier) which, in my opinion, do not actually reflect the realities of the world we live in. These ideas seem (to me) to be routinely almost entirely disconnected from any empirical analysis of actual human behavior, and they sometimes appear to be more closely related to what the person making the claim wishes to be true in some kind of idealized, abstract sense (though I admit this sounds highly uncharitable).
My hypothesis is that this tendency can maybe perhaps be explained by a deeply ingrained intuition that identifies the species boundary of “humans” as being very special, in the sense that virtually all moral value is seen as originating from within this boundary, sharply distinguishing it from anything outside this boundary, and leading to an inherent suspicion of non-human entities. This would explain, for example, why there is so much focus on “human values” (and comparatively little on drawing the relevant “X values” boundary along different lines), and why many people seem to believe that human emulations would be clearly preferable to de novo AI. I do not really share this intuition myself.
My basic thoughts here are: on the one hand we have real world data points which can perhaps relevantly inform the degree to which affective empathy actually predicts utilitarianism, and on the other hand we have an intuition that it should be predictive across beings of very different types. I think the real world data points should epistemically count for more than the intuitions? More generally, I think it is hard to argue about what might be true if real world data counts for less than intuitions.
Isn’t this the effect you alluded to, when you named reasons why some humans are utilitarians?
… This seems to be saying that because we are aligning AI, they will be more utilitarian. But I thought we were discussing unaligned AI?
I agree that the fact we are aligning AI should make one more optimistic. Could you define what you mean by “unaligned AI”? It seems quite plausible that I will agree with your position, and think it amounts to something like “we were pretty successful with alignment”.
I agree with theses like “it tentatively appears that the normative value of alignment work is very uncertain, and plausibly approximately neutral, from a total utilitarian perspective”, and would go further and say that alignment work is plausibly negative from a total utilitarian perspective.
I disagree with the implied theses in statements like “I’m not very sympathetic to pausing or slowing down AI as a policy proposal.”
If you wrote a post that just said “look, we’re super uncertain about things, here’s your reminder that there are worlds in which alignment work is negative”, I’d be on board with it. But it feels like a motte-and-bailey to write a post that is clearly trying to cause the reader to feel a particular way about some policy, and then retreat to “well my main thesis was very weak and unobjectionable”.
Some more minor comments:
Well, I can believe it’s weak in some absolute sense. My claim is that it’s much stronger than all of the arguments you make put together.
This is a pretty good example of something I’d call different! You even use the adjective “inhumanly”!
To the extent your argument is that this is strong evidence that the AIs will continue to be altruistic and kind, I think I disagree. That said, I’ve now learned that you are imagining lots of alignment work happening when creating the unaligned AIs, so maybe I’d agree depending on the specific scenario you’re imagining.
Sorry, I was being sloppy there. My actual claim is that your arguments either:
Don’t seem to bear on the question of whether AIs are more utilitarian than humans, OR
Don’t seem more compelling than the reversed versions of those arguments.
I agree that there’s a positive reason to expect AIs to have a higher density of moral value per unit of matter. I don’t see how this has any (predictable) bearing on whether AIs will be more utilitarian than humans.
Applying the reversal test:
Humans have utilitarian intuitions too, and it seems very plausible that AIs would not share (or share fewer of) these intuitions.
I don’t especially see why one of these is stronger than the other.
(And if the AI doesn’t share any of the utilitarian intuitions, it doesn’t matter at all if it also doesn’t share the anti-utilitarian intuitions; either way it still won’t be a utilitarian.)
Applying the reversal test:
AIs might care less about reproduction than humans (a large majority of whom will reproduce at least once in their life).
Personally I find the reversed version more compelling.
Fwiw my reasoning here mostly doesn’t depend on facts about humans other than binary questions like “do humans ever display property X”, since by and large my argument is “there is quite a strong chance that unaligned AIs do not have property X at all”.
Though again this might change depending on what exactly you mean by “unaligned AI”.
(I don’t necessarily disagree with your hypotheses as applied to the broader world—they sound plausible, though it feels somewhat in conflict with the fact that EAs care about AI consciousness a decent bit—I just disagree with them as applied to me in this particular comment thread.)
I don’t buy it. The “real world data points” procedure here seems to be: take two high-level concepts (e.g. affective empathy, proclivity towards utilitarianism), draw a line between them, extrapolate way way out of distribution. I think this procedure would have a terrible track record when applied without the benefit of hindsight.
I expect my arguments based on intuitions would also have a pretty bad track record, but I do think they’d outperform the procedure above.
Yup, this is an unfortunate fact about domains where you don’t get useful real world data. That doesn’t mean you should start using useless real world data.
Yes, but I think the relevance is mostly whether or not the being feels pleasure or pain at all, rather than the magnitude with which it feels it. (Probably the magnitude matters somewhat, but not very much.)
Among humans I would weakly predict the opposite effect, that people with less pleasure-pain salience are more likely to be utilitarian (mostly due to a predicted anticorrelation with logical thinking / decoupling / systemizing nature).
Just a quick reply (I might reply more in-depth later but this is possibly the most important point):
In my post I talked about the “default” alternative to doing lots of alignment research. Do you think that if AI alignment researchers quit tomorrow, engineers would stop doing RLHF etc. to their models? That they wouldn’t train their AIs to exhibit human-like behaviors, or to be human-compatible?
It’s possible my language was misleading by giving an image of what unaligned AI looks like that isn’t actually a realistic “default” in any scenario. But when I talk about unaligned AI, I’m simply talking about AI that doesn’t share the preferences of humans (either its creator or the user). Crucially, humans are routinely misaligned in this sense. For example, employees don’t share the exact preferences of their employer (otherwise they’d have no need for a significant wage). Yet employees are still typically docile, human-compatible, and assimilated to the overall culture.
This is largely the picture I think we should imagine when we think about the “default” unaligned alternative, rather than imagining that humans will create something far more alien, far less docile, and therefore something with far less economic value.
(As an aside, I thought this distinction wasn’t worth making because I thought most readers would have already strongly internalized the idea that RLHF isn’t “real alignment work”. I suspect I was mistaken, and probably confused a ton of people.)
This overlooks my arguments in section 3, which were absolutely critical to forming my opinion here. My argument here can be summarized as follows:
The utilitarian arguments for technical alignment research seem weak, because AIs are likely to be conscious like us, and also share human moral concepts.
By contrast, technical alignment research seems clearly valuable if you care about humans who currently exist, since AIs will presumably be directly aligned to them.
However, pausing AI for alignment reasons seems pretty bad for humans who currently exist (under plausible models of the tradeoff).
I have sympathies to both utilitarianism and the view that current humans matter. The weak considerations favoring pausing AI on the utilitarian side don’t outweigh the relatively much stronger and clearer arguments against pausing for currently existing humans.
The last bullet point is a statement about my values. It is not a thesis independent of my values. I feel this was pretty explicit in the post.
I’m not just saying “there are worlds in which alignment work is negative”. I’m saying that it’s fairly plausible: I’d put the probability above 30%, maybe above 40%. That seems perfectly sufficient to establish the claim I argued explicitly, namely that the case for the alternative position is “fairly weak”.
It would be different if I was saying “look out, there’s a 10% chance you could be wrong”. I’d agree that claim would be way less interesting.
I don’t think what I said resembles a motte-and-bailey, and I suspect you just misunderstood me.
[ETA:
Part of me feels like this statement is an acknowledgement that you fundamentally agree with me. You think the argument in favor of unaligned AIs being less utilitarian than humans is weak? Wasn’t that my thesis? If you started at a prior of 50%, and then moved to 65% because of a weak argument, and then moved back to 60% because of my argument, then isn’t that completely consistent with essentially every single thing I said? OK, you felt I was saying the probability is like 50%. But 60% really isn’t far off, and it’s consistent with what I wrote (I mentioned “weak reasons” in the post). Perhaps like 80% of the reason you disagree here is that you think my thesis was something else.
More generally I get the sense that you keep misinterpreting me as saying things that are different from or stronger than what I intended. That’s reasonable given that this is a complicated and extremely nuanced topic. I’ve tried to express areas of agreement when possible, both in the post and in reply to you. But maybe you have background reasons to expect me to argue a very strong thesis about utilitarianism. As a personal request, I’d encourage you to read me as saying something closer to the literal meaning of what I’m saying, rather than trying to infer what I actually believe underneath the surface.]
I have lots of other disagreements with the rest of what you wrote, although I probably won’t get around to addressing them. I mostly think we just disagree on some basic intuitions about how alien-like default unaligned AIs will actually be in the relevant senses. I also disagree with your reversal tests, because I think they’re not actually symmetric, and I think you’re omitting the best arguments for thinking that they’re asymmetric.
This, in addition to the comment I previously wrote, will have to suffice as my reply.