In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I’m still quite confused about why many people seem to disagree with the view I expressed, and I’m interested in engaging more to get a better understanding of their perspective.
At the least, I thought I’d write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.
The core thesis that I was trying to defend is the following view:
My view: It is likely that by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce utilitarian value comparable to that produced by humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense and share human moral concepts, since they will be trained on human data.
Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me, in which they define AI alignment in moral terms rather than as the ability to make an AI share the preferences of the AI’s operator.
But beyond these two objections, which I feel I understand fairly well, there’s also significant disagreement about other questions. Based on my discussions, I’ve attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone’s views on this subject:
Perceived counter-argument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity’s modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.
Again, I’m not sure if this summary accurately represents what people believe. However, it’s what some seem to be saying. I personally think this argument is weak. But I feel I’ve had trouble making my views very clear on this subject, so I thought I’d try one more time to explain where I’m coming from here. Let me respond to the two main parts of the argument in some amount of detail:
(i) “The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives.”
My response:
I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.
I think the intuition behind the argument here is something like this:
In the future, it will become possible to create “hedonium”—matter that is optimized to generate the maximum amount of utility or well-being. If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value.
The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe’s resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering.
Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would otherwise generate comparable levels of value to aligned AIs in all other respects.
If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter-optimized-for-happiness is more likely to be driven by the far more common motives of self-interest and concern for one’s inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.
In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about regardless. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.
In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn’t seem to provide much reason to think that it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.
To be clear, I think it’s totally possible that there’s something about this argument that I’m missing, and there are a lot of potential objections I’m skipping over here. But on a basic level, I mostly just lack the intuition that the thing we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the aforementioned reasons. The fact that our current world isn’t well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.
(ii) “At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity’s modest (but non-negligible) utilitarian tendencies.”
My response:
Since only a small portion of humanity is explicitly utilitarian, the argument’s own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity’s limited utilitarian impulses. While I agree we shouldn’t assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems “more utilitarian” than the default (primarily self-oriented) impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.
Moreover, as I’ve said previously, it’s probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences.
To see where I’m coming from, consider how humans routinely are “misaligned” with each other, in the sense of not sharing each other’s preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences from them. This picture is pretty much how I think we should primarily think about unaligned AIs that are trained on human data, and shaped heavily by techniques like RLHF or DPO.
Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I’m simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.
Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than “if we fail to align AIs, then there’s a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead”. If low probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work.
While it’s appropriate to consider even low-probability risks when the stakes are high, I’m doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests. In particular, as I’ve said before, I think it’s quite reasonable to think that we should align AIs to humans for the sake of humans. In other words, I think it’s perfectly reasonable to admit that solving AI alignment might be a great thing to ensure human flourishing in particular.
But if you’re a utilitarian, and not particularly attached to human preferences per se (i.e., you’re non-speciesist), I don’t think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.
My proposed counter-argument, loosely based on the structure of yours.
Summary of claims
A reasonable fraction of computational resources will be spent based on the result of careful reflection.
I expect to be reasonably aligned with the result of careful reflection from other humans.
I expect to be much less aligned with the result of AIs-that-seize-control reflecting, due to less similarity and the potential for AIs to pursue relatively specific objectives from training (things like reward-seeking).
Many arguments that human resource usage won’t be that good seem to apply equally well to AIs and thus aren’t differential.
Full argument
The vast majority of future value, from my perspective on reflection (a perspective that is probably somewhat utilitarian, though this is somewhat unclear), will come from agents who are trying to optimize explicitly for doing “good” things and are being at least somewhat thoughtful about it, rather than from those who incidentally achieve utilitarian objectives. (By “good”, I just mean what seems to them to be good.)
At present, the moral views of humanity are a hot mess. However, it seems likely to me that a reasonable fraction of the total computational resources of our lightcone (perhaps 50%) will in expectation be spent based on the result of a process in which an agent or some agents think carefully about what would be best, in a pretty deliberate and relatively wise way. This could involve eventually deferring to other smarter/wiser agents or massive amounts of self-enhancement. Let’s call this a “reasonably-good-reflection” process.
Why think a reasonable fraction of resources will be spent like this?
If you self-enhance and get smarter, this sort of reflection on your values seems very natural. The same for deferring to other smarter entities. Further, entities in control might live for an extremely long time, so if they don’t lock in something, as long as they eventually get around to being thoughtful it should be fine.
People who don’t reflect like this probably won’t care much about having vast amounts of resources and thus the resources will go to those who reflect.
The argument for “you should be at least somewhat thoughtful about how you spend vast amounts of resources” is pretty compelling at an absolute level and will be more compelling as people get smarter.
Currently a variety of moderately powerful groups are pretty sympathetic to this sort of view and the power of these groups will be higher in the singularity.
I expect that I am pretty aligned (on reasonably-good-reflection) with the result of random humans doing reasonably-good-reflection, as I am also a human, and many of the underlying arguments/intuitions that seem important to me seem likely to also seem important to many other humans (given various common human intuitions) upon those humans becoming wiser. Further, I really just care about the preferences of (post-)humans who end up caring most about using vast, vast amounts of computational resources (assuming I end up caring about these things on reflection), because the humans who care about other things won’t use most of the resources. Additionally, I care “most” about the on-reflection preferences I have which are relatively less contingent and more common among at least humans, for a variety of reasons. (One way to put this is that I care less about worlds in which my preferences on reflection seem highly contingent.)
So, I’ve claimed that reasonably-good-reflection resource usage will be non-trivial (perhaps 50%) and that I’m pretty aligned with humans on reasonably-good-reflection. Supposing these claims, why think that most of the value comes from something like reasonably-good-reflection preferences rather than other things, e.g. not-very-thoughtful indexical-preference (selfish) consumption? Broadly, for three reasons:
I expect huge returns to heavy optimization of resource usage (similar to the returns on spending altruistic resources today, IMO; and in the future we’ll be smarter, which will make this effect stronger).
I don’t think that (even heavily optimized) not-very-thoughtful indexical preferences directly result in things I care that much about relative to things optimized for what I care about on reflection (e.g. it probably doesn’t result in vast, vast, vast amounts of experience which is optimized heavily for goodness/$).
Consider how billionaires currently spend money, which doesn’t seem to have much direct value, certainly not relative to their altruistic expenditures.
I find it hard to imagine that indexical selfish consumption results in things like simulating 10^50 happy minds. See also my other comment. It seems more likely, IMO, that people with selfish preferences mostly just buy positional goods that involve little to no experience. (Separately, I expect this means that people without selfish preferences get more of the compute, but this is counted in my earlier argument, so we shouldn’t double count it.)
I expect that indirect value “in the minds of the laborers producing the goods for consumption” is also small relative to things optimized for what I care about on reflection. (It seems pretty small or maybe net-negative (due to factory farming) today (relative to optimized altruism) and I expect the share will go down going forward.)
(Aside: I was talking about not-very-thoughtful indexical preferences. It seems likely to me that doing a reasonably good job of reflecting on selfish preferences gets you back to something like de facto utilitarianism (at least as far as how you spend the vast majority of computational resources), because personal identity and indexical preferences don’t make much sense, and the thing you end up thinking is more like “I guess I just care about experiences in general”.)
What about AIs? I think there are broadly two main reasons to expect that what AIs do on reasonably-good-reflection will be worse from my perspective than what humans do:
As discussed above, I am more similar to other humans and when I inspect the object level of how other humans think or act, I feel reasonably optimistic about the results of reasonably-good-reflection for humans. (It seems to me like the main thing holding me back from agreement with other humans is mostly biases/communication/lack of smarts/wisdom given many shared intuitions.) However, AIs might be more different and thus result in less value. Further, the values of humans after reasonably-good-reflection seem close to saturating in goodness from my perspective (perhaps 1⁄3 or 1⁄2 of the value of purely my values), so it seems hard for AI to do better.
To better understand this argument, imagine that instead of humanity, the question was between identical clones of myself and AIs. It’s pretty clear I share the same values as the clones, so the clones do pretty much strictly better than AIs (up to self-defeating moral views).
I’m uncertain about the degree of similarity between myself and other humans. But mostly, the underlying similarity uncertainties also apply to AIs. So, e.g., maybe I currently think that on reasonably-good-reflection humans spend resources 1⁄3 as well as I would, and AIs spend resources 1⁄9 as well. If I updated to think that other humans after reasonably-good-reflection only spend resources 1⁄10 as well as I do, I might also update to thinking AIs spend resources 1⁄100 as well.
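The co-movement of these estimates can be put in a small toy model (a sketch of my own, not something the comment commits to; the geometric “per-step similarity discount” structure and the function name are assumptions introduced purely for illustration):

```python
# Toy model: treat the value of an agent's reflected resource use as a
# shared per-step similarity discount raised to the number of similarity
# steps between me and that agent. Here other humans are one step away
# and AIs-that-seize-control are two steps away, so any revision to the
# shared discount moves the human and AI estimates together.

def value_fraction(step_discount: float, steps: int) -> float:
    """Fraction of my values-on-reflection captured after `steps` similarity steps."""
    return step_discount ** steps

# Initial guess: each similarity step keeps 1/3 of the value.
assert abs(value_fraction(1/3, 1) - 1/3) < 1e-9    # humans: 1/3
assert abs(value_fraction(1/3, 2) - 1/9) < 1e-9    # AIs: 1/9

# Revising the shared discount down to 1/10 lowers both at once:
assert abs(value_fraction(1/10, 1) - 1/10) < 1e-9   # humans: 1/10
assert abs(value_fraction(1/10, 2) - 1/100) < 1e-9  # AIs: 1/100
```

The only point of the sketch is that the two estimates share a common uncertain factor, so learning that the factor is smaller lowers the human and AI estimates in tandem rather than favoring one over the other.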
In many of the stories I imagine for AIs seizing control, very powerful AIs end up directly pursuing close correlates of what was reinforced in training (sometimes called reward-seeking, though I’m trying to point at a more general notion). Such AIs are reasonably likely to pursue relatively obviously valueless-from-my-perspective things on reflection. Overall, they might act more like an ultra-powerful corporation that just optimizes for power/money than like our children (see also here). More generally, AIs might in some sense be subjected to wildly higher levels of optimization pressure than humans while being better able to internalize these values (no genetic bottleneck), which can plausibly result in “worse” values from my perspective.
Note that we’re conditioning on safety/alignment technology failing to retain human control, so we should imagine correspondingly less human control over AI values.
I think the fraction of computational resources of our lightcone used based on the result of a reasonably-good-reflection process seems similar between human control and AI control (perhaps 50%). It’s possible to mess this up, of course, either by messing up the reflection or by locking in bad values too early. But when I look at the balance of arguments, humans messing this up seems pretty similar to AIs messing this up, to me. So, the main question is what the result of such a process would be. One way to put this is that I don’t expect humans to differ substantially from AIs in terms of how “thoughtful” they are.
I interpret one of your arguments as being “Humans won’t be very thoughtful about how they spend vast, vast amounts of computational resources. After all, they aren’t thoughtful right now.” To the extent I buy this argument, I think it applies roughly equally well to AIs. So naively, it just divides both sides rather than making AI look more favorable. (At least, if you accept that almost all of the value comes from being at least a bit thoughtful, which you also contest. See my arguments for that.)
“In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about regardless. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.”
Suppose that a single misaligned AI takes control, and it happens to care somewhat about its own happiness while not having any more “altruistic” tendencies that I would care about or you would care about. (I think misaligned AIs which seize control caring substantially about their own happiness seems less likely than not, but let’s suppose this for now.) (I’m saying “single misaligned AI” for simplicity; I get that a messier coalition might be in control.) It now has access to vast amounts of computation, after sending out huge numbers of probes to take control of all available energy. This is enough computation to run absolutely absurd amounts of stuff.
What are you imagining it spends these resources on that is competitive with optimized goodness? Running >10^50 copies of itself which are heavily optimized for being as happy as possible?
If a small number of agents have a vast amount of power, and these agents don’t (eventually, possibly after a large amount of thinking) want to do something which is de facto like the values I end up caring about upon reflection (which is probably, though not certainly, vaguely like utilitarianism in some sense), then from my perspective it seems very likely that the resources will be squandered.
If you’re imagining something like:
(1) It thinks carefully about what would make “it” happy.
(2) It realizes it cares about having as many diverse good experience moments as possible in a non-indexical way.
(3) It realizes that heavy self-modification would result in these experience moments being better and more efficient, so it creates new versions of “itself” which are radically different and produce more efficiently good experiences.
(4) It realizes it doesn’t care much about the notion of “itself” here and mostly just focuses on good experiences.
(5) It runs vast numbers of such copies with diverse experiences.
Then this is just something like utilitarianism by another name, arrived at via a different line of reasoning.
I thought your view was that step (2) in this process won’t go like this. E.g., currently selfish entities will retain indexical preferences. If so, then I don’t see where the goodness can plausibly come from.
“The fact that our current world isn’t well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.”
When I look at very rich people (people with >$1 billion), it seems like the dominant way they make the world better via spending money (not via making money!) is thoughtful altruistic giving, not consumption.
Perhaps your view is that with the potential for digital minds this situation will change?
(Also, it seems very plausible to me that the dominant effect on current welfare is driven mostly by the effect on factory farming and other animal welfare.)
I expect this trend to further increase as people get much, much wealthier and some fraction (probably most) of them get much, much smarter and wiser with intelligence augmentation.
In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I’m still quite confused about why many people seem to disagree with the view I expressed, and I’m interested in engaging more to get a better understanding of their perspective.
At the least, I thought I’d write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.
The core thesis that was trying to defend is the following view:
My view: It is likely that by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce comparable utilitarian value compared to humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense, and they will likely share human moral concepts, since they will be trained on human data.
Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me in which they define AI alignment in moral terms, rather than the ability to make an AI share the preferences of the AI’s operator.
But beyond these two objections, which I feel I understand fairly well, there’s also significant disagreement about other questions. Based on my discussions, I’ve attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone’s views on this subject:
Perceived counter-argument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity’s modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.
Again, I’m not sure if this summary accurately represents what people believe. However, it’s what some seem to be saying. I personally think this argument is weak. But I feel I’ve had trouble making my views very clear on this subject, so I thought I’d try one more time to explain where I’m coming from here. Let me respond to the two main parts of the argument in some amount of detail:
(i) “The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives.”
My response:
I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.
I think the intuition behind the argument here is something like this:
In the future, it will become possible to create “hedonium”—matter that is optimized to generate the maximum amount of utility or well-being. If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value.
The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe’s resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering.
Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would otherwise generate comparable levels of value to aligned AIs in all other respects.
If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter-optimized-for-happiness is more likely to be driven by the far more common motives of self-interest and concern for one’s inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.
In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about regardless. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.
In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn’t seem to provide much reason to think that it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.
To be clear, I think it’s totally possible that there’s something about this argument that I’m missing here. And there are a lot of potential objections I’m skipping over here. But on a basic level, I mostly just lack the intuition that the thing we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the aforementioned reasons. The fact that our current world isn’t well described by the idea that what matters most is the number of explicit utilitarians, strengthens my point here.
(ii) “At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity’s modest (but non-negligible) utilitarian tendencies.”
My response:
Since only a small portion of humanity is explicitly utilitarian, the argument’s own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity’s limited utilitarian impulses. While I agree we shouldn’t assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems “more utilitarian” than the default (primarily selfish-oriented) impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.
Moreover, as I’ve said previously, it’s probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences.
To see where I’m coming from, consider how humans routinely are “misaligned” with each other, in the sense of not sharing each other’s preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences from them. This picture is pretty much how I think we should primarily think about unaligned AIs that are trained on human data, and shaped heavily by techniques like RLHF or DPO.
Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I’m simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.
Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than “if we fail to align AIs, then there’s a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead”. If low probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work.
While it’s appropriate to consider even low-probability risks when the stakes are high, I’m doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests. In particular, as I’ve said before, I think it’s quite reasonable to think that we should align AIs to humans for the sake of humans. In other words, I think it’s perfectly reasonable to admit that solving AI alignment might be a great thing to ensure human flourishing in particular.
But if you’re a utilitarian, and not particularly attached to human preferences per se (i.e., you’re non-speciesist), I don’t think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.
My proposed counter-argument, loosely based on the structure of yours.
Summary of claims
A reasonable fraction of computational resources will be spent based on the result of careful reflection.
I expect to be reasonably aligned with the result of careful reflection from other humans.
I expect to be much less aligned with the result of AIs-that-seize-control reflecting, due to less similarity and the potential for AIs to pursue relatively specific objectives from training (things like reward seeking).
Many arguments that human resource usage won’t be that good seem to apply equally well to AIs and thus aren’t differential.
Full argument
The vast majority of value from my perspective on reflection (where my perspective on reflection is probably somewhat utilitarian, but this is somewhat unclear) in the future will come from agents who are trying to optimize explicitly for doing “good” things and are being at least somewhat thoughtful about it, rather than those who incidentally achieve utilitarian objectives. (By “good”, I just mean what seems to them to be good.)
At present, the moral views of humanity are a hot mess. However, it seems likely to me that a reasonable fraction of the total computational resources of our lightcone (perhaps 50%) will in expectation be spent based on the result of a process in which an agent or some agents think carefully about what would be best, in a pretty deliberate and relatively wise way. This could involve eventually deferring to other smarter/wiser agents or massive amounts of self-enhancement. Let’s call this a “reasonably-good-reflection” process.
Why think a reasonable fraction of resources will be spent like this?
If you self-enhance and get smarter, this sort of reflection on your values seems very natural. The same for deferring to other smarter entities. Further, entities in control might live for an extremely long time, so if they don’t lock in something, as long as they eventually get around to being thoughtful it should be fine.
People who don’t reflect like this probably won’t care much about having vast amounts of resources and thus the resources will go to those who reflect.
The argument for “you should be at least somewhat thoughtful about how you spend vast amounts of resources” is pretty compelling at an absolute level and will be more compelling as people get smarter.
Currently a variety of moderately powerful groups are pretty sympathetic to this sort of view and the power of these groups will be higher in the singularity.
I expect that I am pretty aligned (on reasonably-good-reflection) with the result of random humans doing reasonably-good-reflection, as I am also a human, and many of the underlying arguments/intuitions that seem important to me are likely to seem important to many other humans (given various common human intuitions) upon those humans becoming wiser. Further, I really just care about the preferences of (post-)humans who end up caring most about using vast, vast amounts of computational resources (assuming I end up caring about these things on reflection), because the humans who care about other things won’t use most of the resources. Additionally, I care “most” about the on-reflection preferences I have which are relatively less contingent and more common among at least humans, for a variety of reasons. (One way to put this is that I care less about worlds in which my preferences on reflection seem highly contingent.)
So, I’ve claimed that reasonably-good-reflection resource usage will be non-trivial (perhaps 50%) and that I’m pretty aligned with humans on reasonably-good-reflection. Supposing these, why think that most of the value is coming from something like reasonably-good-reflection preferences rather than other things, e.g., not-very-thoughtful indexical (selfish) consumption? Broadly three reasons:
I expect huge returns to heavy optimization of resource usage (similar to spending altruistic resources today, IMO, and in the future we’ll be smarter, which will make this effect stronger).
I don’t think that (even heavily optimized) not-very-thoughtful indexical preferences directly result in things I care that much about relative to things optimized for what I care about on reflection (e.g. it probably doesn’t result in vast, vast, vast amounts of experience which is optimized heavily for goodness/$).
Consider how billionaires currently spend money, which doesn’t seem to have much direct value, certainly not relative to their altruistic expenditures.
I find it hard to imagine that indexical self-ish consumption results in things like simulating 10^50 happy minds. See also my other comment. It seems more likely IMO that people with self-ish preferences mostly just buy positional goods that involve little to no experience. (Separately, I expect this means that people without self-ish preferences get more of the compute, but this is counted in my earlier argument, so we shouldn’t double-count it.)
I expect that indirect value “in the minds of the laborers producing the goods for consumption” is also small relative to things optimized for what I care about on reflection. (It seems pretty small or maybe net-negative (due to factory farming) today (relative to optimized altruism) and I expect the share will go down going forward.)
(Aside: I was talking about not-very-thoughtful indexical preferences. It’s likely to me that doing a reasonably good job reflecting on selfish preferences gets you back to something like de facto utilitarianism (at least as far as how you spend the vast majority of computational resources), because personal identity and indexical preferences don’t make much sense, and the thing you end up thinking is more like “I guess I just care about experiences in general”.)
What about AIs? I think there are broadly two main reasons to expect that what AIs do on reasonably-good-reflection will be worse from my perspective than what humans do:
As discussed above, I am more similar to other humans and when I inspect the object level of how other humans think or act, I feel reasonably optimistic about the results of reasonably-good-reflection for humans. (It seems to me like the main thing holding me back from agreement with other humans is mostly biases/communication/lack of smarts/wisdom given many shared intuitions.) However, AIs might be more different and thus result in less value. Further, the values of humans after reasonably-good-reflection seem close to saturating in goodness from my perspective (perhaps 1⁄3 or 1⁄2 of the value of purely my values), so it seems hard for AI to do better.
To better understand this argument, imagine that, instead of humanity, the question was between identical clones of myself and AIs. It’s pretty clear I share the same values as the clones, so the clones do pretty much strictly better than AIs (up to self-defeating moral views).
I’m uncertain about the degree of similarity between myself and other humans. But mostly, the underlying similarity uncertainties also apply to AIs. So, e.g., maybe I currently think that on reasonably-good-reflection humans spend resources 1⁄3 as well as I would and AIs spend resources 1⁄9 as well. If I updated to think that other humans after reasonably-good-reflection only spend resources 1⁄10 as well as I do, I might also update to thinking AIs spend resources 1⁄100 as well.
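The numbers above follow a simple proportional pattern. As a toy sketch (my own illustrative assumption, not a claim from the text): if AIs are roughly “one step further removed” from my values than other humans are, AI resource-use quality might scale like the square of the human figure, which reproduces both pairs of numbers (1⁄3 → 1⁄9, 1⁄10 → 1⁄100):

```python
# Toy model of the proportional-update intuition. The squared relationship
# is an illustrative assumption chosen only because it matches the numbers
# given in the text; it is not the author's stated model.

def ai_resource_quality(human_resource_quality: float) -> float:
    """Hypothetical: AIs are 'one step further removed' from my values
    than other humans, so their quality scales as the square."""
    return human_resource_quality ** 2

for human_q in (1 / 3, 1 / 10):
    print(f"humans: {human_q:.3f} -> AIs: {ai_resource_quality(human_q):.3f}")
```

The point of the sketch is just that updates about human similarity propagate multiplicatively to estimates about AI similarity, rather than being independent.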
In many of the stories I imagine for AIs seizing control, very powerful AIs end up directly pursuing close correlates of what was reinforced in training (sometimes called reward-seeking, though I’m trying to point at a more general notion). Such AIs are reasonably likely to pursue relatively obviously valueless-from-my-perspective things on reflection. Overall, they might act more like an ultra-powerful corporation that just optimizes for power/money rather than our children (see also here). More generally, AIs might in some sense be subjected to wildly higher levels of optimization pressure than humans while being able to better internalize these values (lack of genetic bottleneck), which can plausibly result in “worse” values from my perspective.
Note that we’re conditioning on safety/alignment technology failing to retain human control, so we should imagine correspondingly less human control over AI values.
I think that the fraction of computational resources of our lightcone used based on the result of a reasonably-good-reflection process seems similar between human control and AI control (perhaps 50%). It’s possible to mess this up, of course, and either mess up the reflection or lock in bad values too early. But, when I look at the balance of arguments, humans messing this up seems pretty similar to AIs messing this up to me. So, the main question is what the result of such a process would be. One way to put this is that I don’t expect humans to differ substantially from AIs in terms of how “thoughtful” they are.
I interpret one of your arguments as being “Humans won’t be very thoughtful about how they spend vast, vast amounts of computational resources. After all, they aren’t thoughtful right now.” To the extent I buy this argument, I think it applies roughly equally well to AIs. So naively, it just divides both sides rather than making AI look more favorable. (At least, if you accept that almost all of the value comes from being at least a bit thoughtful, which you also contest. See my arguments for that.)
Suppose that a single misaligned AI takes control and it happens to care somewhat about its own happiness, while not having any more “altruistic” tendencies that I or you would care about. (I think misaligned AIs which seize control substantially caring about their own happiness seems less likely than not, but let’s suppose this for now.) (I’m saying “single misaligned AI” for simplicity; I get that a messier coalition might be in control.) It now has access to vast amounts of computation after sending out huge numbers of probes to take control over all available energy. This is enough computation to run absolutely absurd amounts of stuff.
What are you imagining it spends these resources on which is competitive with optimized goodness? Running >10^50 copies of itself, heavily optimized for being as happy as possible?
If a small number of agents have a vast amount of power, and these agents don’t (eventually, possibly after a large amount of thinking) want to do something which is de facto like the values I end up caring about upon reflection (which is probably, though not certainly, vaguely like utilitarianism in some sense), then from my perspective it seems very likely that the resources will be squandered.
If you’re imagining something like:
It thinks carefully about what would make “it” happy.
It realizes it cares about having as many diverse good experience moments as possible in a non-indexical way.
It realizes that heavy self-modification would result in these experience moments being better and more efficient, so it creates new versions of “itself” which are radically different and produce more efficiently good experiences.
It realizes it doesn’t care much about the notion of “itself” here and mostly just focuses on good experiences.
It runs vast numbers of such copies with diverse experiences.
Then this is just something like utilitarianism by another name, via a different line of reasoning.
I thought your view was that step (2) in this process won’t go like this. E.g., currently self-ish entities will retain indexical preferences. If so, then I don’t see where the goodness can plausibly come from.
When I look at very rich people (people with >$1 billion), it seems like the dominant way they make the world better via spending money (not via making money!) is via thoughtful altruistic giving, not via consumption.
Perhaps your view is that with the potential for digital minds this situation will change?
(Also, it seems very plausible to me that the dominant effect on current welfare is driven mostly by the effect on factory farming and other animal welfare.)
I expect this trend to further increase as people get much, much wealthier and some fraction (probably most) of them get much, much smarter and wiser with intelligence augmentation.