And there is a distinction I haven’t seen you acknowledge: while high “quality” doesn’t require humans to be around, I ultimately judge quality by my values.
Is there any particular reason why you are partial towards humans generically controlling the future, relative to this particular current generation of humans? To me, it seems like being partial to one’s own values, one’s community, and especially one’s own life, generally leads to an even stronger argument for accelerationism, since the best way to advance your own values is generally to actually “be there” when AI happens.
In my opinion, the main relevant alternative to this view is to be partial to the human species, as opposed to being partial to either one’s current generation, or oneself. And I think the human species is kind of a weird category to be partial to, relative to those other things. Do you disagree?
I agree with this.
the best way to advance your own values is generally to actually “be there” when AI happens.
I (strongly) disagree with this. Me being alive is a relatively small part of my values. And since I am not the director of the world, me personally being around to influence things is unlikely to have a decisive impact on things I value.
In more detail: sure, all else being equal, me being there when AI happens is mildly helpful. But the outcome of building AI seems to be a function of, among other things, (i) the values of the people building it, (ii) how much reflection they can do on those values, and (iii) the environment dynamics these people are subject to (e.g., the current race dynamics between AI companies). And over time, I expect the potential drift in (i) away from my values to be far outweighed by gains in (ii) and (iii).
The first issue is about (i): it is not actually me building the AGI, either now or in the future. But I am willing to grant that (all else being equal) the current generation is more likely to have values close to mine.
However, I expect the factors (ii) and (iii) to be just as influential. Regarding (ii), it seems we keep making progress in philosophy, ethics, etc., and to me, this currently far outweighs the value drift in (i).
Regarding (iii), my impression is that the current situation is so bad that it can’t get much worse, and we might as well wait. This of course depends on how likely you think a bad outcome is if we either (a) get superintelligence without additional progress on alignment or (b) get widespread human-level AI with no progress on alignment, institution design, etc.
Me being alive is a relatively small part of my values.
I agree some people (such as yourself) might be extremely altruistic, and therefore might not care much about their own lives relative to other values they hold, but this position is fairly uncommon. Most people care a lot about their own lives (and especially the lives of their family and friends) relative to other things they care about. We can test this hypothesis empirically by looking at how people choose to spend their time and money; the results generally show that people spend their money on themselves, their family, and their friends.
since I am not the director of the world, me personally being around to influence things is unlikely to have a decisive impact on things I value.
You don’t need to be the director of the world to have influence. You can be just a small part of the world and still influence the things you care about. This is essentially what you’re already doing by living and using your income to make decisions that satisfy your own preferences. I’m claiming this situation could, and probably will, persist into the indefinite future for the agents that exist then.
I’m very skeptical that there will ever be a “director of the world” in any strong sense. And I doubt the developer of the first AGI will come even remotely close to becoming the director of the world (including versions of them that have reflected on moral philosophy, etc.). You might want to read my post about this.