An even deeper atheism
Joe_Carlsmith · Jan 11, 2024, 5:28 PM
26 points · 2 comments · 15 min read · EA link
I agree with the weak claim that if literally every powerful entity in the world is entirely indifferent to my welfare, it is unsurprising if I am treated poorly. But I suspect there’s a stronger claim underneath this thesis that seems more relevant to the debate, and also substantially false.
The stronger claim is: adding powerful entities to the world who don’t share our values is selfishly bad, and the more such entities we add, the worse our situation becomes (by our selfish values). This stronger claim is likely false because, if we accept the deeper-atheism premise that humans have non-overlapping utility functions, it would imply that ordinary population growth is selfishly bad. Think about it: by permitting ordinary population growth, we are filling the world with entities who don’t share our values. Population growth, in other words, causes our relative power in the world to decline.
Yet a sensible interpretation is that ordinary population growth is not bad on these grounds. I doubt it is better, selfishly, for the Earth to have 800 million people rather than 8 billion, even though I would have greater relative power in the first world than in the second. [ETA: see this comment for why I think population growth seems selfishly good on current margins.]
Similarly, I doubt it is better, selfishly, for the Earth to have 8 billion humans rather than 80 billion human-level agents, 90% of which are AIs. Likewise, I’m skeptical that it is worse for my values if there are 8 billion slightly-smarter-than-human AIs who are, individually and on average, 9 times more powerful than humans, living alongside 8 billion humans.
(This is all with the caveat that the details here matter a lot. If, for example, these AIs have a strong propensity to be warlike, or aren’t integrated into our culture, or otherwise form a natural coalition against humans, it could very well end poorly for me.)
If our argument for the inherent danger of AI applies equally well to ordinary population growth, something has gone wrong in the argument, and we should reject it, or at least revise it.
Nice post, Joe!
In my mind, there is a sense in which this last question is analogous to Neanderthals[1] asking, a few hundred thousand years ago, whether they would still be around now. They are not, but is that significant evidence that the world has gone through a much less valuable trajectory? I do not think so. What arguably matters is whether there are still beings around with the desire and ability to increase welfare. So I would instead ask, “are all intelligent welfarists dead?”, where intelligent could be interpreted as sufficiently intelligent to eventually leverage (via successors or not) the cosmic endowment to increase welfare. My question is equivalent to yours in the near term, since humans are the only intelligent welfarists now, but the answers may come apart in the next few decades thanks to (even more) intelligent sentient AI. To the extent the answers to the two questions differ, it seems important to focus on the right one.
[1] Or individuals of another species of the genus Homo. There are 12 besides Homo sapiens!