If I had to pick a second consideration I’d go with:
After millions of years of life (or far longer) and massive cognitive enhancement, how post-humans would act isn't clearly well predicted by their current behavior.
Again, I’d like to stress that my claim is:
Also, to be clear: none of the considerations I listed makes a clear and strong case for unaligned AI being less morally valuable. They do, however, suggest that the relevant argument is very different from the considerations you seem to be listing. In particular, I think value won't come from incidental consumption.