Thanks! This is great context and a great way to ask for specifics. :-)
I think the situation is like this: I’m hypothetically in a position to exercise a lot of power over reproductive choices, perhaps by backing tax plans which either reward or punish having children. I think what you’re asking is: “suppose you know that your plan to offer a child tax credit will result in a miserable population; should you stay with the plan because there’ll be so many miserable people that it’ll come out better on utilitarian grounds?” The answer is no, I should not do that. I shouldn’t exercise whatever power I have to make a world which I believe will contain a lot of miserable people.
I think a better power-inversion question is: “suppose you are given dictatorial control of one million miserable and hungry people. Should you slaughter 999,000 of them so the other 1,000 can be well fed and happy?” My answer is, again, unsurprisingly, no: I shouldn’t use dictatorial power to genocide this unhappy group. Instead I should use it to implement policies I think will lead, over time, to a sustainable 1,000-member happy population, perhaps via the same kind of anti-natalist policies that would be abhorrent in other, happier circumstances.
I suspect this is something I share with you: consequentialism’s advice is imperfect. My sense is that it’s imperfect mostly not because of unfamiliar galactic-scale reasons or other failures to react to odd situations involving unbelievably powerful political forces. If that were the only place it broke down, it would be mostly immaterial to considering alternatives to consequentialism in everyday situations (IMO).
Completely agree that it is difficult to find “uniquely human” behaviors that seem indicative of consciousness, since animals share so many of them.
For animals which don’t rear young, I’m much more likely to believe their behaviors are largely genetically determined, and therefore operate on time scales that don’t really satisfy what I think it makes sense to call consciousness. I’m thinking of the famous Sphex wasp hacks, for instance, where complex behavior turns out to be pretty algorithmic and likely not indicative of anything approximating consciousness. Thanks for the pointer to the report!
WRT AI consciousness, I work on ML systems and have a lot of exposure to sophisticated models. My sense is that we are not close to that threshold, even with systems that are obviously able to pass naive Turing tests (and have). My sense is we now have a really powerful approach to world-model-building in unsupervised noise prediction, and that current techniques (including RL) are just nowhere near enough to provide the kind of interiority AI systems would need before I’d start worrying there are conscious elements there.
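To make “unsupervised noise prediction” concrete, here’s a minimal sketch of that kind of objective: corrupt some tokens in a sequence and train a small model to recover the originals. Everything here (the TinyDenoiser class, the toy vocabulary, the GRU standing in for a transformer stack) is purely illustrative and not taken from any particular system.

```python
# Minimal sketch of a noise-prediction (denoising) objective, assuming PyTorch.
# All names, sizes, and hyperparameters below are toy choices for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, MASK_ID = 100, 32, 0  # toy vocabulary; token 0 reserved as the corruption token

class TinyDenoiser(nn.Module):
    """A deliberately tiny stand-in for a large sequence model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.mix = nn.GRU(DIM, DIM, batch_first=True)  # stand-in for a transformer stack
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, noisy_tokens):
        hidden, _ = self.mix(self.embed(noisy_tokens))
        return self.head(hidden)  # logits over the vocabulary at every position

def noise_prediction_step(model, optimizer, clean, corrupt_prob=0.15):
    """One self-supervised step: corrupt some positions, train to recover them."""
    corrupted = torch.rand(clean.shape) < corrupt_prob           # which positions get noised
    noisy = clean.masked_fill(corrupted, MASK_ID)                # replace them with the noise token
    logits = model(noisy)
    loss = F.cross_entropy(logits[corrupted], clean[corrupted])  # predict the originals there
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fake_data = torch.randint(1, VOCAB, (8, 16))  # 8 random "sentences" of 16 tokens each
print(noise_prediction_step(model, opt, fake_data))
```

The point of the sketch is just that the training signal is entirely self-supervised: the “labels” are the uncorrupted data itself, which is why this recipe scales so well as a world-model-builder while saying nothing, on its own, about interiority.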
IOW, I’m not a “scale is all you need” person; I don’t think current ideas on memory/long-range augmentation or current planning-style long-range state modeling are workable. I mean, maybe times 10^100 it is all you need? But that’s just sort of another way of saying it isn’t. :-) The sort of “self-talk” modularity being experimented with in some LLMs strikes me as the most promising current direction for this (e.g. the LaMDA paper), but currently the scale and ingredients are way too limited for that to emerge, IMO.
I do suspect that building conscious AI will teach us way more about non-verbal-report consciousness. We have some access to these mechanisms via neuroscience experiments, but it is difficult going. My belief is we already have enough of those experiments to be quite certain many animals share something best called conscious experience.