I think this area may well be one of the most promising new cause areas for EA. Getting into the general public’s consciousness the idea that aging may be [to an extent] treatable, as well as influencing regulatory bodies (the FDA, EMA, and such) and the professional community to make developing anti-aging treatments easier, is both tractable and neglected, and if done successfully it could affect the allocation of the huge governmental and private funds currently directed to drug discovery, thus creating a leverage effect. It’s very unfortunate this got as few votes as it did.
Alex P
[Question] Why can’t AGIs’ utility outweigh humans’ utility?
Ok, so here’s my takeaway from the answers so far:
Most flavors of utilitarianism (except for preference utilitarianism) don’t count just any goal-having agent achieving its goals as utility. Instead, there is assumed to be some metric of similarity between the goals and/or mental states of the agent and those of humans, and the agent’s achievement of its goals counts less toward total utility the lower this similarity metric is, so completely alien agents achieving their alien goals and [non-]experiencing alien non-joy about it don’t register as adding utility.
How exactly this metric should be formulated is disputed and fuzzy, and quite often a lot of this fuzziness and uncertainty is swept under the rug with the word “sentience” (or something similar) written on it.
Additionally, the proportion of EAs who would seriously consider “all humans replaced by [a particular kind of] AIs” as an acceptable outcome may not be as trivial as I assumed.
Please let me know if I’m grossly misunderstanding or misrepresenting something, and thank you everyone for your explanations!
>It’s hard to imagine AI systems having this
Why? As per instrumental convergence, any advanced AI is likely to have a self-preservation drive, and the negative reward signal it would receive upon a violation of that drive would be functionally very similar to pain (give or take the bodily component, but I don’t think that’s required? Otherwise simulating a million human minds in agony would be OK, and I assume we agree it’s not). Likewise, any system with goal-directed agentic behavior would experience some reward from moving toward its goals, which seems functionally very similar to pleasure (or satisfaction or something along those lines).
So two questions (please also see my reply to HjalmarWijk for context):
Do you, on these grounds, think that insect suffering (and everything more exotic) is meaningless? Because our last common ancestor with insects hardly had any neurons, and unsurprisingly our neuronal architectures are very different, so there aren’t many reasons to expect any isomorphism between our “mental” processes.
Assuming an AI is sentient (in whatever sense you put into this word) but otherwise not meaningfully isomorphic to humans, how do you define a “positive” inner life in that case?
Not quite sure, but as far as I understand, only the top 10 or so voted posts were getting any funding within this contest, and the contest is closed by now. There are definitely other ways to get funding from EA, but I’m one of the least qualified people on this forum to advise on those. Jack Harley (of LongevityWiki fame) is probably the right person to ask about other avenues; he’s much more involved with the community, and he’s working on a similar task: public engagement around longevity and anti-aging.
In the recent interview with Katja Grace referenced on ACX, she mentioned that many people may be opposed to slowing down AI progress because (I’m paraphrasing) they perceive AGI as a genie that will solve longevity and other problems and bring about the Cool Transhumanist Future, which they won’t see otherwise due to their age and/or longer timelines. I have been hanging out in longevity/anti-aging spaces for a while, and this perspective is exceedingly common there. People are very hyped about AI coming and curing aging, and they dismiss any concerns about AI safety.
From this point of view, solving aging, or even just making tangible progress towards LEV, would make those people less resistant to the notion of slowing/stopping AI progress. This is complementary to the often-mentioned idea that longer life expectancy would cause people to care more about AGI (and other existential risks). This of course doesn’t imply that anti-aging is more important than direct work on AI alignment, but 1) it is likely more tractable, and 2) not everyone can or wants to work on AI alignment directly.