I think that AI welfare should be an EA priority, and I'm working on it myself. I think this post is a good illustration of what that means, and the ~5% figure seems reasonable to me. I also appreciate this post, as it captures many of my core motivations. I recently spent several months thinking hard about the most effective philosophy PhD project I could work on, and concluded that it was working on AI consciousness.
Have you considered working on metaphilosophy / AI philosophical competence instead? Conditional on correct philosophy about AI welfare being important, most of future philosophical work will probably be done by AIs (to help humans / at our request, or for their own purposes). If AIs do that work badly and arrive at wrong conclusions, then all the object-level philosophical work we do now might only have short-term effects and count for little in the long run. (Conversely if we have wrong views now but AIs correct them later, that seems less disastrous.)
I hadn’t, that’s an interesting idea, thanks!
Thanks for letting me know! I have been wondering for a while why AI philosophical competence is so neglected, even compared to other subareas of what I call “ensuring a good outcome for the AI transition” (which are all terribly neglected in my view), and I appreciate your data point. Would be interested to hear your conclusions after you’ve thought about it.
Executive summary: The author argues that AI welfare is an important and neglected area that deserves more attention and resources, as decisions made now about AI systems could have enormous long-term consequences for the wellbeing of digital minds.
Key points:
AI welfare has direct longtermist relevance, with potential to significantly impact the moral value of the future.
Work on AI welfare may have synergies with other longtermist priorities like AI alignment and safety.
Near-term moral considerations support taking AI welfare seriously, as AI systems may soon surpass animals in ethical importance.
The field is highly neglected relative to its potential importance, especially from a practical perspective.
Progress on both philosophical and practical questions related to AI moral patienthood appears tractable.
The author recommends scaling up resources for AI welfare to ~5% of those going to AI safety/alignment.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Exciting! I’d be curious to hear more about your current projects and maybe help if possible! Is there a big-tent Slack channel or something similar for work in this space?