I’m currently working in technical AI safety, and I have two main thoughts on this:
1) We currently don’t have the ability to robustly imbue AI with ANY values, let alone values that include all animals. We need to get much further on this technical problem (the alignment problem) before we can meaningfully take actions that will improve the long-term future for animals.
2) The AI Safety community generally seems on board with animal welfare, but it’s not a significant priority, and I don’t think they take seriously the idea that there are S-risks downstream of human values (e.g. locking in wild-animal suffering). I’m personally pretty worried about this, not because I have a strong take on the probability of S-risks like these, but because the general vibe is so apathetic about this kind of thing that I don’t trust the community to notice and take action if it turned out to be a serious problem.
Thanks for your comment. Are there any actions the EA community can take to help the AI Safety community prioritize animal welfare and take more seriously the idea that there are S-risks downstream of human values?