Incidentally, I work on AI alignment and strongly agree with your points here, especially “Wild animal welfare is downstream (upstream, I think you mean?) from ~every other cause area”
I also think Wild Animal Initiative R&D may eventually wind up being extremely impactful for AI alignment.
Since it’s so unbelievably neglected and potentially high impact, I view it as a high-EV approach that could contribute enormously to AI alignment.
Additionally, and a bit more out there: the more we invest in this today, the better positioned we may be in acausal trade with future intelligences that we’d want to prioritize our wellbeing, too.
Nice! And yeah, I shouldn’t have said downstream. I mean something like, (almost) every intervention has wild animal welfare considerations (because many things end up impacting wild animals), so if you buy that wild animal welfare matters, the complexity of solving WAW problems isn’t just a problem for WAI — it’s a problem for everyone.