Great question, thank you for working on this. An inter-cause-prio-crux that I have been wondering about is something along the lines of:
“How likely is it that a world where AI goes well for humans also goes well for other sentient beings?”
The question could probably be made more precise and nuanced, but specifically, I would want to assess whether “trying to make AI go well for all sentient beings” is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures. The latter would be supported if, for example, making AI go well for humans inevitably leads to, or is at least necessary for, AI going well for all sentient beings. If it is merely necessary, the answer would further depend on how likely AI is to go well for humans in the first place; either way, a general assessment of AI futures that go well for humans would be a great and useful starting point for me.
I also think explicit estimates of exactly how neglected a (sub-)cause area is (e.g., in FTEs or total funding) would greatly inform some of the inter-cause-prio questions I have been wondering about. Assuming explicit marginal cost-effectiveness estimates aren't really feasible, neglectedness is the proxy I refer to most often, and the one I most lack solid numbers on.