Thanks a lot for the post! I’m happy that people are trying to combine the fields of longtermism and animal welfare.
Here are a few initial thoughts from a non-professional (note that I didn’t read the full post, so I may have missed something):
I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net-positive future. I think invertebrates and artificial sentience will make up the majority of moral patients in the future. I also suspect moral circle expansion would be good across a number of different future scenarios, since it could lower the chance of s-risks and improve outcomes for animals (or artificial sentience) whether or not a lock-in scenario occurs.
I think progress on short-term direct WAW (wild animal welfare) interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions have only a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad.
(Note: I have personally noticed several people who share my values, in terms of caring greatly about WAW in the far future, who nevertheless care little about short-term interventions.)
It could of course be argued that working directly on reducing the likelihood of certain s-risks, or working on AI alignment, might be a more efficient way of ensuring a better future for animals. This may well be true; however, I think these measures are less reliable due to the uncertainty of the future.
I think Brian Tomasik has written great pieces on why the animal-focused hedonistic imperative and gene drives might be less promising and less likely than they seem. I personally also believe this is unlikely to ever happen at a large scale for wild animals. However, if it does happen and is done right (without severely disrupting ecosystems), genetic engineering could be the best way of increasing net well-being in the long term. But I haven’t thought that much about this.
Anyway, I wouldn’t be surprised if you’ve already considered all of these arguments.
I’m really looking forward to your follow-up post :)
Hey Jens, thanks a lot for your comment! I agree that moral circle expansion efforts and direct WAW interventions seem like really important elements of a portfolio of actions to bring about a net-positive future.
In terms of the unreliability of AI-specific animal advocacy actions given the uncertainty of the future: I guess there could be some pretty broad actions that would apply across various areas and scenarios, like lobbying governments to ensure that animal interests are mentioned in any cross-cutting national/international standards and regulations on responsible AI use. The best bet might be a balance of those sorts of actions with more targeted, industry-specific actions (like engaging with regulators to ensure that AI systems used in, e.g., intensive chicken farming are sensitive enough to genuinely detect most welfare concerns).
Thanks for those resources on gene drives, I’ll check them out!
And cheers, watch this space! :-)
I agree. :) Your idea of combining lobbying with industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce human-caused suffering of wild animals, but it would likely not affect naturally caused suffering.