My impression is that the Animal Welfare Fund essentially focuses on "neartermist" animal welfare issues. (I mean this mostly in the sense of "the intrinsic/terminal goals you target are those which occur in the coming decades, rather than the longer-term future". But it also seems true in the sense of the empirical and epistemological aspects of the AWF's "worldview" seeming closer to those standard among "neartermists" rather than longtermists; e.g., I'd be surprised to hear that the Animal Welfare Fund made a grant partly based on how valuable a project would be for animals within 30 years if AGI was developed during that time.)
1. Do you think that impression is correct?
2. If so, is that more because that's the view the fund managers themselves hold, more to keep a clearer distinction in scope from the Long-Term Future Fund, more because that's the view for which there was more "demand" for an EA Fund, or something else?
3. How open would you be to funding projects focused on improving outcomes for non-humans in the very long-term future, and/or in the sort of relatively "speculative" scenarios that attract the most attention among longtermists?
An example would be a project to research some of the questions I sketched here: On the longtermist case for working on farmed animals [Uncertainties & research ideas]
Another example might be some of the Sentience Institute's work.
(These examples are just for illustration. And I don't plan to do any of this work myself; I'm just wondering.)
(By the way, this isn't at all intended to persuade or criticise. I think it would be reasonable to have an EA Fund that's "just" doing neartermist animal welfare, since that by itself is a big and seemingly important area. I'm just curious.)
I don't think it is true that the EA Animal Welfare Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants with potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of plant-based meat (PBM) and cultivated meat rests on impacts that would be 15-100 years away, and there's no intrinsic reason, for me and I believe for the other funders, to discount or exclude other areas of animal welfare with long-term payoffs.
That said, as you suggest in (2), I do think it makes sense for the LTFF to focus more on thinking through and funding projects that involve what would happen assuming AGI were to come to exist. A hypothetical grant proposal focused on animal welfare but dependent on AGI would probably make sense for both funds to consider, or to consult each other on, and the details of the grant would determine whose domain we believe it ultimately falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it's fair to say we would consider grants with reasoning like that in your post, but sometimes a grant of that type may ultimately make more sense for the LTFF to fund.
On the question of what I think of the moral-circle-expansion type of argument for prioritizing animal welfare work within longtermism, I'll speak for myself. I think you are right that the precise nature of how moral circles expand, and whether such expansion is unidimensional or multidimensional, is an important factor. In general, though, I don't have particularly strong views on this issue, so take everything I say here as stated with uncertainty.
I'm somewhat skeptical, to varying degrees, about our practical ability to test people's attitudes about moral circle expansion reliably enough to gain the confidence needed to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target with outreach. Much of this skepticism (which you note as a possible limitation of the argument) stems from doubts about transferability across domains and cultures, and from the inherently wide error bars in understanding how significantly different facts about the world would affect responses to animal welfare (and everything else).
For example, supposing it were possible to develop cost-competitive clean meat in the next 30 years, I don't know what impact that would have on human responses to wild animal welfare or insects, and I wouldn't place much confidence, if any, in how people say their hypothetical future selves would respond to facing that dilemma in 30 years (to say nothing of their ability to predict the demands of generations not yet born in response to such facts). Of course, reasons like this don't apply to all of the work you suggested doing; for instance, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be taken into account by potential AI systems. Perhaps you could use this information to decide whether we need to ensure that non-human-like minds are considered by those at elite AI firms.
I would definitely encourage people to send us any ideas that fall into this space, as I think it's worth considering seriously.
For what it's worth, I think there is a good case to be made that WAI is somewhere between a neartermist and a longtermist organization (mediumtermist?); e.g., this research and similar work seem to be from a relatively longtermist perspective. Though I'm biased, because I know that I am sympathetic to some aspects of a longtermist worldview (though I obviously no longer work there), and that several of the staff there are also somewhat sympathetic to longtermism. These views might be separate from the work of the organization. And they received around 25% of the total granted in this grant cycle.
From my limited knowledge of WAI, I'd say that the research you link to is indeed from a longtermist perspective, but most of their other work seems either targeted mostly at the next 5-60 years, or perhaps targeted at long-term futures that look much more like the present world than I expect (which would fit with the empirical/epistemological views that seem more "neartermist"). Or maybe it's also partly that the work could plausibly be a top priority from a longtermist perspective, but I haven't seen or heard WAI framing or justifying the work that way.
But this is just based on reading a handful of posts a while ago, watching some talks, etc.; I haven't looked very closely.
(I'm also not necessarily saying I think WAI should change its priorities or how it frames/justifies them.)
Hi Michael and Abraham!
The answer depends on which type of longtermism weāre talking about.
As an organization, Wild Animal Initiative is committed to the position that animals matter equally regardless of when they exist.
That is, we exist to help as many wild animals as we can, as much as we can. All else equal, it doesn't matter to us whether that happens in our lifetimes or in the long-term future, because it feels the same to the animals in either case. We're not in the business of warm fuzzies, despite the warmth and fuzziness of many of our clients.
In practice, because there are so many wild animals in the far future, that leads us to think about the far future a lot. It's the main reason we're laser-focused on supporting the growth of a self-sustaining academic field dedicated to improving wild animal welfare. As far as we can tell, that currently seems like the most reliable vehicle for institutionalizing an ethical and scientific framework capable of continuously serving wild animals' interests.
Several of our staff also believe that our decisions should primarily work backward from what we think would be best ~1,000+ years from now. But we haven't committed to that as an organization.
This position has been called "strong longtermism." It's something we plan to consider further.
Even though it's not our official position, strong longtermists might still choose to donate to WAI: because they believe we have the most promising theory of change, because they believe we're the most funding-constrained of the available longtermist projects, or for other reasons.
In the meantime, I'd love to hear from anyone who has ideas on what we might do differently if we were to adopt a strong longtermist position.
Relatedly, I'm vaguely curious as to whether you have any thoughts on the longtermist case for working on farmed animals that I sketched in that post.
(To be clear, as I say in the post, I don't know if I endorse that case myself.)