I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants with potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15–100 years away, and neither I nor, I believe, the other fund managers hold any intrinsic reason to discount or disregard other areas of animal welfare work with long-term payoffs.
That said, as you suggest in (2), I do think it makes sense for the LTFF to focus more on thinking through and funding projects premised on what would happen if AGI were to come to exist. A hypothetical grant proposal focused on animal welfare but dependent on AGI would probably make sense for both funds to consider, or to consult each other on, and the details of the grant would determine whose domain we believe it ultimately falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it’s fair to say we would consider grants with reasoning like that in your post, though in some cases the ultimate funding decision for that type of grant may be better left to the LTFF.
On the question of what I think of the moral circle expansion–type arguments for prioritizing animal welfare work within longtermism, I’ll speak for myself. I think you are right that the precise nature of how moral circles expand, and whether such expansion is unidimensional or multidimensional, is an important factor. In general, though, I don’t have strong views on this issue, so take everything I say here as stated with uncertainty.
I’m somewhat skeptical, to varying degrees, about our practical ability to test people’s attitudes toward moral circle expansion reliably enough to gain the confidence needed to determine whether this is a more tractable way to influence the long run, or to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target with outreach. Much of this skepticism (which you note as a possible limitation of the argument) stems from doubts about transferability across domains and cultures, and from the inherently wide error bars in understanding how significantly different facts about the world would affect responses to animal welfare (and everything else).
For example, supposing it were possible to develop cost-competitive clean meat in the next 30 years, I don’t know what impact that would have on human responses to wild animal welfare or insects, and I wouldn’t place much confidence, if any, in how people today say their hypothetical future selves would respond to facing that situation in 30 years (to say nothing of their ability to predict how generations not yet born would respond to such facts). Of course, reasons like this don’t apply to all of the work you suggested, and, say, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be considered by potential AI systems. Perhaps that information could help determine whether we need to work to ensure non-human-like minds are considered by those at elite AI firms.
I would definitely encourage people to send us any ideas that fall into this space, as I think it’s worth considering seriously.