For what it’s worth, I think it’s plausible that some interventions chosen for their short-term effects may be promising candidates for longtermist interventions. If you thought that s-risks were important and that larger moral circles mitigate s-risks, then plant-based and cultured animal product substitutes might be promising, since these seem most likely to shift attitudes towards animals the most and fastest, and this would (hopefully) help make the case for wild animals and artificial sentience next. Direct advocacy for protections for artificial sentience might be best instead, though I wouldn’t be surprised if you’d still want to target animals at least somewhat, since this seems more incremental and the step straight to artificial sentience is greater.
That being said, depending on how urgent s-risks are, how exactly we should approach animal product substitutes may differ between the short term and the long term. Longtermist-focused animal advocacy might be different in other ways, too; see this post.
Furthermore, if growth is fast enough in the future (exponential? EDIT: it might hit a cubic limit due to physical constraints), and the future growth rate can’t reliably be increased, then growth today may have a huge effect on wealth in the long term. The difference $Xa^t - Ya^t = (X - Y)a^t$ goes to $\pm\infty$ as $t \to \infty$, if $X \neq Y$ (and $a > 1$).
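To make this concrete, here is a toy numeric sketch of the divergence claim. The growth factor `a = 1.02` and the starting wealth levels `X` and `Y` are invented for illustration; the point is only that two trajectories growing at the same exponential rate from different starting points have an absolute gap that itself grows exponentially.

```python
# Toy illustration: two trajectories growing at the same rate a > 1
# from different starting values X and Y. Their absolute gap,
# X*a**t - Y*a**t = (X - Y)*a**t, diverges as t grows, even though
# the ratio X/Y stays constant.
a = 1.02             # assumed growth factor (invented for illustration)
X, Y = 110.0, 100.0  # hypothetical starting wealth levels

for t in (0, 100, 1000):
    gap = X * a**t - Y * a**t   # equals (X - Y) * a**t
    print(t, gap)
```

So a small boost to wealth today, compounded at a fixed rate, corresponds to an arbitrarily large absolute difference far enough into the future.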
If our sphere of influence grows exponentially fast, and our moral circle expands gradually, then you can make a similar argument supporting expanding our moral circle more quickly now.
I think there’s a difference between the muddy concept of ‘cause areas’ and actual specific charities/interventions here. At the level of cause areas, there could be overlap, because I agree that if you think the Most Important Thing is to expand the moral circle, then there are things in the animal-substitute space that might be interesting, but I’d be surprised and suspicious (not infinitely suspicious, just moderately so) if the actual bottom-line charity-you-donate-to was the exact same thing as what you got to when trying to minimise the suffering of animals in the present day. Tobias makes a virtually identical point in the post you link to, so we may not disagree, apart from perhaps thinking about the word ‘intervention’ differently.
Most animal advocacy efforts are focused on helping animals in the here and now. If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective.
Similarly, I could imagine a longtermist concluding that if you look back through history, attempts to e.g. prevent extinction directly or implement better governance seem like they would have been critically hamstrung by a lack of development in the relevant fields, e.g. economics, and the general difficulty of imagining the future. But attempts to grow the economy and advance science seem to have snowballed in a way that impacts the future and also incidentally benefits the present. So in that way you could end up with a longtermist-inspired focus on things like ‘speed up economic growth’ or ‘advance important research’ which arguably fall under the ‘near-term human-centric welfare’ area on some categorisations of causes. But you didn’t get there from that starting point, and again I expect your eventual specific area of focus to be quite different.
If our sphere of influence grows exponentially, and our moral circle expands gradually, then you can make a similar argument supporting expanding it more quickly now
Just want to check I understand what you’re saying here. Are you saying we might want to focus more on expanding growth today because it could have a huge effect on wealth in the long term, or are you saying that we might want to focus more on expanding the moral circle today because we want our future large sphere of influence to be used in a good way rather than a bad way?
The second: we want our future large sphere of influence to be used in a good way. If our sphere of influence grows much faster than our moral circle (and our moral circle still misses a huge number of beings whom we should rightfully consider significant moral patients; the moral circle could also differ in different parts of the universe), then the number of moral patients we could have helped but didn’t may grow very quickly, too. Basically, expanding the moral circle would always be urgent, and sooner has a much greater payoff than later.
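A toy model of that dynamic (every number here is invented purely for illustration): suppose the number of reachable moral patients grows exponentially while the fraction of them inside our moral circle improves only linearly. Then the count of reachable-but-neglected patients still grows roughly exponentially.

```python
# Toy model (all parameters invented): sphere of influence grows
# exponentially, moral-circle coverage improves only linearly,
# so the neglected remainder grows roughly exponentially too.
influence_growth = 1.05  # assumed per-period growth in reachable patients
coverage_step = 0.002    # assumed per-period gain in moral-circle coverage

reachable = 1.0   # reachable moral patients (arbitrary units)
coverage = 0.5    # fraction of reachable patients inside the moral circle
for t in range(200):
    reachable *= influence_growth
    coverage = min(1.0, coverage + coverage_step)
    neglected = reachable * (1.0 - coverage)

print(reachable, coverage, neglected)
```

Under these made-up numbers, coverage creeps up but the neglected remainder still ends orders of magnitude larger than it started, which is the intuition behind expansion being urgent sooner rather than later.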
This is pretty much pure speculation, though.
That makes sense! It does seem particularly important to have expanded the moral circle a lot before we spread to the stars.