I don’t yet have a strong view on how plausible it is that animal advocacy is a priority for longtermism. However, I think it’s worth noting that, if it is, there are probably quite a few other sorts of projects that would qualify using exactly the same arguments.
For instance, at the Happier Lives Institute, we spend a lot of time thinking about how best to measure well-being. There’s an analogous argument that, if governments had better measures of well-being (e.g. better than GDP) and used them to make public policy decisions, that would have enormously valuable consequences over the long run. I won’t do it here, but the arguments are sufficiently analogous that, in Tobias’ post, you could replace “animal advocacy” with “well-being measurement”, keep the rest of the text the same, and it would still make sense. So perhaps well-being measurement is a plausible longtermist priority too.
Other examples that might work include, just off the top of my head: “democratic institutions”, “peace building”, “education”.
It’s not clear to me whether the right way to update is (a) all of these ‘society change’ interventions are plausible longtermist priorities or (b) none of them are. I lean toward (a), but I’m not very confident.
There’s an analogous argument that, if governments had better measures of well-being (e.g. better than GDP) and used them to make public policy decisions, that would have enormously valuable consequences over the long run.
For human-centric concerns, this could be true, but my impression is that this kind of thing is more likely to happen eventually anyway in most human populations, because humans are both moral patients and moral agents; they will eventually create pressure for reform in this direction. On the other hand, s-risks often involve moral patients who aren’t (powerful) agents, so we need to rely on agents to take their interests seriously in order to avoid s-risks, and advocacy is one way we might hope to ensure this.
If we send out vessels carrying moral patients to colonize space, something which is hard to reverse, and these moral patients are not agents, then their situations may be essentially decided for them at the time they’re sent off, fixed by however much concern decision-makers had for their welfare at that point. If, on the other hand, they are also agents (and motivated to improve their own welfare), then they can do more to improve their welfare on their own.