Here’s an argument I made in 2018 during my philosophy studies:
A lot of animal welfare work is technically “long-termist” in the sense that it’s not about helping already existing beings. Farmed chickens, shrimp, and pigs only live for a couple of months, farmed fish for a few years. Advocacy work typically takes longer than that to affect animal welfare, so the animals it helps mostly don’t exist yet.
For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.
But once you accept the moral goodness of that, there’s little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that an Artificial Superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.)
The real distinction lies in the empirical uncertainty and speculativeness of reducing s-risk. But I’m not sure that uncertainty is treated the same way as uncertainty about shrimp or insect welfare.
I suspect many people instead work on effective animal advocacy because that’s where their emotional affinity lies and it’s become part of their identity, because they don’t like acting on theoretical philosophical grounds, and because they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don’t think it’s philosophically robust.
A lot of animal welfare work is technically “long-termist” in the sense that it’s not about helping already existing beings.
That doesn’t match the standard definition of longtermism (“positively influencing the long-term future is a key moral priority of our time”); it seems to me your argument is more about rejecting some narrow person-affecting views.
I suspect many people instead work on effective animal advocacy because that’s where their emotional affinity lies and it’s become part of their identity, because they don’t like acting on theoretical philosophical grounds, and because they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk.
I think it’s very tempting to assume that people who work on things we don’t consider the most important are doing so for emotional/irrational/social reasons.
I imagine that some animal welfare people (and sometimes I myself) see people working on extremely fun and interesting problems in AI, while making millions of dollars, with extremely vague theories for why this might be making things better rather than worse for people millions of years from now, and conclude that they’re doing so for non-philosophically-robust reasons. I currently believe that the social and economic incentives to work in AI are much greater than the incentives to work in animal welfare. But I don’t think this is a useful framing (it’s too tempting and could explain anything); we should instead weigh the arguments that people give for prioritizing one cause over another.
I think the tractability concerns around AI/s-risk work, together with the fact that all previous attempts backfired (Singularity Institute, early MIRI, early DeepMind, early OpenAI, and we’ll see with Anthropic), are the single main reason some people are not prioritizing AI/s-risk work at the moment; it’s not about extremely narrow person-affecting views (which I think are very rare).
The real distinction lies in the empirical uncertainty and speculativeness of reducing s-risk. But I’m not sure that uncertainty is treated the same way as uncertainty about shrimp or insect welfare.
I think those are different kinds of uncertainties, and it seems to me that they are both treated very seriously by people working in those fields.
You make a lot of good points—thank you for the elaborate response.
I do think you’re being a little unfair and picking only the worst examples. Most people don’t make millions working on AI safety, and not everything has backfired. AI x-risk is a common topic at AI companies and they’ve signed the CAIS statement that it should be a global priority; technical AI safety has a talent pipeline and is a small but increasingly credible field, to name a few examples. I don’t think “this is a tricky field in which to make a robustly positive impact, so as a careful person I shouldn’t work on it” is a solid strategy at the individual level, let alone at the community level.
That said, I appreciate your pushback, and there are probably plenty of people working in either cause area for whom personal incentives matter more than philosophical ones.