Unable to work. I was community director of EA Netherlands but had to quit due to long covid. Everything I've written since 2021 was written with considerable brain fog, and I've been bad at maintaining discussions since then.
I have a background in philosophy, risk analysis, and moral psychology, and I've also done some x-risk research. I'm currently most worried about AI and US democracy. (Regarding the latter, I'm highly ranked on Manifold.)
Here’s an argument I made in 2018 during my philosophy studies:
A lot of animal welfare work is technically "longtermist" in the sense that it isn't about helping already existing beings. Farmed chickens, shrimp, and pigs live for only a couple of months, and farmed fish for a few years, while welfare work typically takes longer than that to have an effect. By the time it pays off, the animals it helps don't exist yet.
For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.
But once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering an artificial superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.)
The real distinction lies in the empirical uncertainty and speculativeness of reducing s-risks. But I'm not sure that uncertainty is treated the same way as uncertainty about, say, shrimp or insect welfare.
I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it has become part of their identity, because they dislike acting on purely theoretical philosophical grounds, and because they feel discomfort imagining how their social environment would react if they switched to working on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don't think the position is philosophically robust.