To summarise some key points: much of why I think promoting veganism in the short term will be worthwhile in the long term comes down to values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long-term implications.
People are already talking about introducing plants, insects and animals to Mars as a means of terraforming it. This would enormously increase the amount of wild-animal suffering. Even if we never leave our solar system, terraforming just one body, let alone several, could nearly double the amount of wild-animal suffering. There’s also the possibility of bringing factory farms to Mars. I’m studying for a PhD in space science and still get shut down when I try to say ‘hey, let’s maybe think about not bringing insects to Mars’. This is far from being a practical concern (maybe 100-1,000 years away), but it’s never too early to start shifting social norms.
I’d call this mid-term rather than long-term, but the impacts of animal agriculture on climate change, zoonotic disease spread and antibiotic resistance are significant.
I’d like to echo Peter’s point as well. We don’t ask these questions about a lot of other actions that would be unethical in the short term. There seems to be a bias in EA circles towards asking this kind of question specifically about non-human animal exploitation. I’m arguing for consistency more than claiming we can’t argue that a short-term good leads to a long-term bad that makes it net negative overall.
Aren’t those extinction risks according to EAs, even if perhaps less severe or less likely to cause extinction than others?
I guess that would indeed make them long-term problems, but my reading has been that they are catastrophic risks rather than existential risks, in that they don’t seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.
Self-plugging as I’ve written about animal suffering and longtermism in this essay:
http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/