Personally, I trust longtermists who don’t make any diet change for animals less with the future, though veganism seems further than necessary. I think people should be sensitive to ongoing moral catastrophes like factory farming, and an astronomical number of future moral patients (including artificial sentience) could be vulnerable and have limited agency, like today’s farmed animals. In expectation, I think changing your diet increases your concern for future moral patients with limited agency by increasing their salience and reducing cognitive dissonance, allowing you to weigh their interests more fairly. So, basically virtue-consequentialist reasons.
Veganism in the longtermist community also increases the salience of future moral patients with limited agency.
Agency adds overhead and can become a barrier to optimizing a mind for value or disvalue, since an agent can decide to do otherwise and even undermine a powerful agent’s efforts more generally. So it’s pretty plausible that the most efficient instantiations of value and disvalue will have (and be designed with) limited agency, and that these will dominate future value/disvalue.