Great post! It really made me inhabit the perspective of that era and appreciate how inscrutable such a future was from their point of view.
Reflecting on it, at least for known historical cases, it seems that moral catastrophes like these happen at the intersection of three concomitant elements: the needs/interests of the dominant group (cheap meat, safe drugs, scientific progress), the advent of new techniques/technologies to satisfy those needs (factory farming, animal testing), and a specific moral framework (the common view that animal suffering is not morally salient).
This strikes me as potentially interesting because, while the first two factors might be particularly hard to predict or act upon a priori, the third could perhaps offer a more stable and tractable lever. Improving our ethical framework could be seen as a default, ‘evergreen’ strategy to mitigate the risk of future catastrophes from a long-term perspective. By mainstreaming the idea that avoiding the suffering of any sentient being is of great moral importance, we might build a general defense against new forms of moral catastrophes.
Of course, this wouldn’t exclude other approaches; rather, they could work in parallel. While moral progress might serve as the long-term foundation, we could then evaluate on a case-by-case basis whether technological fixes or other targeted interventions would be more effective for specific risks as they emerge. As you mentioned, staying alert to how new needs and technologies evolve remains crucial.