Personally, I still wouldn’t consider it ethically acceptable to, say, create a being experiencing a −100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created, even after trying hard to account for possible scope neglect. Going from linear to log here doesn’t seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don’t find stronger suffering-focused views compelling would be sympathetic to a view like this one—and the implications for prioritizing s-risks versus extinction risks seem significant.