Brian Tomasik is a self-described “negative-leaning” hedonic utilitarian and a prominent thinker in effective altruism. He has written about how humanity’s values might lead us to generate great suffering in the future, and he also worries a machine superintelligence could end up doing the same. There are myriad reasons he thinks this, which I can’t do justice to here. I believe he currently thinks the best course of action is to try steering the values of present-day humanity, as much of it as possible or at least a crucially influential subset, toward neglecting suffering less. He also supports foundational research into better ascertaining the chances of a singleton promulgating suffering throughout space in the future. To this end he both does research with and funds colleagues at the Foundational Research Institute.
His whole body of work concerning future suffering is referred to as “astronomical suffering” considerations, a sort of complementary utilitarian counterpart to Dr. Bostrom’s astronomical waste argument. You can read more of Mr. Tomasik’s work on the far future and related topics here. Note that some of it is advanced and may require background reading to understand all the premises of his essays, but he usually provides citations for this.
Worth noting that the negative-leaning position is pretty fringe, though, especially in mainstream philosophy. Personally, I avoid it.