I’d say this reflects a division between near-termists and long-termists (I really hope that divide doesn’t harden into one of those political divides). From a short-term perspective, I’d agree that progress hasn’t happened, or is at least more debatable than people think. For arguments on this topic, see the technological stagnation hypothesis here: https://rootsofprogress.org/a-new-philosophy-of-progress
I’d argue that one of the major problems with studying progress is the halo and horn effects. Another problem is that discussions of progress rapidly turn political, destroying most of the value of the discussion.
Halo Effect: https://www.wikiwand.com/en/Halo_effect#/overview
Horn Effect: https://www.wikiwand.com/en/Horn_effect
More specifically, the halo and horn effects mean that if a person has encountered one aspect of progress and it felt positive or negative, then all aspects start to feel positive or negative. This is how affective death spirals are born: https://www.lesswrong.com/posts/XrzQW69HpidzvBxGr/affective-death-spirals
The best argument for progress being negative is X-risk concerns, especially AI X-risk.