I think the question, “is the world getting better?” is important for effective altruists (the soft pitch for why: it’s a crucial consideration for decision-making).
IDK, quick take because I’m just thinking about the following links, and people’s perceptions around this question.
170 years of American news coverage:
https://x.com/DKThomp/status/1803766107532153119 (linked in Marginal Revolution)
“We really are living in an era of negativity-poisoned discourse that is (*empirically*) historically unique.”
(and this Atlantic article by the same author as the tweet discussing how America may be producing and exporting a lot of anxiety)
And I thought this piece lays out the points for both sides really well: things are better, things are worse. It also introduced me to the neat term, “the vibecession”:
https://ronghosh.substack.com/p/the-stratification-of-gratification
(linked in r/slatestarcodex)
In particular, I thought this quote was funny, and it got me:
“Anecdotally, this is also where a subset of rationalists appear to be inconsistent in their worldview. One moment they claim the majority of people are data illiterate, and are therefore unrealistically pessimistic, and in the next moment they will set p(doom) to 10%.”
And [I had more to say here, but I think I’ll just leave it at another excerpt]:
“Like, yes; it’s fairly uncontroversial to say that the world and the economy is better than ever. Even the poorest among us have super computers in our pockets now capable of giving us a never-ending stream of high-quality videos, or the power to summon a car ride, some food, or an Amazon delivery at any given moment.
And yet, all of this growth and change and innovation and wealth has come at the cost of some underlying stability. For a lot of people, they feel like they’re no longer living on land; instead they’ve set sail on a vessel — and the never-ending swaying, however gentle it might feel, is leaving them seasick.”
(Finally, I also coincidentally recently read Gwern’s listicle of ordinary life improvements)
And one last thought: the incentives for news to be negative seem quite bad. If there were a tractable intervention to mitigate those incentives, it could do a lot of good.
Bloomberg/Thomson Reuters/etc. generate more revenue than most newspapers by providing market data (with seemingly better incentives): https://www.investopedia.com/articles/investing/052815/financial-news-comparison-bloomberg-vs-reuters.asp
Yeah, really interesting, thanks for sharing. The incentive structure here seems like a pretty clean loop: a better world model actually does predict something that matters more accurately, and better financial news benefits readers more directly. With other news sources, the incentive is maybe more meta/abstract (agreeing with your community, staying up to date).
And from the ronghosh article: “… All we have to do is fix the loneliness crisis, the fertility crisis, the housing crisis, the obesity crisis, the opioid crisis, the meaning crisis, the meta crisis, the flawed incentives of the political system, the flawed incentives of social media, the flawed incentives of academia, the externalities leading to climate change, the soft wars with China and Iran, the hot war with Russia, income inequality, status inequality, racism, sexism, and every other form of bigotry.”
Of course, as someone steeped in all the AI stuff, I can’t help but think that A) AI is the most important thing to consider here (ha!), since B) it might allow us (‘alignment’ permitting) to scale the sort of sense-making and problem-solving cognition needed to solve all the problems we’re seemingly increasingly making for ourselves. And yeah, this is reductionist and probably naive.