I live for a high disagree-to-upvote ratio
huw
Microsoft continue to pull back on their data centre plans, in a trend that’s been going on for the past few months, since before the tariff crash (Archive).
Frankly, the economics of this seem complex (the article mentions it’s cheaper to build data centres slowly, if you can), so I’m not super sure how to interpret this, beyond that it probably rules out the most aggressive timelines. I’m thinking about it like this:
Sam Altman and other AI leaders are talking about AGI 2027, at which point every dollar spent on compute yields more than a dollar of revenue, with essentially no limits
Their models are requiring exponentially more compute for training (ex. Grok 3, GPT-5) and inference (ex. o3), but producing… idk, models that don’t seem to be exponentially better?
Regardless of the breakdown in relationship between Microsoft and OpenAI, OpenAI can’t lie about their short- and medium-term compute projections, because Microsoft have to fulfil that demand
Even in the long term, Microsoft are a partner on Stargate, so they still have to be privy to OpenAI’s projections even if they’re not exclusively fulfilling them
Until a few days ago, Microsoft’s investors were spectacularly rewarding them for going all in on AI, so there’s little investor pressure to be cautious
So if Microsoft, who should know the trajectory of AI compute better than anyone, are ruling out the most aggressive scaling scenarios, what do/did they know that contradicts AGI by 2027?
Just wanted to say I thought the mental health report is really good! I think it does a great job of highlighting where mental health is at as an EA cause area and where the big challenges are. Thanks for shouting out Kaya Guides!
What evidence would you need to see to conclude that an Orbanisation of the US government is beginning, but still early enough to prevent it?
We’re probably already violating Forum rules by discussing partisan politics, but I’m curious to hear how you view Trump’s claim that he is “not joking” about a third term. Is this:
A lie (it cannot be hyperbole as the claim he made was very specifically framed)
Legal under the constitution, because he would do it via running for Vice President and having the elected President resign, and anything technically legal is not an ‘authoritarian takeover’
Illegal under the constitution, but he would legally amend the constitution to remove term limits
Something else?
And then, for whichever you believe, could you explain how it isn’t an authoritarian takeover?
(I choose this example because it’s relatively clear-cut, but we could point to Trump v. United States, the refusal to follow court orders related to deportations, instructing the AG not to prosecute companies for unbanning TikTok, the attempts from his surrogates to buy votes, freezing funding for agencies established by acts of Congress, bombing Yemen without seeking approval from Congress, kidnapping and holding legal residents without due process, etc. etc. etc., I just think those have greyer areas)
Heya Vasco, I think I might be missing something here. I’m struggling to see the connection between this post and your recommendation to donate to WAI.
In the past, I’ve heard that wild animal suffering is probably not very tractable. Is that true for both insects and vertebrates? What about WAI sets them up for success here? (You mention they support research into pesticides, but not direct work?)
Richest 1% wealth share, US (admittedly, this has been flat for the last 20 years, but you can see the trend since 1980):
Pre-tax income shares, US:
A 3–4 percentage point change for most income categories isn’t anything to sneeze at (even if this is pre-tax).
You can explore the WID data through OWID to see the effect for other countries; it’s less pronounced for many but the broad trend in high-income neoliberalised countries is similar (as you’d expect to happen with lower taxation).
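If it’s useful, here’s a minimal sketch of pulling one of these series into pandas to do the comparison yourself; the grapher slug below is a hypothetical placeholder rather than a link from this post (OWID grapher charts generally expose a CSV of the plotted data):

```python
# Rough sketch: compare top-1% share series across countries using an OWID
# grapher CSV export. The slug below is an assumed placeholder; swap in the
# chart you are actually viewing.
import pandas as pd

URL = "https://ourworldindata.org/grapher/income-share-of-the-richest-1.csv"  # hypothetical slug

df = pd.read_csv(URL)  # grapher CSVs are long-format: Entity, Code, Year, value
value_col = df.columns[-1]
subset = df[df["Entity"].isin(["United States", "France", "United Kingdom"])]
print(subset.pivot(index="Year", columns="Entity", values=value_col).tail(10))
```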
I think it’s tractable, right? The rich had a far greater hold over American politics in the early 1900s, and after financial devastation coupled with the threat of communism, the U.S. got the New Deal and a 90% marginal tax rate for 20 years following the war (well after the war effort had been fully paid off), during the most prosperous period in U.S. history. My sense of these changes is that widespread labour & political organisation threatened the government into a compromise in order to protect liberalism & capitalism from a near-total overthrow. It can be done.
But equally, that story suggests that things will probably have to get much worse before the political will is there to be activated. And there’s no guarantee that any money raised from taxation will be spent on the global poor!
My honest, loosely held opinion here is that EA/adjacent money could be used to build research & lobbying groups (rather than grassroots organising or direct political donations—too controversial and not EA’s strong suit), that would be ready for such a moment if/when it comes. They should be producing policy briefs and papers, and possibly public-facing outputs, on the same level as the current YIMBY/abundance movement, who are far more developed than the redistributionists on these capabilities. When the backlash hits and taxes get raised, we should already have people well-placed to push for high redistribution on an international and non-speciesist level.
Surely it would be easier to just take the money from them, with taxes
No, but seriously—the U.S. presently has an extremely clear example of the excesses of oligarchy and low taxation. The idea that billionaires need less tax in order to invest more in the economy is laughable when Elon has used his excess money to essentially just enrich himself. I think it would be pretty high leverage to put money, time, and connections into this movement (if you can legally do so); and if the enemy is properly demarcated as oligarchy, it should result in reducing wealth inequality once the movement’s proponents take power.
Excuse the naïve question, but could far-UVC also reduce the cost of running high-level labs? If so, this could have transformational effects on medical development and cultured meat as well
Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How have 80k historically approached the mental health effects of exposing younger (i.e. likely to be a bit more neurotic) people to existential risks? I’m thinking in the vein of Here’s the exit. Do you/could you recommend alternate paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps a similar thing for factory farming, too?)
For example, I think I might make a decent enough AI Safety person and generally agree it could be a good idea, but I’ve explicitly chosen not to pursue it because (among other reasons) I’m pretty sure it would totally fry my nerves. The popularity of that LessWrong post suggests that I’m not alone, and also raises the interesting possibility that such people might end up actively detracting from the efforts of others, rather than just neutrally crashing out.
Here’s a much less intellectual podcast on the Rationalists, Zizians, and EA from TrueAnon, more on the dirtbag left side of things (for those who’re interested in how others see EA)
Would also be interested to hear from the realists: Do they believe they have discovered any of these moral truths themselves, or just that these truths are out there somewhere?
Here is their plot over time, from the Chapter 2 Appendix. I think these are the raw per-year scores, not the averages.
I find this really baffling. It’s probably not political; the Modi government took power in 2014 and only lost its absolute majority in mid-2024. The effects of COVID seem to be varied; India did relatively well in 2020 but got obliterated by the Delta variant in 2021. Equally, GDP per capita steadily increased over this time, barring a dip in 2020. Population has steadily increased, and population growth has steadily decreased.
India have long had a larger residual than other countries in the WHR’s happiness model; they’re much less happy than the model predicts.
Without access to the raw data, it’s hard to say if Gallup’s methodology has changed over this time; India is a huge and varied country, and it’s hard to tell if Gallup maintained a similar sample over time.
AIM seems to be doing this quite well in the GHW/AW spaces, but lacks the literal openness of the EA community-as-idea (for better or worse)
The World Happiness Report 2025 is out!
Finland leads the world in happiness for the eighth year in a row, with Finns reporting an average score of 7.736 (out of 10) when asked to evaluate their lives.
Costa Rica (6th) and Mexico (10th) both enter the top 10 for the first time, while continued upward trends for countries such as Lithuania (16th), Slovenia (19th) and Czechia (20th) underline the convergence of happiness levels between Eastern, Central and Western Europe.
The United States (24th) falls to its lowest-ever position, with the United Kingdom (23rd) reporting its lowest average life evaluation since the 2017 report.
I bang this drum a lot, but it does genuinely appear that once a country reaches the upper-middle income bracket, GDP doesn’t seem to matter much more.
Also featuring is a chapter from the Happier Lives Institute, where they compare the cost-effectiveness of improving wellbeing across multiple charities. They find that the top charities (including Pure Earth and Taimaka) might be 100x as cost-effective as others, especially those in high-income countries.
Reposting this from Daniel Eth:
On the one hand, this seems like not much (shouldn’t AGIs be able to hit ‘escape velocity’ and operate autonomously forever?), but on the other, being able to do a month’s worth of work coherently would surely get us close to recursive self-improvement.
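As a very rough back-of-envelope (the starting horizon and doubling interval below are my own illustrative assumptions, not figures from the repost), you can sketch how many doublings sit between today’s coherent task lengths and ‘a month’s worth of work’:

```python
# Illustrative arithmetic only: how many doublings from an assumed current task
# horizon to ~a month of coherent work, and how long that takes at an assumed
# doubling interval. Both inputs are assumptions, not claims from the repost.
import math

current_horizon_hours = 1.0        # assumed current coherent-task horizon
target_horizon_hours = 30 * 24.0   # ~one month of continuous work
doubling_time_months = 7.0         # assumed doubling interval

doublings = math.log2(target_horizon_hours / current_horizon_hours)
years = doublings * doubling_time_months / 12
print(f"~{doublings:.1f} doublings, ~{years:.1f} years at that rate")
```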
Some general thoughts about India specifically:
The EA community is slowly developing, but the biggest obstacle is the lack of a clear hub city. Government is in Delhi, tech is in Bengaluru, and many orgs (such as my own) are in Pune or Mumbai.
The philanthropic sector isn’t tuned to EA ideas just yet, but we think it might get more feasible to find local funding. Anecdotally, this seems to be easier in mental health, which is well-understood by the traditional philanthropic sector. Further development of EGIs and the local community will help here.
Anecdotally at EAGxIndia 2024, most younger attendees were interested in AI work, and far fewer into GHW/AW. There’s probably some bias here, since it was hosted in Bengaluru, which is heavier on tech. That is to say, I’m not convinced the talent pipeline for an India-based AIM-like org is quite there yet, although AIM could be nudged to incubate more often there.
On the other hand, legally operating in India is more complex than in almost any other country AIM incubates into, and having India-specific expertise and operational support, while expensive, would pay dividends
Just wait until you see the PRs I wanna submit to the forum software 😛
FWIW the point that I was trying to make (however badly) was that the government clearly behaved in a way that had little regard for accuracy, and I don’t see incentives for them to behave any differently here
A few quotes I wanna speak on:
I think it heavily downplays the issue—potentially even disingenuously—to leave racism out of the discussion when talking about Hanania. It’s a demonstrable fact that he wrote for neo-Nazi and white supremacist organisations in the past, but when Austin talks about him ‘growing’, the evidence isn’t that he has denounced this work (FWIW, he has), but that he now supports animal welfare. It’s a bit of a non sequitur; nobody is arguing he used to be racist against shrimp?
The same goes for the other speakers. They aren’t controversial because of their opinions on embryo selection. They are controversial because they routinely endorse human biodiversity. Austin knows this, because all of the controversy around Manifest was about the topic of human biodiversity.
Evidently, Austin understands something about the dynamics here. But language such as ‘people who are more sensitive to this’ suggests he doesn’t believe the racism itself is the problem; rather, it’s the reactions of a particular profile of person.
I don’t feel like Austin has internalised that people aren’t merely offended or sensitive to racism; they are harmed by it, and want to both avoid spaces that cause them harm, and prevent future harm caused by spreading those ideas. The difference is that offence is a reaction that you can behaviourally train yourself out of, but harm is a thing that is done to you.
More broadly, Austin repeatedly speaks about trade-offs between ‘winning’ (success, sometimes framed as harmony) and ‘standing up for what’s right’, which is sometimes framed as a form of truth-seeking. But this implicitly frames inquiry into and discussion of human biodiversity as a form of truth-seeking. David Thorstad has already written at length about why that’s harmful, so I’ll defer to his work on that.