“Real gdp, adjusted for variable inflation, shows dead even growth.” I asked about GDP per capita right now, not growth rates over time. Do you have a source showing that the US doesn’t actually have higher GDP per capita?
Inequality is probably part of the story, but I had a vague sense median real wages are higher in the US. Do you have a source saying that’s wrong? Or that it goes away when you adjust for purchasing power?
Usually we are the ones accused (not always unfairly, to be honest, given Yudkowsky’s TIME article) of being so fanatical we’d risk nuclear war to further our nefarious long-term goals. The claim that nuclear war is preferable to us is novel, at least.
Also, I don’t like Scott Alexander’s politics at all, but in the interests of strict accuracy I don’t think he is a monarchist, or particularly sympathetic to monarchism (except insofar as he finds some individuals with far-right views who like monarchy kind of endearing). If anything, I had the impression that whilst Scott has certainly been influenced by and promoted the far right in many ways, a view that monarchism is just really, really silly was one of the things that genuinely kept him from regarding himself as fully in sympathy with the neo-reactionaries.
“In response, Epoch AI created Frontier Math — a benchmark of insanely hard mathematical problems. The easiest 25% are similar to Olympiad-level problems. The most difficult 25% are, according to Fields Medalist Terence Tao, “extremely challenging,” and would typically need an expert in that branch of mathematics to solve them.
Previous models, including GPT-o1, could hardly solve any of these questions.[20] In December 2024, OpenAI claimed that GPT-o3 could solve 25%.”
I think if you’re going to mention the seemingly strong performance of GPT-o3 on Frontier Math, it’s worth pointing out the extremely poor performance of all LLMs when they were given Math Olympiad questions more recently, though they did use o3-mini rather than o3, so I guess it’s not a direct comparison: https://garymarcus.substack.com/p/reports-of-llms-mastering-math-have
“The USA Math Olympiad is an extremely challenging math competition for the top US high school students; the top scorers get prizes and an invitation to the International Math Olympiad. The USAMO was held this year March 19-20. Hours after it was completed, so there could be virtually no chance of data leakage, a team of scientists gave the problems to some of the top large language models, whose mathematical and reasoning abilities have been loudly proclaimed: o3-Mini, o1-Pro, DeepSeek R1, QwQ-32B, Gemini-2.0-Flash-Thinking-Exp, and Claude-3.7-Sonnet-Thinking. The proofs output by all these models were evaluated by experts. The results were dismal: None of the AIs scored higher than 5% overall.”
What’s DEAM?
Another, very obvious reason is just that more EA people are near real power now than in 2018, and with serious involvement in power and politics come tactical incentives to avoid saying what you actually think. I think that is probably a lot of what is going on with Anthropic people playing down their EA connections.
I don’t think it’s absolutely clear from the one-sentence quote alone that Amanda was claiming a personal lack of knowledge of EA (which would obviously be deceptive if she was), though I agree that is one reasonable reading. She has her GWWC membership fairly prominently displayed on her personal website, so if she’s trying to hide being, or having been, EA, she’s not doing so very strongly.
Depends how far left. I’d say centre-left views would get less pushback, but not necessarily further-left ones. But yeah, fair point that there is a standard set of views in the community that he is somewhat outside.
If productivity is so similar, how come the US is quite a bit richer per capita? Is that solely accounted for by workers working longer hours?
Just as a side point, I do not think Amanda’s past relationship with EA can accurately be characterized as much like Jonathan Blow’s, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It’s not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf Then she went to work in AI for what I think were overwhelmingly likely to be EA reasons (though I admit I don’t have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn’t a public figure back then, but I’d be genuinely shocked to find out she didn’t have an at least mildly significant behind-the-scenes effect through conversation (not just with Will) on the early development of EA ideas.
Not that I’m accusing her of dishonesty here or anything: she didn’t say that she wasn’t EA or that she had never been EA, just that Anthropic wasn’t an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance herself from EA: https://askell.io/
No, I don’t move in corporate circles.
“widely (and imo falsely) believed that the openai coup was for EA reasons”
False why?
Everything you say is correct, I think, but in more normal circles, pointing out the inconsistency between someone’s wedding page and their corporate PR bullshit would seem a bit weird and obsessive and mean. I don’t find it so, but I think ordinary people would get a bad vibe from it.
Because you are so strongly pushing a particular political perspective on Twitter (roughly, tech right = good), I worry that your bounties are mostly just you paying people to say things you already believe about those topics. Insofar as you mean to persuade people on the left/centre of the community to change their views on these topics, maybe it would be better to do something like make the bounties conditional on people who disagree with your takes finding that the investigations move their views in your direction.
I also find the use of the phrase “such controversial criminal justice policies” a bit rhetorically dark-artsy and mildly incompatible with your calls for high intellectual integrity. It implies that a strong reason to be suspicious of Open Phil’s actions has been given. But you don’t really think the mere fact that a political intervention on an emotive, polarized topic is controversial is actually particularly informative about it. Everything on that sort of topic is controversial, including the negation of the Open Phil view on the US incarceration rate. The phrase would be ok if you were taking a very general view that we should be agnostic on all political issues where smart, informed people disagree. But you’re not doing that: you take lots of political stances in the piece, and de-regulatory libertarianism, the claim that environmentalism has been net negative, and Dominic Cummings can all accurately be described as “highly controversial”.
Maybe I am making a mountain out of a molehill here. But I feel like rationalists themselves often treat fairly minor slips into dark arts like this as strong evidence that someone lacks integrity. (I wouldn’t say anything as strong as that myself; everyone does this kind of thing sometimes.) And I feel like if the NYT referred to AI safety as “tied to the controversial rationalist community” or to “highly controversial blogger Scott Alexander”, you and other rationalists would be fairly unimpressed.
More substantively (maybe I should have started with this, as it is a more important point), I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn’t already. The entire techlash, anti-“surveillance capitalism”, “the algorithms push extremism” thing from left-leaning tech critics is ostensibly at least about the fact that a very small number of very big companies have acquired massive amounts of unaccountable power to shape political and economic outcomes. More generally, the American left has, I keep reading, been on a big anti-trust kick recently. The explicit point of anti-trust is to break up concentrations of power. (Regardless of whether you think it actually does that, that is how its proponents perceive it. They also tend to see it as “pro-market”; remember that Warren used to be a libertarian Republican before she was on the left.) In fact, Lina Khan’s desire to do anti-trust stuff to big tech firms was probably one cause of Silicon Valley’s rightward shift.
It is true that most people with these sorts of views are currently very hostile to even the left wing of AI safety, but lack of concern about X-risk from AI isn’t the same thing as lack of concern about AI concentrating power. And eventually the power of AI will be so obvious that even these people will have to concede that it is not just fancy autocorrect.
It is not true that all people with these sorts of concerns only care about private power and not the state, either. Dislike of Palantir’s nat sec ties is a big theme for a lot of these people, and many of them don’t like the nat sec-y bits of the state very much either. Also, a relatively prominent part of the left-wing critique of DOGE is the idea that it’s the beginning of an attempt by Elon to seize effective personal control of large parts of the US federal bureaucracy, by seizing the boring bits of the bureaucracy that actually move money around. In my view people are correct to be skeptical that Musk will ultimately choose decentralising power over accumulating it for himself.
Now, strictly speaking, none of this is inconsistent with your claim that the left wing of AI safety lacks concern about concentration of power, since virtually none of these anti-tech people are safetyists. But I think it still matters for predicting how much the left wing of safety will actually concentrate power, because future co-operation between them and the safetyists against the tech right and the big AI companies is a distinct possibility.
Section 4 is completely over my head, I have to confess.
Edit: But the abstract gives me what I wanted to know :) : “To quantify the capabilities of AI systems in terms of human capabilities, we propose a new metric: 50%-task-completion time horizon. This is the time humans typically take to complete tasks that AI models can complete with 50% success rate”
I don’t know of anything better right now.
It’s actually the majority view amongst academics who directly study the issue. (I’m probably an anti-realist though). https://survey2020.philpeople.org/survey/results/486
Thanks, that is reassuring.
I don’t quite get what that means. Do they really take exactly the same amount of time on all tasks for which they have the same success rate? Sorry, maybe I am being annoying here and this is all well explained in the linked post. But I am trying to figure out how much this creates the illusion that progress on the metric means a model will be able to handle all tasks that take normal human workers about that amount of time, when it really means something quite different.
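For what it’s worth, my (possibly mistaken) understanding is that the 50% horizon isn’t read off a set of tasks that all take humans exactly the same time; it comes from fitting a curve of model success rate against how long each task takes a human, and reporting the human time at which that fitted curve crosses 50%. A minimal sketch of that kind of calculation, assuming a simple logistic fit and entirely made-up task data (this is my illustration, not the exact method in the linked paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def success_curve(log_t, slope, log_horizon):
    # Fitted probability that the model succeeds on a task, as a function of
    # the log of how long the task takes a human. At log_t == log_horizon the
    # curve is exactly 0.5.
    return 1.0 / (1.0 + np.exp(slope * (log_t - log_horizon)))

# Hypothetical data: human completion time (minutes) for each task, and
# whether the model solved that task (1) or not (0).
human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
model_solved = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

params, _ = curve_fit(
    success_curve, np.log(human_minutes), model_solved,
    p0=[1.0, np.log(30.0)],
)
slope, log_horizon = params
print(f"Estimated 50% time horizon: {np.exp(log_horizon):.1f} minutes")
```

On that reading, the headline number is the human task length at which the fitted success probability is 50%, which is quite different from the model reliably handling every task of that length.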
Those are reasonable points, but I’m not sure they are enough to overcome the generally reasonable heuristic that dramatic events will go better if the people involved anticipate them and have had a chance to think about them and plan responses beforehand, than if those events take them by surprise.