“A fraudulent charity” does not sound to me much like “a charity that knowingly used a mildly overoptimistic figure for the benefits of one of its programs, even after admitting under pressure that it was wrong”. Rather, I think the rhetorical force of the phrase comes mostly from the fact that, to any normal English speaker, it conjures up the image of a charity that is a scam in the sense that it is taking money, not doing charitable work with it, and instead just putting it into the CEO’s (or whoever’s) personal bank account. My feeling on this isn’t really affected by whether the first thing meets the legal definition of fraud; it probably does. My guess is that many charities that almost no one would describe as “fraudulent organizations” have done something like this, or something equivalently bad, at some point in their histories, probably including some pretty effective ones.
Not that I think that means Sinergia have done nothing wrong. If they agree the figure is clearly overoptimistic, they should change it. Not doing so is deceptive, and probably illegal. But I find it a bit irritating that you are using what seems to me to be somewhat deceptive rhetoric whilst attacking them for being deceptive.
They seem quite different to me: one is about AIs being able to talk like a smart human, and the other is about their ability to actually do novel scientific research and other serious intellectual tasks.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, there are something like 90-something agree votes to 30-something disagree votes and 200 karma on Yarrow’s comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don’t think people really dropped the ball here; people were honestly struggling to take accusations of bad behaviour seriously without getting into witch-hunt dynamics.
“Because once a country embraces Statism, it usually begins an irreversible process of turning into a ‘shithole country’, as Trump himself eloquently put it.”
Ignoring tiny islands (some of them with dubious levels of independence from the US), the 10 nations with the largest percentages of GDP taken as government revenue include Finland, France, Belgium and Austria, although also, yes, Libya and Lesotho. In general, the top of the list for government revenue as a % of GDP seems to be a mixture of small islands, petro-states, and European welfare-state democracies, not places that are particularly impoverished or authoritarian: https://en.wikipedia.org/wiki/List_of_countries_by_government_spending_as_percentage_of_GDP#List_of_countries_(2024)
Meanwhile, the countries with the lowest levels of government revenue as a % of GDP that aren’t currently having some kind of civil war are places like Bangladesh, Sri Lanka, Iran and (weirdly) Venezuela.
This isn’t a perfect proxy for “statism”, obviously, but I think it shows that things are more complicated than simplistic libertarian analysis would suggest. Big states (in purely monetary terms) often seem to be a consequence of success. Maybe they also hold back further success, of course, but countries don’t seem to actively degenerate once they get there (i.e. growth might slow, but they are not in permanent recession).
I’d distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly become more AI-focused (and/or publicly admitted to a degree of focus on AI they’ve always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil’s website, and so far this year spending seems well down from 2018 but also well up from 2015, though two months isn’t much of a sample.) Not that that means you’re not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
It’s worth saying that the fact that most arrows go up on the OWID chart could just reflect two independent trends, growth rising almost everywhere and happiness rising almost everywhere, for completely independent reasons. Without cases where negative or zero growth persists for a long time, it’s hard to rule this out.
Those are reasonable points, but I’m not sure they are enough to overcome the generally reasonable heuristic that dramatic events will go better if the people involved anticipate them and have had a chance to think about them and plan responses beforehand than if those events take them by surprise.
“Real gdp, adjusted for variable inflation, shows dead even growth.” I asked about GDP per capita right now, not growth rates over time. Do you have a source showing that the US doesn’t actually have higher GDP per capita?
Inequality is probably part of the story, but I had a vague sense that median real wages are higher in the US. Do you have a source saying that’s wrong? Or that the difference goes away when you adjust for purchasing power?
Usually we are the ones accused (not always unfairly, to be honest, given Yudkowsky’s TIME article) of being so fanatical that we’d risk nuclear war to further our nefarious long-term goals. The claim that nuclear war would be preferable to us is at least a novel one.
Also, I don’t like Scott Alexander’s politics at all, but in the interests of strict accuracy, I don’t think he is a monarchist, or particularly sympathetic to monarchism (except insofar as he finds some individuals with far-right views who like monarchy kind of endearing). If anything, I had the impression that whilst Scott has certainly been influenced by and promoted the far right in many ways, the view that monarchism is just really, really silly was one of the things that genuinely kept him from regarding himself as fully in sympathy with the neo-reactionaries.
“In response, Epoch AI created Frontier Math — a benchmark of insanely hard mathematical problems. The easiest 25% are similar to Olympiad-level problems. The most difficult 25% are, according to Fields Medalist Terence Tao, “extremely challenging,” and would typically need an expert in that branch of mathematics to solve them.
Previous models, including GPT-o1, could hardly solve any of these questions.[20] In December 2024, OpenAI claimed that GPT-o3 could solve 25%.”
I think if you’re going to mention the seemingly strong performance of GPT-o3 on Frontier Math, it’s worth pointing out the extremely poor performance of all the LLMs tested when they were given Math Olympiad questions more recently, though they did use o3-mini rather than o3, so I guess it’s not a direct comparison: https://garymarcus.substack.com/p/reports-of-llms-mastering-math-have
“The USA Math Olympiad is an extremely challenging math competition for the top US high school students; the top scorers get prizes and an invitation to the International Math Olympiad. The USAMO was held this year March 19-20. Hours after it was completed, so there could be virtually no chance of data leakage, a team of scientists gave the problems to some of the top large language models, whose mathematical and reasoning abilities have been loudly proclaimed: o3-Mini, o1-Pro, DeepSeek R1, QwQ-32B, Gemini-2.0-Flash-Thinking-Exp, and Claude-3.7-Sonnet-Thinking. The proofs output by all these models were evaluated by experts. The results were dismal: None of the AIs scored higher than 5% overall.”
What’s DEAM?
Another, very obvious reason is just that more EA people are near real power now than in 2018, and with serious involvement in power and politics come tactical incentives to avoid saying what you actually think. I think that is probably a lot of what is going on with Anthropic people playing down their EA connections.
I don’t think it’s absolutely clear from the one-sentence quote alone that Amanda was claiming a personal lack of knowledge of EA (which would obviously be deceptive if she was), though I agree that is one reasonable reading. She has her GWWC membership fairly prominently displayed on her personal website, so if she’s trying to hide being or having been EA, she’s not doing so very strongly.
Depends how far left. I’d say centre-left views would get less pushback, but not necessarily views further left than that. But yeah, fair point that there is a standard set of views in the community that he is somewhat outside of.
If productivity is so similar, how come the US is quite a bit richer per capita? Is that solely accounted for by workers working longer hours?
Just as a side point, I do not think Amanda’s past relationship with EA can accurately be characterized as much like Jonathan Blow’s, unless he was far more involved than just being an early GWWC pledge signatory, which I think is unlikely. It’s not just that Amanda was, as the article says, once married to Will. She wrote her doctoral thesis on an EA topic, how to deal with infinities in ethics: https://askell.io/files/Askell-PhD-Thesis.pdf Then she went to work in AI for what I think were overwhelmingly likely to be EA reasons (though I admit I don’t have any direct evidence to that effect), given that it was in 2018, before the current excitement about generative AI, and relatively few philosophy PhDs, especially those who could fairly easily have gotten good philosophy jobs, made that transition. She wasn’t a public figure back then, but I’d be genuinely shocked to find out she didn’t have an at least mildly significant behind-the-scenes effect, through conversation (not just with Will), on the early development of EA ideas.
Not that I’m accusing her of dishonesty here or anything: she didn’t say that she wasn’t EA or that she had never been EA, just that Anthropic wasn’t an EA org. Indeed, given that I just checked and she still mentions being a GWWC member prominently on her website, and she works on AI alignment and wrote a thesis on a weird, longtermism-coded topic, I am somewhat skeptical that she is trying to personally distance herself from EA: https://askell.io/
No, I don’t move in corporate circles.
“widely (and imo falsely) believed that the openai coup was for EA reasons”
False why?
I don’t find accusations of fallacy helpful here. The authors say explicitly in the abstract that they estimated the probability of each step conditional on the previous ones. So they are not making a simple formal error like multiplying a bunch of unconditional probabilities whilst forgetting that this only works if the events are independent. Rather, you and Richard Ngo think that their estimates for the explicitly conditional probabilities are too low, and you are speculating that this is because they are still really thinking of the unconditional probabilities. But I don’t think “you are committing a fallacy” is a very good or fair way to describe “I disagree with your probabilities, and I have some unevidenced speculation about why you are giving probabilities that are wrong”.
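To spell out why this isn’t a formal error (my own gloss, not the authors’ wording): multiplying genuinely conditional probabilities is just the chain rule, which holds for any events, with no independence assumption needed:

\[
P\Big(\bigcap_{i=1}^{n} A_i\Big) \;=\; P(A_1)\,\prod_{i=2}^{n} P\Big(A_i \,\Big|\, \bigcap_{j=1}^{i-1} A_j\Big)
\]

So the alleged fallacy would have to be that the inputs are wrong, i.e. that each \(P(A_i \mid A_1, \dots, A_{i-1})\) was really estimated as if it were \(P(A_i)\). That is a disagreement about the estimates, not a mistake in the multiplication itself.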