Anything I write here is written purely on my own behalf, and does not represent my employer’s views (unless otherwise noted).
Erich_Grunewald
Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI
Doubts about Track Record Arguments for Utilitarianism
Risk of famine in Somalia
How many EA billionaires five years from now?
Can a Vegan Diet Be Healthy? A Literature Review
The Prospect of an AI Winter
See Response to Phil Torres’ ‘The Case Against Longtermism’ and Response to Recent Criticisms of Longtermism, including comments.
I for one really appreciate (1) that HLI has been producing these reports and generally directing EAs’ attention to well-being interventions, (2) the discussion this has generated (in particular the OP and this critique), and (3) HLI’s willingness to respond to those critiques and occasionally, as evidenced here, to some degree update based on them.
Tentative practical tips for using chatbots in research
Over the course of 2009, discussion on the Felicifia forum (archive) started to feel like an early EA community. For example, see the thread on charity choice and the applied ethics and philanthropy boards.
It’s really interesting to see, in the thread on charity choice, EmbraceUnity describing their “Utility, Attainability, and Obscurity” framework (see also this blog post from 2008) four years before Holden Karnofsky wrote about the Importance, Tractability and Neglectedness framework. I guess this is a sign that, for some reason, many of the key pieces of EA just fell into place in different locations at around the same time.
Note that the English page was created in January of this year. The stuff on the Swedish page about Nordiska motståndsrörelsen and vaccination scepticism and pseudoscience was added on September 14, after FLI signed the letter of intent.
Congratulations, I’m very excited about the work that CE is doing.
BTW, I’d be interested in reading a post collecting post-mortems of CE-incubated projects that didn’t pan out.
It’s embarrassing for the EA movement, too. It’s another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.
Probably disagree? Hard to say for sure since we lack details, but it’s not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. I guess one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And two, the board members have fiduciary duties to further the OAI nonprofit’s mission, i.e., to ensure that AGI benefits all of humanity. (They do not have a duty to ensure OAI is valued at billions of dollars, except in so far as that helps further its mission.)
If the board members had reason to believe that Sam Altman was acting contrary to OAI’s mission of ensuring that AGI benefits all humanity, perhaps moving to fire him was the responsible thing to do (even if it turns out badly ex post), and what has been irresponsible is the push by investors and others to reinstate him. I guess we will know better within the next few weeks, but I think it’s premature to say right now that the board acted irresponsibly.
Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:
[I]n one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it’s capitalism versus humanity.
I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don’t think this is unique to capitalism, or that things would be much better under some other economic system.
The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development, e.g., space technology, missile systems, military aircraft, etc. I couldn’t find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (and Japan, and Germany) on R&D, in terms of % of GDP. And it did not seem to do better at developing these technologies in a safe way than capitalist countries did (cf. Sverdlovsk, Chernobyl).
If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, and investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (of security, control, strategic competition, and economic growth) than free market competition.
Against LLM Reductionism
For an example of a different model: Drew DeVault, who’s fairly well known in the free software community, offered $20 to anyone who started a blog, with another $20 if they published an additional three posts in the next half year. It seems to have resulted in a number of new blogs, including several that are still active now, 2.5 years later.
He’s recently been vocal about AI X-Risk.
Yeah, but so have lots of people; it doesn’t mean they’re all longtermists. Same thing with Sam Altman—I haven’t seen any indication that he’s longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.
He funded Carrick Flynn’s campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.
Do you have a source on Schmidt funding Carrick Flynn’s campaign? Jacobin links this Vox article which says he contributed to Future Forward, but it seems implied that it was to defeat Donald Trump. Though I actually don’t think this is a strong signal, as Carrick Flynn was mostly campaigning on pandemic prevention and that seems to make sense on neartermist views too.
His philanthropic organisation Schmidt Futures has a future focused outlook and funds various EA orgs.
I know Schmidt Futures has “future” in its name, but as far as I can tell they’re not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word “future”.
Animal Testing Is Exploitative and Largely Ineffective
I think the idea is that lots of money is spent on treating diseases caused by aging, but little is spent on preventing aging in the first place. So I don’t see a contradiction.
I think there’s something to this, but:
My impression of Eric Schmidt is that he is not a longtermist, and if anything has done a lot to accelerate AI progress.
The October 7 controls have not “devastated critical supply chains”. The linked article gives no evidence for this claim. China has something like 10% or less of the chip market share, and the export controls don’t affect other countries’ abilities to produce chips (though they do prevent some chips from being sold to China). Most fabs right now have utilization rates well below 100%, meaning they produce fewer chips than they could due to weak demand.
The October 7 controls also have not “upset markets” globally, or at least the linked article gives no evidence for this claim. Memory chip-makers like Samsung have seen profits fall, but this seems to be a normal business cycle thing—semiconductors, and especially memory chips, are a cyclical industry, sensitive to consumer demand, and the current downturn is almost certainly related to the global financial downturn and associated reduction in consumer demand.
I think the October 7 controls have affected and will affect markets, but mostly by reducing profits of companies selling chips and equipment to China, and reducing the supply of some chips and equipment within China (their intended purpose). There’ll probably be other, indirect effects down the line, but it’s hard to say what those will be now.
I also note a tension between those two points—the first blames the October 7 controls for there being a chip supply shortage, and the second blames the controls for there being a chip oversupply. Neither is true.
I disagree with the claims that the October 7 controls have “failed spectacularly at achieving their stated ambitions” and that despite them “China’s AI research has managed to continue apace”.
I basically disagree with the linked article.
It states that Nvidia is releasing export-control-adapted versions of its chips with reduced interconnect bandwidth (to fall below the export control thresholds) for the Chinese market. This is true, but the gap between the state of the art and what can be sold to China will grow.
It seems to suggest that compute will be less important in future. I think that’s unlikely, at least for developing frontier models.
Another purpose of the October 7 controls was to limit Chinese chip-makers’ access to equipment, materials and software, and they seem tentatively pretty successful at that (though time will tell).
I think the “increased West-China tensions” point is right though and fairly concerning.
I also think the “CSET was a major contributor to the October 7 controls” point is right, but whether this was ex ante good or bad probably depends on one’s views on AI x-risk.