I’d like to point out that Ajeya Cotra’s report was about “transformative AI”, which had a specific definition:
I define “transformative artificial intelligence” (transformative AI or TAI) as “software” (i.e. a computer program or collection of computer programs) that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did. This is adapted from a definition introduced by Open Philanthropy CEO Holden Karnofsky in a 2016 blog post.
How large is an impact “as profound as the Industrial Revolution”? Roughly speaking, over the course of the Industrial Revolution, the rate of growth in gross world product (GWP) went from ~0.1% per year before 1700 to ~1% per year after 1850, a tenfold acceleration. By analogy, I think of “transformative AI” as software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere that it would be economically profitable to use it).
Currently, the world economy is growing at ~2-3% per year, so TAI must bring the growth rate to 20-30% per year if used everywhere it would be profitable to use. This means that if TAI is developed in year Y, the entire world economy would more than double by year Y + 4 (since 1.2^4 ≈ 2.07). This is a very extreme standard: even 6% annual growth in GWP is outside the bounds of what most economists consider plausible in this century.
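To sanity-check the doubling arithmetic, here is a minimal sketch in Python. The growth rates are the round numbers quoted above, not new estimates:

```python
import math

def years_to_double(annual_growth_rate: float) -> float:
    """Years for GWP to double at a constant annual growth rate,
    from the rule (1 + r)^t = 2, i.e. t = ln 2 / ln(1 + r)."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Today's ~2-3% growth versus the TAI threshold of 20-30% growth.
for rate in (0.02, 0.03, 0.20, 0.30):
    print(f"{rate:.0%} growth -> GWP doubles in {years_to_double(rate):.1f} years")
```

At 20% annual growth the economy doubles in about 3.8 years, so even the low end of the TAI threshold implies the world economy more than doubles within four years, as claimed.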
My personal belief is that a median timeline of ~2050 for this specific development is still reasonable, and I don’t think the timelines in the Bio Anchors report have been falsified. In fact, my current median timeline for TAI, by this definition, is around 2045.
I think the claim that Yudkowsky’s views on AI risk are meaningfully influenced by money is very weak. My guess is that he could easily find another opportunity unrelated to AI risk to make $600k per year if he searched even moderately hard.
The claim that my views are influenced by money is more plausible, because I stand to profit far more from my views than Yudkowsky stands to profit from his. However, while perhaps plausible from the outside, this claim does not match my personal experience. I developed my core views about AI risk before I was in a position to profit much from them, as shown by the hundreds of comments, tweets, in-person arguments, DMs, and posts from at least 2023 onward in which I expressed skepticism about AI risk arguments and AI pause proposals. As far as I remember, I had no intention of starting an AI company until very shortly before Mechanize was created. Moreover, if I were engaging in motivated reasoning, I could simply have stayed silent about my views. Alternatively, I could have started a safety-branded company that nonetheless engages in capabilities research, like many of the ones that already exist.
It seems implausible that spending my time writing articles advocating for AI acceleration is the most selfishly profitable use of my time. The direct impact of the time I spend building Mechanize is probably going to have a far stronger effect on my personal net worth than writing a blog post about AI doom. However, while I do not think writing articles like this one is very profitable for me personally, I do think it is helpful for the world because I see myself as providing a unique perspective on AI risk that is available almost nowhere else. As far as I can tell, I am one of only a very small number of people in the world who have both engaged deeply with the arguments for AI risk and yet actively and explicitly work toward accelerating AI.
In general, I think people overestimate how much money influences views on these questions. It seems clear to me that people are influenced far more by peer effects and by the incentives of the social group they belong to. For comparison, many billionaires advocate for tax increases, or vote for politicians who support them. This makes sense once you realize that merely advocating or voting for a particular policy is very unlikely to produce change that meaningfully affects you personally. Bryan Caplan has discussed this logic in the context of incentives under democracy, and I generally find his arguments compelling.