I think the claim that Yudkowsky’s views on AI risk are meaningfully influenced by money is very weak. My guess is that he could easily find another opportunity, unrelated to AI risk, that pays $600k per year if he searched even moderately hard.
The claim that my views are influenced by money is more plausible, because I stand to profit from my views far more than Yudkowsky stands to profit from his. However, while perhaps plausible from the outside, this claim does not match my personal experience. I developed my core views about AI risk before I was in a position to profit much from them. This is indicated by the hundreds of comments, tweets, in-person arguments, DMs, and posts from at least 2023 onward in which I expressed skepticism about AI risk arguments and AI pause proposals. As far as I remember, I had no intention of starting an AI company until very shortly before Mechanize was created. Moreover, if I were engaging in motivated reasoning, I could have simply stayed silent about my views. Alternatively, I could have started a safety-branded company that nonetheless engages in capabilities research, like many of the ones that already exist.
It seems implausible that writing articles advocating for AI acceleration is the most selfishly profitable use of my time. The time I spend building Mechanize will probably have a far stronger effect on my personal net worth than any blog post about AI doom. However, while I do not think writing articles like this one is very profitable for me personally, I do think it is helpful for the world, because I see myself as providing a perspective on AI risk that is available almost nowhere else. As far as I can tell, I am one of only a very small number of people who have engaged deeply with the arguments for AI risk and yet actively and explicitly work to accelerate AI.
In general, I think people overestimate how much money influences views about these things. It seems clear to me that people are influenced far more by peer effects and by the incentives of the social groups they belong to. For comparison, many billionaires advocate for tax increases, or vote for politicians who support them. This makes sense once you realize that merely advocating or voting for a particular policy is very unlikely to create change that meaningfully affects you personally. Bryan Caplan has discussed this logic in the context of arguments about incentives under democracy, and I generally find his arguments compelling.
Your calculation implicitly assumes that preventing AI takeover permanently secures human control over the universe for billions of years. In other words, you are treating the choice as one between two possible futures: a universe entirely colonized by humans versus a universe entirely colonized by AI. That assumption is what produces the enormous numbers in your estimate.
But, in my view, there is a more realistic way to model this. If preventing AI takeover today does not permanently secure human control over the universe, but instead merely delays the eventual loss of control, then the actual effect of prevention is much smaller than your calculation suggests. The relevant outcome is then not the difference between a human-controlled universe and an AI-controlled universe over billions of years, but rather an extension of human control over Earth for some additional period before control is eventually lost anyway. That period, however long it might be in human terms, is presumably extremely brief by astronomical standards.
When you model the situation this way, the numbers change dramatically. The expected value of preventing AI takeover drops by orders of magnitude compared to your original estimate, which directly undercuts the argument you are making.
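To make the size of that shift concrete, here is a toy calculation. The specific numbers (a billion-year cosmic horizon, a thousand-year delay) are illustrative assumptions of my own, not figures from your estimate; the point is only the structure of the comparison.

```python
# Toy comparison of the two models. All numbers are illustrative
# assumptions, not figures taken from the original estimate.

cosmic_horizon_years = 1_000_000_000  # horizon assumed in the original-style estimate
delay_years = 1_000                   # assumed extra years of human control under the delay model

# Model 1: preventing takeover flips the entire future from
# AI-controlled to human-controlled.
value_if_prevention_is_permanent = cosmic_horizon_years

# Model 2: preventing takeover only postpones the loss of control,
# buying a finite extension of human control over Earth.
value_if_prevention_only_delays = delay_years

ratio = value_if_prevention_is_permanent / value_if_prevention_only_delays
print(f"Permanent-control model: {value_if_prevention_is_permanent:,} years of human control at stake")
print(f"Delay model:             {value_if_prevention_only_delays:,} years of human control at stake")
print(f"Ratio between the two:   {ratio:.0e}")  # ~1e6 with these assumptions
```

With these particular numbers the estimated value of prevention falls by roughly six orders of magnitude. The exact figure obviously depends on how long one thinks human control could realistically be extended, which is precisely the assumption in dispute.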