I think the claim that Yudkowsky’s views on AI risk are meaningfully influenced by money is very weak. My guess is that he could easily find another opportunity unrelated to AI risk to make $600k per year if he searched even moderately hard.
The claim that my views are influenced by money is more plausible, since I stand to profit far more from my views than Yudkowsky does from his. However, while perhaps plausible from the outside, this claim does not match my personal experience. I developed my core views about AI risk before I came into a position to profit much from them. This is indicated by the hundreds of comments, tweets, in-person arguments, DMs, and posts from at least 2023 onward in which I expressed skepticism about AI risk arguments and AI pause proposals. As far as I remember, I had no intention of starting an AI company until very shortly before Mechanize was created. Moreover, if I were engaging in motivated reasoning, I could simply have stayed silent about my views. Alternatively, I could have started a safety-branded company that nonetheless engages in capabilities research, like many of those that already exist.
It seems implausible that writing articles advocating for AI acceleration is the most selfishly profitable use of my time. The time I spend building Mechanize will probably have a far stronger effect on my personal net worth than any blog post about AI doom. However, while I do not think writing articles like this one is very profitable for me personally, I do think it is helpful for the world, because I see myself as providing a perspective on AI risk that is available almost nowhere else. As far as I can tell, I am one of only a very small number of people in the world who have engaged deeply with the arguments for AI risk and yet actively and explicitly work toward accelerating AI.
In general, I think people overestimate how much money influences people’s views about these things. It seems clear to me that people are influenced far more by peer effects and incentives from the social group they reside in. As a comparison, there are many billionaires who advocate for tax increases, or vote for politicians who support tax increases. This actually makes sense when you realize that merely advocating or voting for a particular policy is very unlikely to create change that meaningfully impacts you personally. Bryan Caplan has discussed this logic in the context of arguments about incentives under democracy, and I generally find his arguments compelling.
> I think the claim that Yudkowsky’s views on AI risk are meaningfully influenced by money is very weak.
To be clear, I agree. I also agree with your general point that other factors are often more important than money. Some of these factors include the allure of millennialism, or, more broadly, the allure of any totalizing worldview or “ideology”.
I was trying to make a general point against accusations of motivated reasoning related to money, at least in this context. If two sets of people are each getting paid to work on opposite sides of an issue, why only accuse one side of motivated reasoning?
> This is indicated by the hundreds of comments, tweets, in-person arguments, DMs, and posts from at least 2023 onward in which I expressed skepticism about AI risk arguments and AI pause proposals.
Thanks for describing this history. Evidence of a similar kind lends strong credence to Yudkowsky having formed his views independently of the influence of money as well.
My general view is that reasoning is complex, motivation is complex, people’s real psychology is complex, and that the common forum behaviour of accusing someone of exhibiting bias X is probably a misguided pop-science simplification of the relevant scientific knowledge. For instance, when people engage in distorted thinking, the actual underlying reasoning often turns out to be a surprisingly complicated multi-step sequence.
The essay above that you co-wrote is incredibly strong. I was the one who originally sent it to Vasco and, since he is a prolific cross-poster and I don’t like to cross-post under my name, encouraged him to cross-post it. I’m glad more people in the EA community have now read it. I think everyone in the EA community should read it. It’s regrettable that there’s only been one object-level comment on the substance of the essay so far, and so many comments about this (to me) relatively uninteresting and unimportant side point about money biasing people’s beliefs. I hope more people will comment on the substance of the essay at some point.
Thanks for this comment! I think your arguments about your own motivated reasoning are somewhat moot, since they read more like an explanation of why your behavior and public-facing communication are not outright deception (which seems right!). As I see it, motivated reasoning is to a large extent about deceiving yourself and maintaining a coherent self-narrative, so it is perfectly plausible that one is willing to pay a substantial cost to maintain it. (Speaking generally; I’m not very interested in discussing whether you’re doing this in particular.)