The kind of work MIRI is doing and the kind of experience Yudkowsky and Soares have aren’t really transferable to anything else.
Soares worked as a software engineer at Microsoft and Google before joining MIRI, and could trivially rejoin industry after a few weeks of self-study to earn more money if for some reason he decided he wanted to do that. I won’t argue the point about EY—it seems obvious to me that his market value as a writer/communicator is well in excess of his 2023/2024 compensation, given his track record, but the argument here is less legible. Thankfully, it turns out that somebody anticipated this exact incentive problem and took action to mitigate it.
It’s interesting to claim that money stops being an incentive for people after a certain fixed amount well below $1 million/year. Let’s say that’s true — maybe it is true — then why do we treat people like Sam Altman, Dario Amodei, Elon Musk, and so on as having financial incentives around AI? Are we wrong to do so? (What about AI researchers and engineers who receive multi-million-dollar compensation packages? After the first, say, $5 million, are they free and clear to form unmotivated opinions?)
I think a very similar argument can be made about the Mechanize co-founders. They could make “enough” money doing something else — including their previous jobs — even if it’s less money than they might stand to gain from a successful AI capabilities startup. Should we then rule out money as an incentive?
To be clear, I don’t claim that Eliezer Yudkowsky, Nate Soares, others at MIRI, or the Mechanize co-founders are unduly motivated by money in forming their beliefs. I have no way of knowing that, and since there’s no way to know, I’m willing to give them all the benefit of the doubt. I’m saying I dislike accusations of motivated reasoning in large part because they’re so easy to level at people you disagree with, and it’s easy to overlook how the same argument could apply to yourself or people you agree with. I’m pointing out how a similar accusation could be levelled at Yudkowsky and Soares in order to illustrate this general point, specifically to challenge Nick Laing’s accusation against the Mechanize co-founders above.
I generally think that ideological motivation around AGI is a powerful motivator. I think the psychology around how people form their beliefs on AGI is complex and involves many factors (e.g. millennialist cognitive bias, to name just one).
It’s interesting to claim that money stops being an incentive for people after a certain fixed amount well below $1 million/year.
Where is this claim being made? I think the suggestion was that someone found it desirable to reduce the financial incentive gradient for EY taking any particular public stance, not some vastly general statement like what you’re suggesting.
Personally I don’t think Sam Altman is motivated by money. He just wants to be the one to build it.
I sense that Elon Musk’s and Dario Amodei’s motivations are more complex than “motivated by money”, but I can imagine that the actual dollar amounts matter more to them than to Sam.