[I only read the excerpts quoted here, so apologies if this remark is addressed in the full post.]
I think there’s likely something to the author’s observation, and I appreciate their frankness about why they think they engage with rationalist content. (I’d also guess they’re far from alone in acting partly on this motivation.)
However, if we believe (as I think we should) that there is a non-negligible existential risk from AI this century, then the excerpt sounds too negative to me.
While the general idea of AI risk didn’t originate with them, my impression is that Yudkowsky and early rationalists had a significant counterfactual impact on the state of the AI alignment field. And not just by convincing others of “rationalism” or AI risk worries specifically (though I also don’t understand why the author discounts this type of ‘winning’), but also by contributing object-level ideas. Even people who today have high-level disagreements with MIRI on AI alignment often engaged with MIRI’s ideas, and may have developed their own thoughts partly in reaction against them. While it’s far from clear how large or valuable this impact was, it seems at least plausible to me that without the work of early rationalists, the AI alignment field today wouldn’t just be smaller but also worse in terms of the quality of its content.
There also arguably are additional ‘rationalist winners’ behind the “people that jump to mind”. To give just one example, Holden Karnofsky (who the author named) cited Carl Shulman (arguably an early rationalist, though I don’t know whether he identifies as such) in particular, as well as various other parts of the rationalist community and rationalist thought more broadly, in his document Some Key Ways In Which I’ve Changed My Mind. This change of mind was arguably worth billions on certain views, and was significantly caused by people the author fails to mention.
Lastly, even from a very crude perspective that’s agnostic about AI issues, going from ‘a self-taught blogger’ to ‘senior researcher at a multi-million dollar research institute significantly inspired by their original ideas’ arguably looks pretty impressive.
(Actually, maybe you don’t even need to believe in AI risk, as similar remarks apply to EA in general: while the momentum from GiveWell and the Oxford community may well have sufficed to get some sort of EA movement off the ground, it seems clear to me that the rationality community had a significant impact on EA’s trajectory. Again, it’s not obvious, but at least plausible, that there are some big wins hidden in that story.)
Are these ‘winners’ rare? Yes, but big wins are rare in general. Are ‘rationalist winners’ rarer than we’d predict based on some prior distribution of success for some reference population? I don’t know. Are there various ways the rationality community could improve to increase its chances of producing winners? Very likely yes, but again I think that’s the answer you should expect in general. My intuitive guess is that the rationality community tends to be worse than typical at some winning-relevant things (e.g. perhaps modeling and engaging in ‘political’/power dynamics) and better at others (e.g. perhaps anticipating low-probability catastrophes), and I feel fairly unsure how this comes out on net.
(For disclosure, I say all of this as someone who, I suspect, is more skeptical of and negative about the rationality community than most EAs, and who certainly feels personally somewhat alienated by, and sometimes annoyed with, parts of it.)
I like this comment. To respond to just a small part of it:
And not just by convincing others of “rationalism” or AI risk worries specifically (though I also don’t understand why the author discounts this type of ‘winning’)
I’ve also only read the excerpt, not the full post. There, the author seems to exclude/discount as ‘winning’ only convincing others of rationalism, not convincing them of AI risk worries.
I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more likely to ‘win’ was by making one better at convincing others of rationalism, then that ‘win’ wouldn’t provide any real value to the world; it could make the convincers rich and high-status, but only by profiting off something like a pyramid scheme.
This would be similar to a person who writes a book or teaches a course on something like how to get rich quick, but who seems to have gotten rich quick only via those books or courses.
(I think the same thing would maybe be relevant with regards to convincing people of AI risk worries, if those worries were unfounded. But my view is that the worries are well-founded enough to warrant attention.)
But I think that, if rationalism makes people systematically more likely to ‘win’ in other ways as well, then convincing others of rationalism:
should also be counted as a ‘proper win’
would be more like someone who is genuinely good at running businesses and also earns money by writing about their good approach to running businesses, rather than like a pyramid scheme