[Link] “Where are all the successful rationalists?”
https://applieddivinitystudies.com/2020/09/05/rationality-winning
Excerpt:
So where are all the winners?
The people that jump to mind are Nick Bostrom (Oxford Professor of Philosophy, author), Holden Karnofsky and Elie Hassenfeld (run OpenPhil and GiveWell, directing ~300M in annual donations) and Will MacAskill (Oxford Professor of Philosophy, author).
But somehow that feels like cheating. We know rationalism is a good meme, so it doesn’t seem fair to cite people whose accomplishments are largely built off of convincing someone else that rationalism is important. They’re successful, but at a meta-level, only in the same way Steve Bannon is successful, and to a much lesser extent.
And this, from near the end:
The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse. Talking to non-rationalists, I am perpetually holding myself back from saying “oh yes, that’s just the thing where no one has coherent meta-principles” or “that’s the thing where facts are purpose-dependent”. Talking to rationalists is not much better, since it feels less like a free exchange of ideas, and more like an exchange of “have you read post?”
There are some specific areas where rationality might help, like using Yudkowsky’s Inadequate Equilibria to know when it’s plausible to think I have an original insight that is not already “priced into the market”, but even here, I’m not convinced these beat out specific knowledge. If you want to start a defensible monopoly, reading about business strategy or startup-specific strategy will probably be more useful than trying to reason about “efficiency” in a totally abstract sense.
And yet, I will continue reading these blogs, and if Slate Star Codex ever releases a new post, I will likely drop whatever I am doing to read it. This has nothing to do with self-improvement or “systematized winning”.
It’s solely because weird blogs on the internet make me feel less alone.
The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren’t yet at the top of their fields but that’s unsurprising as most are 25-35.
The rationality community, inasmuch as it doesn’t overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be to become thinkers and writers who offer the world fresh ideas and a unique perspective on things. That does seem to be the comparative advantage of that group. So then it’s not so surprising that we don’t see lots of people e.g. getting rich. They mostly aren’t trying to. 🤷‍♂️
[I only read the excerpts quoted here, so apologies if this remark is addressed in the full post.]
I think there’s likely something to the author’s observation, and I appreciate their frankness about why they think they engage with rationalist content. (I’d also guess they’re far from alone in acting partly on this motivation.)
However, if we believe (as I think we should) that there is a non-negligible existential risk from AI this century, then the excerpt sounds too negative to me.
While the general idea of AI risk didn’t originate with them, my impression is that Yudkowsky and earlier rationalists had a significant counterfactual impact on the state of the AI alignment field. And not just by convincing others of “rationalism” or AI risk worries specifically (though I also don’t understand why the author discounts this type of ‘winning’), but also by contributing object-level ideas. Even people who today have high-level disagreements with MIRI on AI alignment often engaged with MIRI’s ideas, and may have developed their own thoughts partly in reaction against them. While it’s far from clear how large or valuable this impact was, it seems at least plausible to me that without the work by early rationalists, the AI alignment field today wouldn’t just be smaller but also worse in terms of the quality of its content.
There also arguably are additional ‘rationalist winners’ behind the “people that jump to mind”. To give just one example, note that Holden Karnofsky (whom the author named) cited Carl Shulman (arguably an early rationalist, though I don’t know if he identifies as such) in particular, as well as various other parts of the rationalist community and rationalist thought more broadly, in his document Some Key Ways In Which I’ve Changed My Mind. This change of mind was arguably worth billions by certain views, and was significantly caused by people the author fails to mention.
Lastly, even from a very crude perspective that’s agnostic about AI issues, going from ‘a self-taught blogger’ to ‘senior researcher at a multi-million dollar research institute significantly inspired by their original ideas’ arguably looks pretty impressive.
(Actually, maybe you don’t need to believe in AI risk, as similar remarks apply to EA in general: While the momentum from GiveWell and the Oxford community may well have sufficed to get some sort of EA movement off the ground, it seems clear to me that the rationality community had a significant impact on EA’s trajectory. Again, it’s not obvious, but at least plausible, that there are some big wins hidden in that story.)
Are these ‘winners’ rare? Yes, but big wins are rare in general. Are ‘rationalist winners’ rarer than we’d predict based on some prior distribution of success for some reference population? I don’t know. Are there various ways the rationality community could improve to increase its chances of producing winners? Very likely yes, but again I think that’s the answer you should expect in general. My intuitive guess is that the rationality community tends to be worse than typical at some winning-relevant things (e.g. perhaps modeling and engaging in ‘political’/power dynamics) and better at others (e.g. perhaps anticipating low-probability catastrophes), and I feel fairly unsure how this comes out on net.
(For disclosure, I say all of this as someone who, I suspect, tends to be more skeptical/negative about the rationality community than the typical EA, and who certainly is personally somewhat alienated and sometimes annoyed by parts of it.)
I like this comment. To respond to just a small part of it:
I’ve also only read the excerpt, not the full post. There, the author seems to exclude/discount as ‘winning’ only convincing others of rationalism, not convincing them of AI risk worries.
I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more likely to ‘win’ was by making one better at convincing others of rationalism, then that ‘win’ wouldn’t provide any real value to the world; it could make the convincers rich and high-status, but by profiting off of something like a pyramid scheme.
This would seem similar to a person writing a book or teaching a course on something like how to get rich quick, but with that person seeming to have gotten rich quick only via those books or courses.
(I think the same thing would maybe be relevant with regards to convincing people of AI risk worries, if those worries were unfounded. But my view is that the worries are well-founded enough to warrant attention.)
But I think that, if rationalism makes people systematically more likely to ‘win’ in other ways as well, then convincing others of rationalism:
- should also be counted as a ‘proper win’
- would be more like someone being genuinely good at running businesses as well as being good at getting money for writing about their good approaches to running businesses, rather than like a pyramid scheme
This might not count as winning in the sense of being extremely rich and successful by conventional standards, but I think people outside the forecasting space underestimate the degree to which superforecasters are disproportionately likely to be rationalist or rationalist-adjacent.
Registering that I think the poll here is likely (~60%?) to end up being >25% for P(interacts with rationality | is a superforecaster), which is way above base rates.
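To spell out what “way above base rates” would mean here, a minimal back-of-the-envelope sketch (the 0.1% base rate below is a purely hypothetical assumption, not a figure from the poll or the post):

```python
# Back-of-the-envelope sketch; all numbers are hypothetical assumptions.
base_rate = 0.001  # assumed share of the general population that "interacts with rationality"
observed = 0.25    # the >25% threshold predicted for P(interacts with rationality | superforecaster)

# If the poll came out at 25%, rationalist-adjacent people would be
# over-represented among superforecasters by roughly this factor:
print(f"over-representation factor: {observed / base_rate:.0f}x")  # -> 250x
```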
Update: as an empirical matter, I most likely did not predict the poll correctly.
Here are the poll results so far.
This post seems to fail to ask the fundamental question “winning at what?”. If you don’t want to become a leading politician or entrepreneur, then applying rationality skills obviously won’t help you get there.
The EA community (which, the author fails to note, is distinct from the rationality community) does have a clear goal, however: doing a lot of good. The amount of money GiveWell has been able to move to AMF has clearly grown a lot over the past ten years, but as the author says, that only proves they have convinced others of rationality. We still need to check whether deaths from malaria have actually gone down by a corresponding amount as a result of AMF doing more distributions. I am not aware of any investigations of this question.
Some people in the rationalist community likely only have ‘understand the world really well’ as their goal, which is hard to measure the success of, though better forecasts can be one example. I think the rationality community stocking up on food in February before it was sold out everywhere is a good example of a success, but probably not the sort of shining example the author might be looking for.
If your goal is to have a community where a specific rationalist-ish cluster of people shares ideas, it seems like the rationalist community has done pretty well.
[Edit: redacted for being quickly written, and in retrospect failing to engage with the author’s perspective and the rationality community’s stated goals]
I found Roko’s Twitter thread in response interesting, arguing that
- being very successful requires very high conscientiousness, which is very rare, so it’s no surprise that a small group hasn’t seen much of it
- the rationalist community makes people focus less on what their social peer groups consider appropriate/desirable, which is key to being supported by them
Personally, what comes to mind here is that I’ve always felt uneasy about not having a semi-solid grasp of *everything* from the bottom up, and the rationalist project has been great for helping me in that regard.
The question from the title reminds me of Sarah Constantin’s 2017 blog post The Craft is not the Community, which I thought had some interesting related observations, analysis, and suggestions. (Though as an outsider of the Bay Area rationalist community I often can’t independently assess its accuracy.)
I’m reminded of Romeo’s comment about rationality attracting “the walking wounded” on a similar post from a couple years back.
I actually think rationality is doing pretty well, all things considered, though I definitely resonate with Applied Divinity Studies’ viewpoint. Tsuyoku Naritai!