Read The Sequences
There is heavy overlap between the effective altruism and rationality communities, but they are not the same thing. Within the effective altruism community, especially among those who are newer to the movement and were introduced to it through a university group, I’ve noticed some tension between the two. I often get the sense that people into effective altruism who haven’t read much of the canonical LessWrong content write off the rationalist stuff as weird or unimportant.
I think this is a pretty big mistake.
Lots of people doing very valuable work within effective altruism got interested in it by first encountering rationalist content, in particular The Sequences and Harry Potter and the Methods of Rationality. I think that is for good reason. If you haven’t come across those writings before, here’s a nudge to give The Sequences a read.
The Sequences are a (really long) collection of blog posts written by Eliezer Yudkowsky on the science and philosophy of human rationality. They are divided into sequences, each a series of posts on a related topic. Most of the posts would have been pretty useful to me on their own, but I got even more value from reading the posts of a particular sequence together, which helped me internalise the concepts better.
There are slightly fewer posts in The Sequences than there are days in the year, so reading the whole thing over the coming year is very doable! You can also read Highlights from the Sequences, which covers 50 of the best essays.
Below, I’ll list some of the parts that I have found especially helpful and that I often try to point to when talking to people into effective altruism (things I wish they had read too).
Fake Beliefs is an excellent sequence if you already know a bit about biases in human thinking. The key insight there is about making beliefs pay rent (“don’t ask what to believe—ask what to anticipate”) and that sometimes your expectations can come apart from your professed beliefs (fake beliefs). These ideas helped me notice when that happens, for example when I believe I believe something but actually do not. It happens a bunch when I start talking about abstract, wordy things but forget to ask myself what I would actually expect to see in the world if the things I am saying were true.
Noticing Confusion is a cool sequence that talks about things like:
What is evidence? (“For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target”)
Your strength as a rationalist is your ability to be more confused by fiction than by reality—noticing confusion when something doesn’t check out and going EITHER MY MODEL IS FALSE OR THIS STORY IS WRONG
Absence of evidence is evidence of absence, and conservation of expected evidence (“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction”)
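To make the conservation point concrete, here is a minimal numeric sketch (the numbers are invented purely for illustration): however you expect the evidence to come out, your expected posterior belief, averaged over the possible observations, has to equal your prior.

```python
# Conservation of expected evidence: a minimal sketch with made-up numbers.
prior = 0.30                    # P(H): prior belief in some hypothesis H
p_e_given_h = 0.90              # P(E | H): chance of seeing the evidence if H is true
p_e_given_not_h = 0.60          # P(E | not H): chance of seeing it anyway

# Total probability of observing the evidence.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior if the evidence is seen (a likely but weak update towards H)...
posterior_seen = p_e_given_h * prior / p_e
# ...and if it is absent (an unlikely but stronger update away from H).
posterior_absent = (1 - p_e_given_h) * prior / (1 - p_e)

# The expected posterior equals the prior, so a strong chance of weak evidence
# one way is balanced by a weak chance of strong evidence the other way.
expected_posterior = posterior_seen * p_e + posterior_absent * (1 - p_e)
print(round(posterior_seen, 2), round(posterior_absent, 2), round(expected_posterior, 2))
# -> 0.39 0.1 0.3
```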
I am often surrounded by people who are very smart and say convincing-sounding things all the time. The ideas mentioned above have helped me better recognise when I’m confused and when a smooth-sounding argument doesn’t match up with how I think the world actually works.
Against Rationalisation has things that are useful to remember:
Knowing about biases can hurt people. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarisation. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to some biases.
Not to avoid your belief’s real weak points. “Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind.”
Motivated stopping and motivated continuation. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there’s a lot of fast cheap evidence you haven’t gathered yet. You should suspect motivated continuation when some evidence is leaning in a way you don’t like, but you decide that more evidence is needed—expensive evidence that you know you can’t gather anytime soon.
Is that your true rejection? “Is that simple straightforward-sounding reason your true rejection [for a position you disagree with], or does it come from intuition-X or professional-zeitgeist-Y?”
I once facilitated an effective altruism intro fellowship. Sometimes, the participants in this fellowship would have criticisms or questions that I hadn’t thought of. Even so, my mind would quickly come up with a convincing-sounding response and I would feel very rational. That’s rationalisation. This also happens when I’m alone, in the privacy of my own mind: the urge to find a convincing argument against something I don’t want to believe and quickly move on, and the urge to put a lot of effort into gathering evidence for something I want to believe. Scary! But I notice it more now.
Cached Thoughts was useful for recognising when I am simply accepting and repeating ideas without actually evaluating or understanding them. Rohin Shah, an AI safety researcher, has previously mentioned that he estimates there are ~50 people in the world who can make a case for working on AI alignment that he wouldn’t consider clearly flawed. Lots of people would disagree with Rohin about what counts as a not-clearly-flawed argument but I think the general pattern of “there are way more people who think they know the arguments and can parrot them compared to people who can actually generate them” is true in lots of areas. This is one type of thing that the ideas in this post can help with:
What patterns are being completed, inside your mind, that you never chose to be there?
If this idea had suddenly occurred to you personally, as an entirely new thought, how would you examine it critically?
Try to keep your mind from completing the pattern in the standard, unsurprising, already-known way. It may be that there is no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in the answer automatically.
But is it true? Don’t let your mind complete the pattern! Think!
Every Cause Wants to Be a Cult points out something that happens because of human nature, regardless of how worthy your cause is: the need to actively push back against sliding into cultishness. Cultishness doesn’t just happen as a result of malevolence or stupidity but whenever you have a group of people with an unusual goal who aren’t making a constant effort to resist the cult attractor. The post doesn’t have suggestions on how to do this (there are other posts that cover that); it just points out that cultishness is the default unless you are actively making an effort to prevent it. For me, this is helpful to remember as I am often part of groups with unusual goals.
Letting Go is a sequence on, well, how to let go of untrue beliefs when you change your mind instead of holding on. It has posts on The Importance of Saying “Oops”, on using the power of innocent curiosity, on leaving a line of retreat so that you can more easily evaluate the evidence for beliefs that make you uncomfortable, and how to stage a Crisis of Faith when there is a belief you have had for a while that is surrounded by a cloud of known arguments and refutations, that you have invested a lot in, and that has emotional consequences.
I first read these posts when I had doubts about my religious beliefs but they were still a huge part of my identity. The tools presented in the sequence made it easier for me to say “oops” and move on instead of just living with a cloud of doubts. I have found it useful to come back to these ideas when I start noticing uncomfortable doubts about a major belief where changing my mind on it would have emotional and social consequences.
Fake Preferences has some blog posts that I found valuable, especially Not For the Sake of Happiness Alone (helped me notice that my values aren’t reducible to just happiness), Fake Selfishness (people usually aren’t genuinely selfish—I do actually care about things outside myself), and Fake Morality (“The fear of losing a moral compass is itself a moral compass”).
The Quantified Humanism sequence has some bangers that have always been relevant for effective altruists and are especially relevant today. Ends Don’t Justify Means (Among Humans) and Ethical Injunctions caution against doing unethical things for the greater good because we run on corrupted hardware and having rules to not do certain things even if it feels like the right thing to do protects us from our own cleverness. “For example, you shouldn’t rob banks even if you plan to give the money to a good cause.”
The Challenging the Difficult sequence is about solving very difficult problems: making an extraordinary effort to do the impossible. It’s not just inspiring but also helpful for examining how I am approaching my goals, whether I am actually doing what needs to be done and aiming to win, or just acting out my role and trying to try.
In summary, I think the Sequences have lots of valuable bits for people aiming to have a positive impact. I have found them valuable for my thinking. If you haven’t encountered them before, I recommend giving them a try.
To complement your recommendation, I would also add that Yudkowsky’s Sequences end up transmitting a somewhat packaged worldview, and I think that there are some dangers in that.
I agree that their summarization work is valuable, but some more unmediated original sources which could transmit some of the same value might be:
Probability Theory: The Logic of Science, or some other probability theory textbook
An introduction to general semantics
An introduction to CBT, e.g., Feeling Good
Thinking Fast and Slow
How to Measure Anything
Surely You’re Joking Mr Feynman
Stranger in a Strange Land, David’s Sling, The Moon is a Harsh Mistress.
Antifragile/Black Swan
The Elephant in the Brain
Superforecasting, making 100 forecasts and keeping track each week.
The Rationality Quotient, or some other Keith Stanovich book.
Some intro to nonviolent communication
Because Yudkowsky’s sequences are so long, and because I think that there is more value in reading the original sources, I’d probably lean towards recommending those instead.
Huh, I think this list of books covers less than half of the ideas in the sequences, so I don’t really think this counts as “the original sources”. Topics that get pretty extensively covered in the sequences but are absent here:
Evolutionary biology + psychology
AI Existential Risk
Metaethics & fragility of value
Something about courage/willing to do hard things/Something to protect (a recurring theme in the sequences and absent in all of the above)
Decision theory
Lots of other stuff.
Like, I don’t know, let’s look at some randomly selected sequence on LessWrong:
This sequence is particularly focused on noticing confusion and modeling scientific progress. None of the books you list above really cover that at all (The Logic of Science maybe the most, but it’s really not its core focus, and is also very technical and has giant holes in the middle of it due to its unfinished nature).
I have read all of the books/content you link above, and I don’t think they really have that much overlap with the content of the sequences. I don’t expect someone who has read them to have gotten close to most of the value of reading the sequences, so I don’t currently think this is a good recommendation.
This comment made me more sceptical about reading the sequences. I don’t think I can view anyone as an expert on all these topics. Is there a “best of” selection of the sequences somewhere?
I can’t speak to Yudkowsky’s knowledge of physics, economics, psychology, etc., but as someone who studies philosophy I can tell you his philosophical segments are pretty weak.
It’s clear that he hasn’t read a lot of philosophy and he is very dismissive of the field as a whole. He also has a tendency to reinvent the wheel (e.g. his ‘Requiredism’ is what philosophers would call compatibilism).
When I read the sequences as a teenager I was very impressed by his philosophy, but as I got older and started reading more I realized how little he actually engaged with criticisms of his favorite theories, and when he did he often only engaged with weaker criticisms.
If you want some good introductory texts on philosophy, as well as criticism of and alternatives to some of his/rationalists’ most central beliefs, e.g. physicalism, correspondence theory, scientific realism, the normativity of classical logic (all of which I have rejected as of the moment of this writing), then I highly recommend the Stanford Encyclopedia of Philosophy.
In fairness, my memory of the philpapers survey is that there is more consensus amongst professional philosophers on scientific realism than on almost any other philosophical theory. (Though that’s going by the old survey, haven’t looked at the more recent one yet.) Although of course there are prominent philosophers of science who are anti-realist.
True, here are the results you’re talking about:
His views are moderately popular in general with:
51.37% accept or lean towards correspondence
51.93% accept or lean towards physicalism
30.56% accept or lean towards consequentialism
53.64% accept or lean towards classical logic (although that doesn’t tell us whether the philosophers think it has normative force).
I will say that PhilPapers has a rather small sample size and mostly collects data on English-speaking philosophers, so I find it probable that these results are not representative of philosophers as a whole.
That’s true, I would only really trust the survey for what analytic philosophers think.
I did not say that the sequences cover all content in these books! I mean, they are quite long, so they cover a lot of adjacent topics, but I would not claim that the sequences are the canonical resource on all of these.
Eliezer isn’t (to my knowledge) an expert on, say, evolutionary biology. Reading the sequences will not make you an expert on evolutionary biology either.
They will, however, show you how to make a layman’s understanding of evolutionary biology relevant to your life.
I agree that my book list is incomplete, and it was aimed more at topics that the OP brought up.
For each of the additional topics you mentioned, it doesn’t seem like Yudkowsky’s Sequences are the best introduction. E.g., for decision theory I got more out of reading a random MIRI paper trying to formalize FDT. For AI x-risk in particular, it would surprise me if you would recommend the sequences rather than some newer introduction.
Yeah, I think the best TDT/FDT/LDT material in particular is probably MIRI papers. The original TDT paper is quite good, and I consider it kind of part of the sequences, since it was written around the same time and in a pretty similar style.
Nope, still think the sequences are by far the best (and indeed most alignment conversations I have with new people who showed up in the last 5 years tend to consist of me summarizing sequences posts, which has gotten pretty annoying after a while). There is of course useful additional stuff, but if someone wanted to start working on AI Alignment, the sequences still seem by far the best large thing to read (there are of course individual articles that do individual things best, but there isn’t really anything else textbook shaped).
What are the core pieces about AI risk in the sequences? Looking through the list, I don’t see any sequence about AI risk. Yudkowsky’s account on the Alignment Forum doesn’t have anything more than six years old, aka nothing from the sequences era.
Personally I’d point to Joe Carlsmith’s report, Richard Ngo’s writeups, Ajeya Cotra’s writeup, some of Holden Karnofsky’s writing, Concrete Problems in AI Safety and Unsolved Problems in ML Safety as the best introductions to the topic.
The primary purpose of the sequences was to communicate the generators behind AI risk and to teach the tools necessary (according to Eliezer) to make progress on it, so references to it are all over the place, and it’s the second most central theme of the essays.
Later essays in the sequences tend to have more references to AI risk than earlier ones. Here is a somewhat random selection of ones that seemed crucial when looking over the list, though this is really very unlikely to be comprehensive:
Ghosts in the Machine
Optimization and the Intelligence Explosion
Belief in Intelligence
The Hidden Complexity of Wishes
That Alien Message (I think this one is particularly good)
Dreams of AI Design
Raised in Technophilia
Value is Fragile
There are lots more. Indeed, towards the latter half of the sequences it’s hard not to see an essay quite straightforwardly about AI Alignment every 2-3 essays.
My guess is that he meant the sequences convey the kind of more foundational epistemology that helps people derive better models on subjects like AI Alignment by themselves, though all of the sequences in The Machine in the Ghost and Mere Goodness have direct object-level relevance.
Excepting Ngo’s AGI safety from first principles, I don’t especially like most of those resources as introductions exactly because they offer readers very little opportunity to test or build on their beliefs. Also, I think most of them are substantially wrong. (Concrete Problems in AI Safety seems fine, but is also skipping a lot of steps. I haven’t read Unsolved Problems in ML Safety.)
Out of curiosity, is this literally true? In particular, have you read David’s Sling?
I have read a good chunk of David’s Sling! Though it didn’t really click with me a ton, and I had already been spoiled on a good chunk of it because I had a bunch of conversations about it with friends, so I didn’t fully finish it.
For completeness’ sake, here is my reading state for all of the above:
Probability Theory: The Logic of Science, or some other probability theory textbook
Read
An introduction to general semantics
I read a bunch of general semantics stuff over the years, but I never really got into it, so a bit unclear.
An introduction to CBT, e.g., Feeling Good
Yep, read Feeling Good
Thinking Fast and Slow
Read
How to Measure Anything
Read
Surely You’re Joking Mr Feynman
Read
Stranger in a Strange Land, David’s Sling, The Moon is a Harsh Mistress.
I’ve read more than 30% of all three. I think I finished Stranger in a Strange Land and The Moon is a Harsh Mistress, but I honestly don’t remember.
Antifragile/Black Swan
Yep, read both
The Elephant in the Brain
Read like 50% of it, but got bored because a lot of it was covering Overcoming Bias stuff that I was familiar with.
Superforecasting, making 100 forecasts and keeping track each week.
Read Superforecasting. Have made 100 forecasts, though haven’t been that great at keeping track (a small tracking sketch is below, after this list).
The Rationality Quotient, or some other Keith Stanovich book.
Read 30% of it, then stopped because man, I think that book really was a huge disappointment. Would not recommend reading. See also this review by Stuart Ritchie: https://twitter.com/stuartjritchie/status/819140439827681280?lang=en
Some intro to nonviolent communication
Read Nonviolent Communication
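Since the keeping-track part is the bit I have been dropping, here is a minimal sketch (with hypothetical forecasts) of the kind of log plus scoring rule that forecast tracking usually boils down to, using the Brier score (lower is better):

```python
# A minimal forecast log (the forecasts below are hypothetical examples).
# Each entry: question, probability assigned, outcome (1 = happened, 0 = didn't).
forecasts = [
    ("It rains here next Tuesday", 0.70, 1),
    ("I finish the book I'm currently reading this month", 0.90, 0),
    ("Friend's paper gets accepted", 0.40, 1),
]

# Brier score: mean squared error between stated probabilities and outcomes.
brier = sum((p - outcome) ** 2 for _, p, outcome in forecasts) / len(forecasts)
print(f"Brier score over {len(forecasts)} forecasts: {brier:.3f}")
# Always answering 0.5 scores 0.25, so the goal is to beat that over many forecasts.
```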
Thanks for the comment and the list of recommendations. I have read most of the things on that list; the ones I did read I thought were great, and I have recommended a bunch of them to others, especially Probability Theory: The Logic of Science, The Elephant in the Brain, and forecasting practice. I agree that there are some dangers in recommending something that is pretty packaged, but I think there is an obvious benefit in that it feels like a distilled version of reading a bunch of valuable things. Per unit time, I found reading the sequences more insightful and useful to me than the sources they draw their ideas from (and I am fairly confident this would hold even if I had read those sources before the sequences).
I don’t want to oversell the sequences; the ideas in them have been mentioned in other places earlier. In my post, I mentioned specifically which ideas I found valuable so that people who are already familiar with them, or think they are not that useful, can decide not to read them. That isn’t rhetorical: I have some wise friends who are pretty well-read in philosophy and economics, and a lot of the things in the sequences that I found novel were already familiar to them.
My recommendation would look very different for someone who read the sequences and then made that their whole personality. I do know of some people who overrate them, but in some of my specific circles (EA university groups, for example), I think they are underrated or not given a chance, which is why I wrote this post.
Would you recommend Probability Theory: The Logic of Science to people with little math background?
Well, if you are uncertain, note that experimentation here is very cheap, because you can download a copy of each book from an online library and quickly skim it to get a sense. So I’d recommend that.
I agree that the sequences have lots of interesting ideas and well-written articles.
However, I think it’s worth noting that they were primarily written by one guy. When one person is writing on such widely disparate topics as quantum physics, morality, decision theory, philosophy of science, future predictions, theory of mind, etc, without a formal education in any of these topics, it’s understandable that flawed ideas and errors will creep in. As an example, the article “I defy the data” gives a ridiculously incorrect picture of how actual modern scientists operate.
I think the sequences are okay intro points for a number of topics, but they should not be treated as the foundation of one’s belief system, and should be supplemented with domain-specific knowledge from experts in each of the fields mentioned, including critiques of the sequences and “rationalism” as a movement.
>I think the sequences are okay intro points for a number of topics, but they should not be treated as the foundation of one’s belief system
I’d say the exact opposite—they are a great foundation that for the most part helps form a coherent worldview, rather than just getting bits and pieces from everywhere and not necessarily connecting them, but you can go explore further in many directions for a more in-depth (and sometimes more modern) perspective.
To be honest, I don’t really see the appeal of the “lesswrong worldview”. It just seems to be the scientific worldview with a bunch of extra ideas of varying and often dubious quality added on. It all comes from one guy with a fairly poor track record of correctness. It seems like a fun social/hobby group more than anything else.
I don’t want to be overly negative because I know LW played a big part in bringing EA up and did originate some of the ideas here. Unfortunately, I think that social dynamic has also probably led to the LW ideas being overrated in the EA community.
The post you linked to literally admits to cherry-picking negative examples only (see quote below), it should not be cited as evidence for a ‘fairly poor track record’.
It’s pretty ridiculous to expect someone to go through a complete accounting exercise of every statement someone has ever made before expressing an opinion like that, and I’m guessing it’s not a standard you hold for criticism of anyone else. The cited articles provided plenty of examples of Yudkowsky being extremely wrong and refusing to acknowledge his mistakes, which matches my experience of his writings after years of familiarity with them.
My main point is that I have no reason to hold the opinions of Yudkowsky in higher esteem than those of any other successful pop-science writer like Neil deGrasse Tyson or Richard Dawkins or whoever. I find it concerning and a little baffling how much influence this one guy has over EA.
If your headline claim is that someone has a “fairly poor track record of correctness”, then I think “using a representative set of examples” to make your case is the bare-minimum necessary for that to be taken seriously, not an isolated demand for rigor.
A lot of the people who built effective altruism see it as an extension of the LessWrong worldview, and think that that’s the reason why EA is useful to people where so many well-meaning projects are not.
Some random LessWrong things which I think are important (chosen because they come to mind, not because they’re the most important things):
The many people in EA who have read and understand Death Spirals (especially Affective Death Spirals and Evaporative Cooling of Group Beliefs) make EA feel safe and like a community I can trust (instead of feeling like a tiger I could choose to run from or ride, the way most large groups of humans feel to people like me). The many (and counting) people in EA who haven’t read Death Spirals make me nervous—we have something special here, and most large groups are not safe.
The many people in EA who aim to explain rather than persuade, and who are clear about their epistemic status, make me feel like I can frictionlessly trust their work as much as they do, without being fast-talked into something the author is themself uncertain about (but failed to admit their uncertainty over because that’s not considered good writing).
(The post by Ben Garfinkel linked above (the one that admitted up front that it was trying to argue a position and was happy to distort and elide to that end, which was upvoted to +261) contributed to a growing sense of unease. We have something special here, and I’d like to keep it.)
Thought experiments like true objections and least convenient possible worlds swimming around the local noosphere have made conversations about emotionally charged topics much more productive than they are in most corners of the world or internet.
...I was going to say something about noticing confusion and realized that it was already in Quadratic Reciprocity’s post that we are in the replies to. I think that the original post pretty well refutes the idea that the LessWrong mindset is just the default scientific mindset with relatively minor things of dubious usefulness taped on? So I’ll let you decide whether to respond to this before I write more in the same vein as the original post, if the original post was not useful for this purpose.
I’ve read a decent chunk of the sequences; there are plenty of things to like about them, like the norms of friendliness and openness to new ideas you mention.
But I cannot say that I subscribe to the lesswrong worldview, because there are too many things I dislike that come along for the ride. Chiefly, it seems to foster a sense of extreme overconfidence in beliefs about fields people lack domain-specific knowledge about. As a physicist, I find the writings about science to be shallow, overconfident and often straight up wrong, and this has been the reaction I have seen from most experts when lesswrong touches on their field. (I will save the extensive sourcing for these beliefs for a future post.)
I think that EA as a movement has the potential to take the good parts of the lesswrong worldview while abandoning the harmful parts. Unfortunately, I believe too much of the latter still resides within the movement.
I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist stuff singularly useful for reducing overconfidence, with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure-states in detail, and planning for being wrong.
I could defend this at length, but it’s hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.
Perhaps it has worked for you in reducing overconfidence, but it certainly hasn’t worked for Yudkowsky. I already linked you the list of failed prognostications, and he shows no sign of stopping, with the declaration that AI extinction has probability ~1.
I have my concerns about calibration exercises in general. I think they let you get good at estimating short-term, predictable events and toy examples, which then gives you overconfidence in your beliefs about long-term, unpredictable events.
I don’t expect you to dig up a million links when I’m not doing the same. I think it’s important to express these opinions out loud, lest we fall into a false impression of consensus on some of these matters. It is important to me… I simply don’t agree with you.
I don’t think my original post was good at conveying the important bits—in particular, I think I published it too quickly and missed out on elaborating on some parts that were more time-consuming to explain. I like your comment and would enjoy reading more.
There is a good Cold Takes blog post on the ‘Bayesian mindset’, which gets at something related to this as a ‘~20-minute read rather than the >1000 pages of Rationality: A-Z (aka The Sequences).’
Hello there! I found your post and the comments really interesting (as soon as I finish writing this, I will be checking The Best Textbooks on Every Subject list in LW), but would like to contribute an outsider’s 2¢, as I have only recently discovered and started to take an interest in EA. The thing is, without trying to be disrespectful, that this Rationalist movement that possibly led many of you to EA feels really, really, really weird and alien at first glance, like some kind of nerdy, rationalist religion with unconventional and controversial beliefs (polyamory or an obsession with AI) and a guru who does not appear to be well-known and respected as a scientist outside of his circle of followers, and whose main book seems to be a fanfiction-esque rewrite of a Harry Potter book with his ideas intertwined. I repeat that I do not mean this as an evaluation (it is probably ‘more wrong’, if you’ll allow the pun), but from an external perspective, it almost feels like some page from a book with entries on Scientology and Science Fiction. I feel that pushing the message that you have to be a Rationalist or Rationalist-adjacent as a prerequisite to really appreciate and value EA can very easily backfire.
Given that the Sequences are as long as you say, perhaps even a selection might not be the best way to get people interested if their interest isn’t already piqued or they have a disproportionate amount of free time on their hands. Like, if a Marxist comes and tells you that you need to read through the three volumes of Capital and the Grundrisse before making up your mind on whether the doctrine is interesting, personally relevant, or a good or a bad thing, or if a theologian does the same move and points towards Thomas Aquinas’ very voluminous works, you would be justified in requiring first some short and convincing expository work with the core arguments and ideas to see if they look sufficiently appealing and worth engaging with. Is there something of the kind for Rationalism?
Best regards.
M.
Hello! Welcome to the forum, I hope you make yourself at home.
In this comment Hauke Hillebrandt linked this essay of Holden Karnofsky’s: The Bayesian Mindset. It’s about a half-hour read and I think it’s a really good explainer.
Putanumonit has their own introduction to rationality—it’s less explicitly Bayesian, and somewhat more a paean to what Karnofsky calls “[emphasizing] various ideas and mental habits that are inspired by the abstract idea of [expected utility maximization]”: The Path to Reason
Other things which seem related:
Eliezer’s Sequences and Mainstream Academia: 4 minute read, connecting various Eliezerisms to the academic literature
Every Cause Wants to be a Cult: 3 minute read, essay by Eliezer on why Eliezer doesn’t want you to defer to Eliezer
Expecting Short Inferential Distances: 3 minute read, essay on why explaining things is sometimes hard
Some “rationality in action” type posts from a variety of authors, to demonstrate what it looks like when people try to use the Bayesian mindset (these posts are all Real Hipster, written by contrarians who were subsequently proven correct by common consensus; the purpose of rationality is to be as correct as possible as quickly as possible).
The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom: 14 minute read, about how Amanda Knox was innocent (written before she was imprisoned and later exonerated)
Efficient Charity: Do Unto Others: 8 minute read, about efficient charity (written before “effective altruism” was a term and some years before the Centre for Effective Altruism existed)
Seeing the Smoke: a post about Covid-19, written in February of 2020
Hope you find this useful!
Thanks for the recommendations! I wouldn’t have any issues either with a moderately-sized book (say, from 200-400 pages long).
Cheers.
M.
Many people stand by The Scout Mindset by Julia Galef (though I haven’t myself read it) (here’s a book review of it that you can read to decide whether you want to buy or borrow the book). I don’t know how many pages long it is exactly but am 85% sure it falls in your range.
On the nightstand next to me is Replacing Guilt by Nate Soares—it’s 202 pages long and they are all of them great. You can find much of the material online here, you could give the first few chapters a glance-through to see if you like them.
I’m interested to see which books other people recommend!
Endorsed. A bunch of my friends were recommending that I read the sequences for a while, and honestly I was skeptical it would be worth it, but I was actually quite impressed. There aren’t a ton of totally new ideas in it, but where it excels is honing in on specific, obvious-in-retrospect points about thinking well and thinking poorly, being clear, engaging, and catchy in describing them, and going through a bit of the relevant research. In short, you come out intellectually with much of what you went in with, but with reinforcements and tags put in some especially useful places.
As a caveat, I take issue with a good deal of the substantive material as well. Most notably, I don’t think he always describes those he disagrees with fairly, for instance David Chalmers, and I think “Purchase Fuzzies and Utilons Separately” injected a basically wrong and harmful meme into the EA community (I plan to write a post on this at some point when I get the chance). That said, if you go into them with some skepticism of the substance, you will come out satisfied. You can also audiobook it here, which is how I read it.
I advise against trying to read 1/day blindly, since there are monsters like A Technical Explanation of Technical Explanation (nearly 17k words) that one needs to set aside time for.
I understand the sequences are important to you folks, and I don’t want to seem disrespectful. I have browsed them, and think they contain some good information.
However, I’d recommend going back to books published at least 30 years ago for reads about:
critical thinking
scientific explanation
informal logic
formal logic
decision theory
cybernetics (Ashby, for the AI folks)
statistics and probability
knowledge representation
artificial intelligence
negotiation
linguistic pragmatics
psychology
journalism and research skills
rhetoric
economics
causal analysis
Visit a good used book store, or browse descriptions of older books and print-only editions on the web, or get recommendations that you trust on older references in those areas. You’ll have to browse and do some comparing. Also get 1st editions wherever feasible.
The heuristics that this serves include:
good older books are shorter and smarter in the earlier editions, usually the 1st.
older books offer complete theoretical models and conceptual tools that newer books gloss over.
references from the 20th Century tend to contain information still tested and trusted now.
if you are familiar with newer content, you can notice how content progressed (or didn’t).
old abandoned theories can get new life decades later; it’s fun to find the prototypical forms.
most of the topics I listed have core information or skills developed in the 20th century.
it’s a nice reminder that earlier generations of researchers were very smart as well.
some types of knowledge are disappearing in the age of the internet and cellphone. Pre-internet sources still contain write-ups of that knowledge.
it’s reassuring that you’re learning something whose validity and relevance isn’t versioned out.
NOTE: Old books aren’t breathless about how much we’ve learned in the last 20 years, or how the internet has revolutionized something. I’d reserve belief in that for some hard sciences, and even there, if you want a theory introduction, an older source might serve you better.
If you don’t like print books, you can use article sources online and look at older research material. There are some books from the ’90s available on Kindle, hopefully, but I recommend looking back to the ’70s or even earlier. I prefer the academic writing style mostly found after the ’70s; I find older academic texts a bit hard to understand sometimes, but your experience could be different.
Visiting an old book store and reading random old books on these topics is a horrible idea and I disrecommend it wholeheartedly. The vast majority of random books are crap, and old books are on average maybe even more crap (caveat that the good ones are the ones we still hear about more), since even more of their content has been disproven or superseded by now.
I doubt that most people would know which authors to look for among those who wrote books in the late 20th century in the fields I listed, particularly when the theory remains unchanged now or, as in the case of Ashby’s book, is not available in any modern textbook.
I believe that:
good material from decades ago doesn’t necessarily appear in new texts.
new material isn’t always an improvement, particularly if it reflects “the internet era”.
what is offered as “new material” isn’t always so new.
One concern with modern textbooks is pedagogical fads (for example, teaching formal logic with a software program or math with a TI calculator). I support pen and paper approaches for basic learning over TI calculators and software packages. Older textbooks offer more theory than current ones. Older textbooks are usually harder. Dover math books are one example where unchanged theory written up in older texts is still appreciated now.
It doesn’t take a lot of learning to find useful 20th Century books about linguistics, scientific reasoning, rhetoric, informal logic, formal logic, and even artificial intelligence. Yes, there was AI material before neural networks and machine learning, and it still has utility.
For most people, a random search at a decent used bookstore can turn up popular titles with good information. A random search by topic in an academic library, or in a used bookstore that accepts used academic titles (which used to be common, but is becoming more rare), can turn up some amazing finds. I do recommend all those approaches, if you like books and are patient enough to try it. Otherwise, I suggest you look into older journal articles available in PDF format online.
It’s just one approach, and takes some trial and error. You need to examine the books, read the recommendations, figure out who published it and why, get to know the author, and read the preface and foreword, so it takes some patience. It can help to start with an older book and then visit the new material. When I started doing that is when I noticed that the new material was sometimes lesser quality, derivative, or the same content as older material.
The most precious finds are the ones that are nowhere to be found now. Yes, sometimes that’s because they’re crap, but sometimes that’s because they’re really good and people ignored them in spite of, or because of, that.
EDIT: I also find that reading from a book offers a visceral experience and steadier pace that digital reading can lack.
What would you say is the core message of the Sequences? Naturalism is true? Bayesianism is great? Humans are naturally very irrational and have to put effort if they want to be rational?
I’ve read the Sequences almost twice. The first time was fun because Yudkowsky was optimistic back then, but during the second time I was constantly aware that Yudkowsky believes along the lines of his ‘Death with dignity’ post that our doom is virtually certain and he has no idea how to even begin to formulate a solution. If Yudkowsky, who wrote the Sequences on his own, who founded the modern rationalist movement on his own, who founded MIRI and the AGI alignment movement on his own, has no idea where to even begin looking for a solution, what hope do I have? I probably couldn’t do anything comparable to those things on my own even if I tried my hardest for 30 years. I could thoroughly study everything Yudkowsky and MIRI have studied, which would be a lot, and after all that effort I would be in the same situation Yudkowsky is right now—no idea where to even begin looking for a solution and only knowing which approaches don’t work. The only reason to do it is to gain a fraction of a dignity point, to use Yudkowsky’s way of thinking.
To be clear, I don’t have a fixed model in my head about AI risk, I think I can sort of understand what Yudkowsky’s model is and I can understand why he is afraid, but I don’t know if he’s right because I can also sort of understand the models of those who are more optimistic. I’m pretty agnostic when it comes to this subject and I wouldn’t be particularly surprised by any specific outcome.
This post got some flak and I am not sure if it actually led to more EAs seriously considering engaging with the Sequences. However, I stand by the recommendation even more strongly now. If I were in a position to give reading recommendations to smart young people who wanted to do big, impactful things, I would recommend the Sequences (or HPMOR) over any of the EA writing.