Some preliminaries and a claim
November 2022 update: I wrote this post during a difficult period in my life. I still agree with the basic point I was gesturing towards, but regret some of the presentation decisions I made. I may make another attempt in the future.
“Oh, Master, make me chaste and celibate… but not yet!”
– Augustine of Hippo, Confessions
“Surely you will quote this proverb to me: ‘Physician, heal yourself!’ And you will tell me, ‘Do here in your hometown what we have heard that you did in Capernaum.’”
– Luke 4:23
Consider reading these posts before continuing here (this obviously isn’t required reading in any sense, though I will be much more likely to seriously engage with feedback from people who seem to have read and reflected carefully on the following). In rough order of importance:
The purpose of this essay is to clarify to YOU, the mind reading this essay, your relationship to the intellectual communities of Effective Altruism and (Bay Area) Rationality, the characteristic blind spots that arise from participating in these communities, and what might be done about all of this.
I have an intuition that this will be a lengthy post [edit: actually this is probably going to become a sequence of posts, so stay tuned!], and I suspect that you probably feel like you don’t have time to read it carefully, let alone all the prerequisites I linked out to.
That’s interesting, if true… why do you feel like you don’t have time to carefully read this post? What’s going on there?
I can’t control what you do, and I don’t want to (I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place). I do have a humble request, though. As you continue on, perhaps feeling as though you really don’t have time for this but might as well give it a quick skim anyhow to see what Milan has been up to, you may notice a desire to respond to something I say. If so, please don’t respond until you actually have space to read this post, and all the other posts I linked to above, carefully (with care, with space...). If you don’t foresee being able to do this in the immediate future, that’s fine… I just ask that you don’t respond to this post in any way until you do.
Thank you for considering this request… I really appreciate it.
Okay, the preliminaries have been taken care of. Very good. What’s this about, anyway?
I’m going to ramble on for a while, perhaps in follow-on posts, but this is the kernel of it:
I claim that the Effective Altruism and Bay Area Rationality communities, and especially their capital-allocating institutions, have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact. (For EA, those institutions are GiveWell, the Open Philanthropy Project (and its affiliates: 1, 2, 3), this Forum, the Centre for Effective Altruism, the Berkeley Existential Risk Initiative, and the Survival and Flourishing Fund; for Bay Area Rationality, they are LessWrong, CFAR, the Open Philanthropy Project, the Berkeley Existential Risk Initiative, and the Survival and Flourishing Fund.)
This is a deeply rooted mistake, and I have personally participated in making and propagating this mistaken view, largely through my involvement with GiveWell and the Open Philanthropy Project, and to some extent on this Forum and the LessWrong Forum as well. I have done a teensy bit of the same thing on my personal blog too. I am sorry for all of this. I avow my participation in all of it, and I feel shame when I think back on how I acted. I am really, really sorry.
Happily, I am now mostly active on Twitter, where I no longer participate in this mistaken view, so you can communicate with me there with my thorough and complete guarantee that I will only communicate with you as a friend, that I have no ulterior motives when communicating with you there, that I will not infect you with any info hazards or mind viruses (intentionally or unintentionally), and that I will not take anything from you (or try to, consciously or unconsciously) that is not freely given. That is my promise and commitment to you.
(Though we probably won’t have a very fruitful conversation on Twitter or anywhere else until you read all of the above, including the prerequisite posts, in a spacious, careful, open-minded way!)
Just want to briefly join in with the chorus here: I’m tentatively sympathetic to the claim, but I think requiring people to spend several hours reading and meditating on a bunch of other content – without explaining why, or how each ties into your core claim – and then refusing to engage with anyone who hasn’t done so, is very bad practice. I might even say laziness. At the very least, wildly unrealistic – you are effectively filtering for people who are already familiar with all the content you linked to, which seems like a bad way to convince people of things.
Having skimmed the links, it is very non-obvious how many of them tie directly into your claim about the EA community’s relationship with feedback loops. Plausibly if I read and meditated on each of them carefully, I would spot the transcendent theme linking all of them together – but that is very costly, and I am a busy person with no particular ex ante reason to believe it would be a good use of scarce time.
If you want to convince us of something, paying the costs in time and thought and effort to connect those dots is your job, not ours.
I agree that it’s sorta lazy, but I strongly disagree that it is bad practice.
Well, of course you don’t think it’s bad practice, or you wouldn’t have done it.
The interesting question is why, and who’s right.
At the very least, I claim there’s decent evidence that it’s ineffective practice, in this venue: your post and comments here have been downvoted six ways from Sunday, which seems like a worse way to advocate for your claim than a different approach that got upvoted.
We are using very different metrics to track effectiveness.
I’m curious about why this feels very costly to you, and also about how that feeling is showing up for you in your body and in your mind.
I think people are over-downvoting this, but it does seem to carry a kinda unpleasant assertion that I’m making this claim because of some irrational emotional aversion, which I don’t especially appreciate.
I think the claim that it’s very costly is just clearly and straightforwardly true. You’re effectively asking me to commit several hours of my time to this post – an order of magnitude more time than I’d usually spend on even a very good EA Forum post. Free time is scarce and precious, and the opportunity cost of spending so much time on this is very high.
You might claim spending that much time on this is worth it, but the size of the benefit doesn’t change the fact that the demanded cost is very high.
I’m asking you to do this iff you yourself feel that you want to do it.
I’m not asserting that you’re acting from an irrational emotional aversion.
I was genuinely curious about how the very-costly feeling was showing up in your body and in your mind.
I am somewhat sympathetic to this complaint. However, I also think that many of the posts you linked are themselves phrased in terms of very high-level abstractions which aren’t closely coupled to reality, and in some ways exacerbate the sort of epistemic problems they discuss. So I’d rather like to see a more careful version of these critiques.
I feel very similarly FWIW.
From my perspective, the posts I linked out to more-or-less track reality.
I didn’t phrase this as clearly as I should have, but it seems to me that there are two separate issues here: firstly whether group X’s views are correct, and secondly whether group X uses a methodology that is tightly coupled to reality (in the sense of having tight feedback loops, or making clear predictions, or drawing on a lot of empirical evidence).
I interpret your critique of EA roughly as the claim that a lack of a tight methodological coupling to reality leads to a lack of correctness. My critique of the posts you linked is also that they lack tight methodological coupling to reality, in particular because they rely on high-level abstractions. I’m not confident about whether this means that they’re actually wrong, but it still seems like a problem.
Thanks… I think the first issue is much more important, and that’s what I want to focus on here.
The title of this post did not inform me about the claim “that EAs have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact -- [and] this is a deeply rooted mistake.”
I came very close to not reading what is actually an interesting claim I’d like to see explored, because it appears near the end and there was no hint of it in the title or the start of the post. Since it is still relatively early in the life of this post, you may want to consider revising the title and layout to communicate more effectively.
I feel that you are choosing not to honor my request by writing this comment; please DM me if I’m mistaken about that:
I don’t think this is actually a reasonable request to make here?
Why do you think that it is an unreasonable request?
Not Jeff, but I agree with what he said, and here are my reasons:
The feedback Jpmos is giving you is time-sensitive (“Since it is still relatively early in the life of this post...”)
The feedback Jpmos is giving you is not actually about what you said. Rather, it’s about the way you’re communicating it: letting you know that, at least in Jpmos’s case, your chosen method of communication came close to not being effective (unless your goals in writing the post are significantly different from the usual goals of someone writing a post, i.e. to communicate and defend a claim. Admittedly, you say that “I want you to think carefully and spaciously for yourself about what is best, and then do the things that seem best as they come to you from that spacious place”, and maybe that is a significantly different goal from the usual one. But even so, there’s a higher likelihood of readers doing that if they get the claim, or at least the topic, of the post up front.)
This is my goal. This is what I want. This is my intention. (I tried to state it clearly in the post.)
Wouldn’t it be interesting if this goal were significantly different from how goals are usually used?
Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.
Good!
I’m curious about what being extremely confused feels like for you, in your body and in your mind.
https://twitter.com/codinghorror/status/1365404374009225216
The core thesis here seems to be that a cluster of EA and Rationality organizations have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.
There are different ways of unpacking this, so before I respond I want to disambiguate them. Here are four different unpackings:
1. Tight feedback loops are important, [cluster of organizations] could be doing a better job creating them, and this is a priority. (I agree with this. Reality doesn’t grade on a curve.)
2. Tight feedback loops are important, and [cluster of organizations] is doing a bad job of creating them, relative to organizations in the same reference class. (I disagree with this. If graded on a curve, we’re doing pretty well.)
3. Tight feedback loops are important, but [cluster of organizations] has concluded in their explicit verbal reasoning that they aren’t important. (I am very confident that this is false for at least some of the organizations named, where I have visibility into the thinking of decision makers involved.)
4. Tight feedback loops are important, but [cluster of organizations] is implicitly deprioritizing and avoiding them, by ignoring/forgetting discouraging information, and by incentivizing positive narratives over truthful narratives.
(4) is the interesting version of this claim, and I think there’s some truth to it. I also think that this problem is much more widespread than just our own community, and fixing it is likely one of the core bottlenecks for civilization as a whole.
I think part of the problem is that people get triggered into defensiveness; when they mentally simulate (or emotionally half-simulate) setting up a feedback mechanism, if that feedback mechanism tells them they’re doing the wrong thing, their anticipations put a lot of weight on the possibility that they’ll be shamed and punished, and not much weight on the possibility that they’ll be able to switch to something else that works better. I think these anticipations are mostly wrong; in my anecdotal observation, the actual reaction organizations get to poor results followed by a pivot is usually at least positive about the pivot, at least from the people who matter. But getting people who’ve internalized a prediction of doom and shame to surface those models, and do things that would make the outcome legible, is very hard.
(Meta: Before writing this comment I read your post in full. I have previously read and sat with most, but not all, of the posts linked to here. I did not reread them during the same sitting I read this comment.)
Thank you for this thoughtful reply! I appreciate it, and the disambiguation is helpful. (I would personally like to do as much thinking-in-public about this stuff as seems feasible.)
I mean a combination of (1) and (4).
I used to not believe that (4) was a thing, but then I started to notice (usually unconscious) patterns of (4) behavior arising in me, and as I investigated further I kept noticing more & more (4) behavior in me, so now I think it’s really a thing (because I don’t believe that I’m an outlier in this regard).
I agree with this. I think EA and Bay Area Rationality still have a plausible shot at shifting out of this equilibrium, whereas I think most communities don’t (not self-reflective enough, too tribal, too angry, etc.).
Yes, this is a good statement of one of the equilibria that it would be profoundly good to shift out of. Core transformation is one operationalization of how to go about this.
If you downvote (or upvote!) this post without first having read it (and the other posts I linked to) with care and attention, you are explicitly deciding to not honor my request:
And that’s fine, but please know that that is what you are doing.
I think this request undermines how karma systems should work on a website. ‘Only people who have engaged with a long set of prerequisites can decide to make this post less visible’ seems like it would systematically prevent posts people want to see less of from being downvoted.
My request applies to both positive and negative responses.
Right, but “people who are inclined to read many pages worth of material and think deeply on it” very much selects for people who have positive experiences with that material.
I agree!
This request is extremely unreasonable and I am downvoting and replying (despite agreeing with your core claim) specifically to make a point of not giving in to such unreasonable requests, or allowing a culture of making them with impunity to grow. I hope in the future to read posts about your ideas that make your points without such attempts to manipulate readers.
Could you say more about why this request seems extremely unreasonable to you?
Attempting here to respond to & engage with the request, not “this post” in any sense beyond the request itself.
I feel sad that you don’t want me to upvote this post. I would like to increase the number of people who have read & reflected carefully on the things you point to (even though I have not yet read all eight links), and upvoting this post seems like the easiest way for me to do that. Is there some other way I can do that?
If the posts feel juicy, take your time meandering through them (take however long it takes to do so in a fun, comfortable way… could be over the course of months or years).
Once you feel comfortable with the material, think about whether or not you want to engage further with me (and/or others) about it, or engage further plus do some other awesome thing.
And then do what you want to do! While taking your time, and being extremely honest (and gentle) with yourself about what you actually want.
From the post: