Hello, I’m Devin, I blog here along with Nicholas Kross. Currently working on a bioethics MA at NYU.
Devin Kalish
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one typical of, and even to an extent uniquely radical about, EA: the sense at work when Bentham says “each to count for one and none for more than one”, when Sidgwick talks about the point of view of the universe, or when Singer discusses equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading-comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god, just take the L; this behavior is very uncharming.
I think this point is really important. Statements like those mentioned in the post are important, but now that FTX doesn’t look like it’s going to be funding anyone going forward, they are also clearly quite cheap. The discussion we should be having is the higher stakes one, where the rubber meets the road. If it turns out that this was fraudulent, but then SBF makes a few billion dollars some other way, do we refuse that money then? That is the real costly signal of commitment, the one that actually makes us trustworthy.
@throwaway151 I recommend editing this post to include a link to this comment in its body (and maybe changing the title). At this point it seems like it’s Torres’ word against Cremer’s, and given this I see no reason to default to Torres’ side/interpretation. For people who won’t read the comments that carefully this seems important, especially since this post looks quiet enough now that it’s unlikely this comment will be upvoted above the current top comment, which has karma in the triple digits. On the last point, I stand corrected.
I really liked this one. Between this, the New Yorker piece, and Dylan Matthews’ Vox one, there’s been an unusual amount of nuanced, high-quality coverage of EA in mainstream outlets lately imo.
For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments, but a total karma of 1203! Compare this to Emile Torres or Erik Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs; I remember one memorable occasion when someone else was accused by a prominent EA, on basically no evidence, of being your sock-puppet. This is just to say, I hope you don’t get too discouraged by this. Overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.
One theory that I’m fond of, both because it has some explanatory power and because, unlike other theories with explanatory power, it is useful to keep in mind and not based so directly on misconceptions, goes like this:
1. A social group with a high cost of exit can afford to raise the cost of staying. That is, if it would be very bad for you to leave a group you are part of, the group can more successfully pressure you to be more conformist, work harder in its service, and tolerate weird hierarchies.
2. What distinguishes a cult, or at least one of the most important things that distinguishes it, is that it is a social group that manually raises the cost of leaving in order to also raise the cost of staying. For instance, it relocates people, makes them cut off other relationships, etc.
3. Effective Altruism does not manually raise the cost of leaving for this purpose, and neither have I seen it really raise the cost of staying. Even more than in most social groups I have been part of, being critical of the movement, holding ideas that run counter to central dogmas, and being heavily involved in competing social groups are all tolerated or even encouraged. However,
4. The cost of leaving for many Effective Altruists is high, and much of it is self-inflicted. Effective Altruists like to live with other Effective Altruists, make mostly Effective Altruist close friends, enter romantic relationships with other Effective Altruists, work at Effective Altruist organizations, and believe idiosyncratic ideas mostly found within Effective Altruism. Some of this is out of a desire to do good; speaking from experience, much of it is because we are weirdos who are most comfortable hanging out with people who are similar types of weirdos to us, and who have a hard time with social interactions in general. Therefore,
5. People looking in sometimes see things from point 4, the things that contribute to the high cost of leaving, and even if they can’t put what’s cultish about it into words, they worry about possible cultishness, and they don’t know the stuff in point 3 viscerally enough to be dissuaded of this impression. Furthermore, even if EA isn’t a cult, point 4 is still important, because it increases the risk of cultishness creeping up on us.
Overall, I’m not sure what to do with this. I guess be especially vigilant, and maybe work a little harder to have as much of a life as possible outside of Effective Altruism. Anyway, that’s my take.
These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one will lead people to give it more credit than critiques that were at least as substantively right but much more harshly phrased.
The point about ideologies being a minefield, with Nazis as an example, particularly stands out to me. I pattern match this to the parts of harsher critiques that go something like “look at where your precious ideology leads when taken to an extreme, this place is terrible!” Generally, the substantive mistake these make is casting EA as ideologically purist while ignoring the centrality of projects like moral uncertainty and worldview diversification, as well as EAs’ limited willingness to bite bullets whose background logic they largely endorse in principle (see Pascal’s Mugging and Ajeya Cotra’s train to crazy town).
By not telling us what terrible things we believe, but merely implying that we are at risk of believing terrible things, this piece is less unflattering, but it is on shakier ground. It involves the same mistake about EA’s ideological purism, but on top of this it has to defend a higher-level claim rather than pointing to concrete implications.
Was the problem with the Nazis really that they were too ideologically pure? I find it very doubtful. The philosophers of the time attracted to them were generally weird humanistic philosophers with little interest in the kinds of purism that come from analytic ethics, like Heidegger, while most philosophers closer to this type of ideological purity (Russell, Carnap) despised the Nazis from the beginning. The background philosophy itself largely drew from misreadings of people like Nietzsche and Hegel, popular antisemitic sentiment, and plain old historical conspiracy theories. Even at the time, intellectual critiques of the Nazis often looked more like “they were mundane and looking for meaning from charismatic, powerful men” (Arendt) or “they aestheticized politics” (Benjamin) rather than “they took some particular coherent vision of doing good too far”.
The truth is that the lesson of history isn’t really “moral atrocity is caused by ideological consistency”. Occasionally atrocities are initiated by ideologically consistent people, but they have also been carried out casually by people who were quite normal for their time, or by crazed ideologues who didn’t have a very clear, coherent vision at all. The problem with the Nazis, quite simply, is that they were very, very badly wrong. We can’t avoid making the mistakes they did from the inside by pattern matching onto them aspects of our logic that really aren’t historically vindicated; we have to avoid moral atrocity by finding more reliable ways of not winding up being very wrong.
I almost never engage in karma voting because I don’t really have a consistent strategy for it that I’m comfortable with, but I just voted on this one. Karma voting in general has been kind of confusing to me lately, but I feel like I have noticed a significant amount of wagon-circling recently. How critical a post was of EA didn’t use to be very predictive of its karma, but since around the Bostrom email it has become much more so. Write something in defense of EA and you get mostly upvotes, potentially into the triple digits. Write something negative and you get very mixed to net-negative voting, and, if it reaches high enough karma, possibly even more comments. Hanania’s post on how EA should be anti-woke just got downvoted into the ground twice in a row, so I don’t think the voting reflects much ideological change by comparison (being very famous in EA is also moderately predictive, which is probably some part of the Aella post’s karma at least, and is a more mundane sort of bad, I guess).
I’m still hopeful this will bounce back in a few hours, as I often see happen, but I still suspect the overall voting pattern will be a karmic tug of war at best. I’m not sure what to make of this. Is it evaporative cooling? Are the same people just exhausted and taking it out on the bad news? Is it that the same people who were upvoting criticism before are exhausted and just not voting much at all, leaving the karma to the naysayers? (I doubt this last one because of the voting patterns on moderately high-karma posts of the tug-of-war variety, but it’s the sort of thing that makes me worry about my own voting: I don’t even need to vote wrong to vote in a way that creates unreasonable disparities, based just on what I’m motivated to vote on at all, and voting on everything is obviously infeasible.) Regardless, I find it very disturbing; I’m used to EA being better than this.
I very rarely engage in karma voting, and didn’t do so for this comment either. That said, one relevant point is that the comment with the most karma gets to sit at the top of the comments section. That means many people probably vote with the intention of functionally “pinning” a comment, and it may not be so much that they think the comment represents the most important reaction to a post as that they think it provides crucial context for readers. I think this comment does provide context on the part of this otherwise very good and important post that made me most uncomfortable as stated. I also agree that Alexander’s tone isn’t great, though I read it in almost the opposite way from you (as an emotional reaction in defense of his friends who came forward about Forth).
Hello Will, I’m really enjoying the book so far (it hit shelves early since it wasn’t strict-on-sale, so I got it a few days ago)! I have noticed that there’s been a big push from your team for large-scale media attention positively presenting your views on longtermism. I was wondering if the team has a strategy for negative publicity as well? This has been something I’ve been worried about for a while, as I think our movement is small enough that much of what people think about us will come from what outsiders decide to say about us, and my impression of EA’s recent media strategy has been that it rarely publishes response pieces to negative attention outside of Twitter or the forum. I’m worried that this strategy is a mistake, and that it will be especially problematic in the wake of the massive media attention EA and longtermism are getting now. I’m wondering if there is any worked-out strategy for this issue so far, and if so, roughly what it is?
I’ve been wondering for years, ever since reading about Zakat, why there hasn’t been much in the way of EA outreach to the Muslim community. I’m thrilled to see it finally happening!
I disagree with this pretty strongly, and have been worried about this type of view in particular quite a bit recently. It seems as though a standard media strategy for EAs is: if someone publishes a hit piece on us somewhere, whether obscure or prominent, just ignore it and “respond” by presenting EA ideas better elsewhere to begin with. This is a way of being positive rather than negative in interactions and avoiding signal-boosting bad criticisms. I don’t know how to explain why I have such a different impression, or why so many smart people seem to disagree with me, but this looks to me like an intuitively terrible, obvious mistake.
I don’t know how to explain why it feels so clear to me, but if someone is searching around and finding arguments that EA is a robot cult, or secretly run by evil billionaires, or some other harsh, misleading critique, and nothing they find in favor of EA written for a mainstream audience even acknowledges these critics, instead just presenting some seemingly innocuous face of EA, the net takeaway will tend towards “EA is a sinister group all of these people have been trying to blow the whistle on”. Basically all normal social movements have their harsh critics, and even if they don’t always respond well to them, they almost all respond to them as publicly as possible.
The excuse that the criticisms are so bad that they don’t deserve the signal (which, to be clear, isn’t one this particular post is arguing) also leads me to think this norm encourages bad epistemics, and it provides a fully general excuse. I tend to think that bad criticisms of something obscure like EA are generally quite easy for EAs to write persuasive debunking pieces on, so either a public criticism is bad enough that publicly responding is worth the signal boost you give the original piece, or it is good enough that it deserves the signal. Surely some portion of criticisms are neither, and are hard to argue persuasively against while still being bad, but we shouldn’t orient the movement’s entire media strategy around those. I wholeheartedly agree with this comment.
If some EA ever had the opportunity to write a high-quality response like Avital’s, or to be blunt almost any okay response, to the Torres piece in Aeon or Current Affairs, or for that matter in the WSJ to its recent hit piece, I think it would be a really, really good idea to do so; the EA Forum is not a good enough media strategy. ACX is easy mode for this: Alexander himself is sympathetic to EA, so his main text isn’t going to be a hit piece, the harsher points in the comments are ones people can respond to directly, and he will even directly signal-boost the best of these counter-criticisms, as he did. I am very scared for the EA movement if even this looks like a scary amount of daylight.
This is something I’ve become so concerned about that I’ve been strongly considering posting an edited version of a trialogue I had with some other EAs in an EA chat, where we tried to get to the bottom of these disagreements (though I’ve been too busy recently), but I just wanted to use this comment as a brief opportunity to register this concern a bit in advance as well. If I am wrong, please convince me; I would be happy to be dissuaded of this, but it is a very strong intuition of mine that this strategy does not end well for either our community health or our public perception.
I’m really heartened by this, especially some of the names on here I independently admired who haven’t been super vocal about the issue yet, like David Chalmers, Bill McKibben, and Audrey Tang. I also like certain aspects of this letter better than the FLI one. Since it focuses specifically on relevant public figures, rapid verification is easier and people are less overwhelmed by sheer numbers. Since it focuses on an extremely simple but extremely important statement it’s easier to get a broad coalition on board and for discourse about it to stay on topic. I liked the FLI one overall as well, I signed it myself and think it genuinely helped the discourse, but if nothing else this seems like a valuable supplement.
I think this is actually a central question that is relatively unresolved among philosophers, but it is my impression that philosophers in general, and EAs in particular, lean in the “making happy people” direction. I think of there as being roughly three types of reason for this. One is that views of the “making people happy” variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e. something like A>B>C>A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives. This avoids some problems, but leaves you with something very structurally weird, or even absurd to some. I think Larry Temkin has a good quote about it, something like “I will have the chocolate ice cream, unless you have vanilla, in which case I will have strawberry”.
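To make the structural point concrete, here is one way to write the ice-cream example as a choice function over menus (the notation is my own gloss, not Temkin’s):

$$C(\{\text{chocolate},\ \text{strawberry}\}) = \text{chocolate}, \qquad C(\{\text{chocolate},\ \text{strawberry},\ \text{vanilla}\}) = \text{strawberry}$$

Adding vanilla to the menu flips the choice between chocolate and strawberry, which is a direct violation of the independence of irrelevant alternatives, and notice that it does this without any binary cycle like A>B>C>A ever appearing.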
The second reason is the non-identity problem, formalized by Derek Parfit. Basically, the issue it raises is that almost all of our decisions that impact the longer-term future also change who gets born, so a standard person-affecting view seems to allow us to do almost anything to future generations: use up all their resources, bury radioactive waste, you name it.
The third maybe connects more directly to why EAs in particular often reject these views. Most EAs subscribe to a sort of universalist, beneficent ethics, which seems to imply that if something is genuinely good for someone, then that something is good in a more impersonal sense that tugs on ethics for everyone. For those of us who live lives worth living, are glad we were born, and don’t want to die, it seems clear that existence is good for us. If so, this presents a reason for action to anyone who can affect it, given this universal form of ethics. Therefore, it seems we are left with three choices: we can say that our existence actually is good for us, and so it is also good for others to bring it about; we can say that it is not good for others to bring it about, and therefore it is not actually good for us after all; or we can deny that ethics has this omnibenevolent quality. To many EAs, the first choice is clearly best.
I think here is where a standard person-affecting view might counter that it cares about all reasons that actually exist, and if you aren’t born, you don’t actually exist, so a universal ethics on this timeline cannot care about you either. The issue is that without some better narrowing, this argument seems to prove too much. All ethics is about choosing between possible worlds, so just saying that a good only exists in one possible world doesn’t seem like it will help us in making decisions between those worlds. Arguably the most complete spelling out of a view like this looks something like “we should achieve a world in which no reasons for this world not to exist are present, and nothing beyond this equilibrium matters in the same way”. I actually think some variation of this argument is sometimes used by negative utilitarians and people with similar views. A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn’t brought about, so it doesn’t provide reasons to that world in the same way. Equilibrium is already adequately reached when no one is badly off.
This is coherent, but again it proves much more than most people want it to about what ethics should actually look like, so going down that route seems to require some extra work.
I think this has gotten better, but not as much better as you would hope considering how long EAs have known this is a problem, how much they have discussed it being a problem, and how many resources have gone into trying to address it. I think there’s actually a bit of an unfortunate fallacy here, that it isn’t really an issue anymore because EA has gone through the motions of addressing it and had at least some degree of success; see Sasha Chapin’s relevant thoughts on this.
Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and an excessively conscientious personality. Some of it is probably due to the “by-catch” phenomenon the anon below discusses, which comes with applying expected-value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). Some of it is this other, deeper tension that I think Nielsen is getting at:
Many people in Effective Altruism (I don’t think most, but many, including some of the most influential) believe in a standard of morality that is too demanding for it to be realistic for real people to reach it. Given the prevalence of actualist over possibilist reasoning in EA ethics, and just not being totally naive about human psychology, pretty much everyone who does believe this is on board with compartmentalizing do-gooding or do-besting from the rest of their life. The trouble runs deeper than this, unfortunately, because once you buy an argument that letting yourself have something is what will be best for doing good overall, you are already seriously risking undermining its psychological benefits.
Whenever you do something for yourself, there is a voice in the back of your head asking if you are really so morally weak that this particular thing is necessary. Even if you overcome this voice, there is a worse voice that instrumentalizes the things you do for yourself. Buying ice cream? This is now your “anti-burnout ice cream”. Worse, have a kid (if, as in Nielsen’s example, you think this isn’t part of your best set of altruistic decisions), and this is your “anti-burnout kid”.
It’s very hard to get around this one. Nielsen’s preferred solution would clearly be that people just don’t buy this very demanding theory of morality at all, because he thinks it is wrong. That said, he doesn’t really argue for this, and for those of us who actually do think the demanding ideal of morality happens to be correct, it isn’t an open avenue.
The best solution, as far as I can tell, is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind, internalized largely on an academic level, and maybe taken out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. Again, though, the trickiness of this is, I think, a real part of the persistence of some of this problem, and I think Nielsen nails this part.
(edited on 10/24/22 to replace broken link)
Re: “In particular, there is no secret EA database of estimates of effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows””
Yeees, this is such a common first reaction I have found in people first being introduced to Effective Altruism. I always really want to give some beginning of an answer, but feel self-conscious that I can’t give even an honest best guess from what I know without sort of disgracing the movement’s usual standards of rigor, and misrepresenting its usual scope.
For what it’s worth I haven’t gotten around to reading a ton of your posts yet, but me and pretty much everyone I showed your blog to could tell pretty quickly that it was a cut above whatever I might picture just from the title. That said, I think all the changes are good ideas on the whole. Keep up the good work!
This comment captures a lot of my concerns about offsetting arguments in the context of veganism, as well as more generally. Spelled out a bit more, my worry for EAs is that we often:
1. Think we ought to donate a large amount
2. Actually donate some amount that is much smaller than this, but much larger than most people donate
3. Discourage each other from sanctioning people who are donating much more than other people for not donating enough
Offsetting bad acts can presumably fall into the same pool as other donations, which leads to the following issue:
Let’s say that Jerry goes around kicking strangers, and also donates 20% of his income to charity; let’s also stipulate that Jerry really thinks he ought to donate 80% of his income to charity, and that 10% of his income is enough to offset his stranger kicking. Now you might be tempted to criticize Jerry for kicking strangers, but hold on: 10 percentage points of his donations cancel out the stranger kicking. Would we criticize Jerry for donating only 10% of his income to charity? If not, it seems we cannot criticize Jerry. But wait a minute: later we learn that Jerry actually would have donated 30% of his income to charity if he weren’t stranger kicking, so we were wrong; his stranger kicking isn’t canceled out by his donations, it actually makes his donations worse!
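To make the accounting explicit, here is a minimal sketch in Python of the numbers stipulated in the Jerry example (all figures are the hypothetical percentages above, nothing empirical):

```python
# Hypothetical figures from the Jerry example, as shares of income.
actual_donation = 0.20          # what Jerry actually donates
offset_cost = 0.10              # donation share needed to offset the kicking
counterfactual_donation = 0.30  # what Jerry would donate if he didn't kick

# Naive offsetting view: 10 points of the 20% cancel the kicking,
# leaving a "net" donation of 10%, still far more than most people give.
net_donation_naive = actual_donation - offset_cost
print(f"Naive net donation: {net_donation_naive:.0%}")  # 10%

# Counterfactual view: relative to the world where Jerry doesn't kick
# (and donates 30%), the kicking costs 10 points of forgone donations
# on top of the 10 points needed to offset it.
net_effect_of_kicking = (actual_donation - offset_cost) - counterfactual_donation
print(f"Net effect of kicking vs. not kicking: {net_effect_of_kicking:.0%}")  # -20%
```

The two views disagree by a full 30 percentage points on the same behavior, which is the problem: without a fixed anchor for what Jerry “should” donate, either accounting looks defensible.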
Since many EAs have ideal donating thresholds much higher than they will ever reach, we don’t have a default standard to anchor their offsetting to; everything falls short by some significant amount. And since we discourage people from criticizing those who give a good deal but not enough, Jerry wouldn’t get sanctioned much more for donating 10% rather than 30%; the ethics just aren’t high enough resolution for that. The upshot is that Jerries can get away with almost arbitrary amounts of dickish behavior without necessarily doing anything to compensate for it that we could hold them accountable to. Moral hazard and slippery slope arguments can be suspicious, but this is one I am fairly confident is a real problem with offsetting, at least for EAs.
This is a quick PSA: Emile Torres does think “preventing AI from killing everyone is a real and important issue”. The last time this was pointed out to you (that I’m aware of), you clarified that Torres’ disagreement was basically with longtermism. Please, pleeease clarify this in the post; it isn’t remotely how this challenge comes off, and it is borderline spreading misinformation, which is especially bad for important coalition building.
I’m not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions (longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions), but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn’t seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is to be upfront about your views on both the philosophical and the empirical questions, and to expect that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.