Conferences

Last edit: 25 Apr 2022 8:26 UTC

Posts with the conferences tag include discussions of EA conference organizing (application criteria, speaker lists, etc.) and reviews of/lessons learned from past conferences, including but not limited to EA Global and EAGx.

EA Global Tips: Networking with others in mind

28 Oct 2021 7:47 UTC
125 points

How to Make Your First EAG a Success

6 Dec 2021 15:13 UTC
32 points

Tips for asking people for things

5 Apr 2022 20:30 UTC
124 points

What I learnt from attending EAGx Oxford (as someone who’s new to EA)

2 Apr 2022 19:49 UTC
105 points

My closing talk at EAGxSingapore

12 Sep 2022 11:24 UTC
94 points

[Question] Advice for getting the most out of one-on-ones

21 Mar 2020 2:20 UTC
21 points

How to Get the Maximum Value Out of Effective Altruism Conferences

24 Apr 2019 7:57 UTC
83 points

Six Takeaways from EA Global and EA Retreats

16 Dec 2021 21:14 UTC
56 points

Doing 1-on-1s Better—EAG Tips Part II

24 Mar 2022 11:22 UTC
49 points

Save the Date: EAGx LatAm

8 Sep 2022 18:22 UTC
112 points

How to find good 1-1 conversations at EAGx Virtual

12 Jun 2020 16:30 UTC
61 points

Why you should consider going to EA Global

9 May 2017 14:31 UTC
24 points

Reflections on EA Global from a first-time attendee

18 Sep 2016 13:38 UTC
33 points

Event Review: EA Global: London (2018)

17 Dec 2018 22:29 UTC
5 points
(butcantheysuffer.wordpress.com)

Things I Learned at the EA Student Summit

27 Oct 2020 19:03 UTC
149 points

12 Awesome Things You Should Do After EA Global

24 Aug 2015 10:14 UTC
15 points

What I learnt from twenty 1:1s at EAGxOxford

28 Mar 2022 14:18 UTC
82 points

Host an EAG(x) preparation evening for EAG London and EAGx Boston

28 Mar 2022 20:05 UTC
52 points

The Ultimate Guide to EA Conferences (Repost)

29 Jul 2022 19:11 UTC
5 points

Reflections on EA Global London 2019 (Mrinank Sharma)

29 Oct 2019 23:00 UTC
26 points
(mrinanksharma.github.io)

Review of EA Global 2016

20 Sep 2016 15:51 UTC
22 points

Thoughts about organizing an EAGx Conference

27 Jun 2016 21:40 UTC
20 points

EAGxBerkeley 2016 Retrospective

11 Sep 2016 6:27 UTC
18 points

EAGx Boston 2018 Postmortem

24 Jan 2019 23:34 UTC
31 points

Managing COVID restrictions for EA Global travel: My plans + request for other examples

2 Oct 2021 3:14 UTC
41 points

Questions That Lead to Impactful Conversations

24 Mar 2022 17:25 UTC
54 points

Post EAG(x): Making the most out of your connections

4 Aug 2022 1:08 UTC
4 points

Open EA Global

1 Sep 2022 4:26 UTC
400 points

The EA events ecosystem: How to get more involved (as a participant)

27 Jan 2020 11:51 UTC
26 points

Apply Now to the EA Fellowship Weekend! (March 26-28)

20 Feb 2021 0:27 UTC
33 points

Apply to the Stanford Existential Risks Conference! (April 17-18)

26 Mar 2021 18:28 UTC
26 points

EAGxAsia-Pacific 2020 Applications Now Open

30 Sep 2020 9:29 UTC
15 points

Applications are open for EAGxVirtual 2020

16 May 2020 7:56 UTC
24 points

Please post reflections on EA Global: London (500 bounty)

4 Dec 2021 12:41 UTC
25 points

Help CEA plan future events and conferences

9 Dec 2021 19:54 UTC
23 points

[Updated] EA conferences in 2022: save the dates

9 Feb 2022 16:36 UTC
122 points

Apply Now | EA for Christians Annual Conference | 23 April 2022

14 Mar 2022 19:47 UTC
26 points

Free to attend: Cambridge Conference on Catastrophic Risk (19-21 April)

21 Mar 2022 13:23 UTC
19 points

EAG is over, but don’t delete Swapcard

23 Apr 2022 16:45 UTC
54 points

Three Reflections from 101 EA Global Conversations

25 Apr 2022 22:02 UTC
121 points

Preparing for EAGxPrague

3 May 2022 16:21 UTC
10 points

Death to 1 on 1s

21 May 2022 8:12 UTC
5 points

CEA’s events team: capacity building and mistakes

3 Nov 2021 8:08 UTC
44 points

Announcing Future Forum—Apply Now

6 Jul 2022 17:35 UTC
92 points

Fauna Connections—a free remote symposium for animal advocates

3 Aug 2022 21:57 UTC
9 points

Apply now—EAGxVirtual (21-23 Oct)

12 Sep 2022 16:51 UTC
94 points

How CEA approaches applications to our programs

4 Nov 2022 19:02 UTC
71 points

Save the dates: 2021 EA conferences

14 Jan 2021 14:07 UTC
40 points

Apply now to be an EAGx event organizer

25 Jan 2021 14:19 UTC
10 points

Apply now for EA Global: Reconnect (March 20-21)

10 Feb 2021 10:55 UTC
14 points

Good Done Right conference

4 Feb 2020 13:21 UTC
42 points
(www.stafforini.com)

What Questions Should We Ask Speakers at the Stanford Existential Risks Conference?

10 Apr 2021 0:51 UTC
21 points

The first AI Safety Camp & onwards

7 Jun 2018 18:49 UTC
25 points

Save the Date for EA Global Boston and San Francisco

4 Mar 2017 3:29 UTC
20 points

EA Global Lightning Talks (San Francisco 2018)

30 Nov 2018 21:14 UTC
20 points

EAG survey data analysis

24 Jan 2021 17:26 UTC
21 points

EAGxNordics 2019 Postmortem

23 Aug 2019 9:37 UTC
56 points

EA Global 2017 Update

6 Dec 2016 16:10 UTC
16 points

Four focus areas of effective altruism

8 Jul 2013 4:00 UTC
12 points

Apply now | EA Global: London (29-31 Oct) | EAGxPrague (3-5 Dec)

1 Sep 2021 19:19 UTC
52 points

Apply for CEA event support

1 Feb 2022 16:24 UTC
76 points

EAGxBoston: Updates and Info from the Organizing Team

11 Mar 2022 0:13 UTC
65 points

[Question] What EAG sessions would you like on animal welfare?

20 Mar 2022 10:32 UTC
20 points

[Question] What EAG sessions would you like on epistemics?

20 Mar 2022 15:40 UTC
10 points

[Question] What EAG sessions would you like on AI?

20 Mar 2022 17:05 UTC
7 points

[Question] What EAG sessions would you like to attend on biorisk?

20 Mar 2022 17:13 UTC
5 points

What EAG sessions would you like on Global Catastrophic Risks?

20 Mar 2022 17:26 UTC
9 points

[Question] What EAG sessions would you like to see on global health and wellbeing?

20 Mar 2022 17:32 UTC
7 points

[Question] What EAG sessions would you like to see on horizon scanning?

20 Mar 2022 17:33 UTC
5 points

[Question] What EAG sessions would you like to see on Global Priorities Research?

20 Mar 2022 17:34 UTC
5 points

[Question] What EAG sessions would you like to see on meta-EA?

20 Mar 2022 17:34 UTC
5 points

Time-Time Tradeoffs

1 Apr 2022 15:29 UTC
58 points

Our Current Directions in Mechanistic Interpretability Research (AI Alignment Speaker Series)

8 Apr 2022 17:08 UTC
3 points

The Ultimate Guide to EA Conferences (WIP Notion Template)

15 Apr 2022 13:01 UTC
5 points

Apply for EAGxPrague by April 30th!

25 Apr 2022 5:36 UTC
27 points

On the fence about applying to EAG or EAGx? Talk to someone (me?) who went!

28 Apr 2022 12:38 UTC
15 points

Are you willing to talk about your experience attending EAG or EAGx with someone who’s considering applying?

28 Apr 2022 12:42 UTC
5 points

EAG & EAGx lodging considerations

4 May 2022 23:58 UTC
35 points

Apply to attend an EA conference!

20 May 2022 20:10 UTC
38 points

[Question] EA Speaker Repository?

20 May 2022 16:31 UTC
10 points

My first effective altruism conference: 10 learnings, my 121s and next steps

21 May 2022 8:51 UTC
9 points

Deadline Extended: Announcing the Legal Priorities Summer Institute! (Apply by June 24)

26 May 2022 8:00 UTC
24 points

Save the date: EAGxVirtual 2022

24 Jun 2022 15:25 UTC
86 points

EAGxBoston 2022: Retrospective

14 Jul 2022 13:09 UTC
57 points

Tips + Resources for Getting Long-Term Value from Retreats/Conferences (and in general)

24 Jul 2022 15:41 UTC
36 points

Simulated annealing, or, the importance of informal EA socialising

28 Jul 2022 5:08 UTC
27 points

Invite: UnConference, How best for humans to thrive and survive over the long-term

27 Jul 2022 22:19 UTC
10 points

Crowdsourced Criticisms: What does EA think about EA?

8 Aug 2022 22:59 UTC
32 points

EAGx Rotterdam 2022

22 Aug 2022 9:12 UTC
29 points

Longtermism Sustainability Unconference Invite

1 Sep 2022 12:34 UTC
3 points

Case Study of EA Global Rejection + Criticisms/Solutions

23 Sep 2022 11:38 UTC
197 points

EAG DC: Meta-Bottlenecks in Preventing AI Doom

30 Sep 2022 17:53 UTC
4 points

Upcoming EA conferences in 2023 (and 2022)

5 Oct 2022 21:15 UTC
78 points

EAGxVirtual: A virtual venue, timings, and other updates

13 Oct 2022 13:22 UTC
47 points

Progress Open Thread: EAGxVirtual 2022

22 Oct 2022 13:49 UTC
44 points

[Question] Which are the best conferences to attend that are not EAGs?

24 Oct 2022 13:21 UTC
24 points

Lessons from taking group members to an EAGx conference

14 Nov 2022 12:03 UTC
68 points
I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for “longtermism” was a Torres hit piece. I understand and agree with the view that media outreach should be done by specialists—ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren’t the only people with those qualifications! There’s no reason they need to be the public face of all of EA—they represent one faction out of at least three. EA is a general concept that’s compatible with a range of moral and empirical worldviews—we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement’s behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was. Perhaps more EA orgs—like GiveWell, ACE, and FHI—should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting. • So, proposing that we give everyone equal voting power gives those on the forum with more voting power an incentive to lessen mine (by downvoting this). So how about this: we make the agreement karma democratic. That way we can see what people actually agree or disagree on and since it doesn’t affect karma we can make it democratic without angering those with disproportionate voting power. • 3 Dec 2022 23:40 UTC 3 points 0 ∶ 3 Remember the anti-work subreddit moderator’s disastrous unsanctioned interview? Let that be a cautionary tale of how interacting with the media can go. It should be self evident that only CEA sanctioned individuals should be allowed to speak to the media. • [ ] [deleted] • I also strongly agree! 
I think this an important topic that needs to be discussed more frequently within this community. I’m curious whether most EA participants are on the same page that the lack of demographic diversity is harmful to effectiveness. The EA events I have attended have been much whiter (and more predominantly male) than the general population of my area, and many conversations have had an exclusive, elitist vibe to them. (This is obviously subjective but to me this manifested as people immediately asking people about their credentials, and folks initiating group conversations with narrow intellectual topics that are not inclusive.) • First, I’ve been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off, this is attempts to recruit multiple staff from a number of hiring organizations. This is useful information to know! Thanks for sharing it. I just want to send good vibes for “if you see something you see is weird, lean towards transparency.” My personal intuition is that it’s good to have a culture where people are politely, and perhaps lightly, headhunted. However, there definitely could be more issues here: 1. It seems bad if some groups/​orgs get highly targeted. This can create an uncomfortable situation if not handled well. 2. It’s hard to be polite. I could easily imagine recruiters being pushy or rude. I imagine that perhaps in this case at least, some orgs used similar logic to target IDinsight (lots of great talent and training, but work seems less directly aligned for specific EA goals), but didn’t realize how many others were doing it. I’d recommend messaging the Community Health team at CEA to get a bit of coordination. Or just directly send an email to the various orgs flagging the issue. I imagine they might well be able to find a more reasonable solution. 
• I’m nervous that readers might conflate “the specific situation of IDInsight, which we still know little about, is justified” with, “it’s generally good to have more headhunting”. I mostly agree with the latter, but can’t say much on the former. • 3 Dec 2022 23:19 UTC 27 points 5 ∶ 0 I really want to be in favor of having a less centralized media policy, and do think some level of reform is in-order, but I also think “don’t talk to journalists” is just actually a good and healthy community norm in a similar way that “don’t drink too much” and “don’t smoke” are good community norms, in the sense that I think most journalists are indeed traps, and I think it’s rarely in the self-interest of someone to talk to journalists. Like, the relationship I want to have to media is not “only the sanctioned leadership can talk to media”, but more “if you talk to media, expect that you might hurt yourself, and maybe some of the people around you”. I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way both felt deeply misrepresentative and gave the interviewee no way to object or correct anything. So, overall, I am in favor of some kind of change to our media policy, but also continue to think that the honest and true advice for talking to media is “don’t, unless you are willing to put a lot of effort into this”. • I agree that public communication is risky, but I think that plenty more people are qualified to do it than just CEA and the movement’s “big three” public intellectuals (MacAskill, Ord, and Singer). My comment here was partly a response to this one. • As Byrne points out, and some notable examples testify, some people manage to: 1. “Go to the monastery” to explore ideas as a hardcore believer. 2. 
After a while, “return to the world”, and successfully thread the needle between innovation, moderation, and crazy town. This is not an easy path. Many get stuck in the monastery, failing gracefully (i.e. harmlessly wasting their lives). Some return to the world, and achieve little. Others return to the world, accumulate great power, and then cause serious harm. Concern about this sort of thing, presumably, is a major motivation for the esotericism of figures like Tyler Cowen, Peter Thiel, Plato, and most of the other Straussian thinkers. • One thing this reminds me of is a segment of Holden Karnofsky’s interview with Ezra Klein. HOLDEN KARNOFSKY: At Open Philanthropy, we like to consider very hard-core theoretical arguments, try to pull the insight from them, and then do our compromising after that. And so, there is a case to be made that if you’re trying to do something to help people and you’re choosing between different things you might spend money on to help people, you need to be able to give a consistent conversion ratio between any two things. So let’s say you might spend money distributing bed nets to fight malaria. You might spend money [on deworming, i.e.] getting children treated for intestinal parasites. And you might think that the bed nets are twice as valuable as the dewormings. Or you might think they’re five times as valuable or half as valuable or ⅕ or 100 times as valuable or 1100. But there has to be some consistent number for valuing the two. And there is an argument that if you’re not doing it that way, it’s kind of a tell that you’re being a feel-good donor, that you’re making yourself feel good by doing a little bit of everything, instead of focusing your giving on others, on being other-centered, focusing on the impact of your actions on others,[where in theory it seems] that you should have these consistent ratios. So with that backdrop in mind, we’re sitting here trying to spend money to do as much good as possible. 
And someone will come to us with an argument that says, hey, there are so many animals being horribly mistreated on factory farms and you can help them so cheaply that even if you value animals at 1 percent as valuable as humans to help, that implies you should put all your money into helping animals. On the other hand, if you value [animals] less than that, let’s say you value them a millionth as much, you should put none of your money into helping animals and just completely ignore what’s going on factory farms, even though a small amount of your budget could be transformative. So that’s a weird state to be in. And then, there’s an argument that goes […] if you can do things that can help all of the future generations, for example, by reducing the odds that humanity goes extinct, then you’re helping even more people. And that could be some ridiculous comic number that a trillion, trillion, trillion, trillion, trillion lives or something like that. And it leaves you in this really weird conundrum, where you’re kind of choosing between being all in on one thing and all in on another thing. And Open Philanthropy just doesn’t want to be the kind of organization that does that, that lands there. And so we divide our giving into different buckets. And each bucket will kind of take a different worldview or will act on a different ethical framework. So there is bucket of money that is kind of deliberately acting as though it takes the farm animal point really seriously, as though it believes what a lot of animal advocates believe, which is that we’ll look back someday and say, this was a huge moral error. We should have cared much more about animals than we do. Suffering is suffering. And this whole way we treat this enormous amount of animals on factory farms is an enormously bigger deal than anyone today is acting like it is. And then there’ll be another bucket of money that says: “animals? That’s not what we’re doing. 
We’re trying to help humans.” And so you have these two buckets of money that have different philosophies and are following it down different paths. And that just stops us from being the kind of organization that is stuck with one framework, stuck with one kind of activity. […] If you start to try to put numbers side by side, you do get to this point where you say, yeah, if you value a chicken 1 percent as much as a human, you really are doing a lot more good by funding these corporate campaigns than even by funding the [anti-malarial] bed nets. And [bed nets are] better than most things you can do to help humans. Well, then, the question is, OK, but do I value chickens 1 percent as much as humans? 0.1 percent? 0.01 percent? How do you know that? And one answer is we don’t. We have absolutely no idea. The entire question of what is it that we’re going to think 100,000 years from now about how we should have been treating chickens in this time, that’s just a hard thing to know. I sometimes call this the problem of applied ethics, where I’m sitting here, trying to decide how to spend money or how to spend scarce resources. And if I follow the moral norms of my time, based on history, it looks like a really good chance that future people will look back on me as a moral monster. But one way of thinking about it is just to say, well, if we have no idea, maybe there’s a decent chance that we’ll actually decide we had this all wrong, and we should care about chickens just as much as humans. Or maybe we should care about them more because humans have more psychological defense mechanisms for dealing with pain. We may have slower internal clocks. A minute to us might feel like several minutes to a chicken. 
So if you have no idea where things are going, then you may want to account for that uncertainty, and you may want to hedge your bets and say, if we have a chance to help absurd numbers of chickens, maybe we will look back and say, actually, that was an incredibly important thing to be doing. EZRA KLEIN: […] So I’m vegan. Except for some lab-grown chicken meat, I’ve not eaten chicken in 10, 15 years now — quite a long time. And yet, even I sit here, when you’re saying, should we value a chicken 1 percent as much as a human, I’m like: “ooh, I don’t like that”. To your point about what our ethical frameworks of the time do and that possibly an Open Philanthropy comparative advantage is being willing to consider things that we are taught even to feel a little bit repulsive considering—how do you think about those moments? How do you think about the backlash that can come? How do you think about when maybe the mores of a time have something to tell you within them, that maybe you shouldn’t be worrying about chicken when there are this many people starving across the world? How do you think about that set of questions? HOLDEN KARNOFSKY: I think it’s a tough balancing act because on one hand, I believe there are approaches to ethics that do have a decent chance of getting you a more principled answer that’s more likely to hold up a long time from now. But at the same time, I agree with you that even though following the norms of your time is certainly not a safe thing to do and has led to a lot of horrible things in the past, I’m definitely nervous to do things that are too out of line with what the rest of the world is doing and thinking. And so we compromise. And that comes back to the idea of worldview diversification. So I think if Open Philanthropy were to declare, here’s the value on chickens versus humans, and therefore, all the money is going to farm animal welfare, I would not like that. That would make me uncomfortable. And we haven’t done that. 
And on the other hand, let’s say you can spend 10 percent of your budget and be the largest funder of farm animal welfare in the world and be completely transformative. And in that world where we look back, that potential hypothetical future world where we look back and said, gosh, we had this all wrong — we should have really cared about chickens — you were the biggest funder, are you going to leave that opportunity on the table? And that’s where worldview diversification comes in, where it says, we should take opportunities to do enormous amounts of good, according to a plausible ethical framework. And that’s not the same thing as being a fanatic and saying, I figured it all out. I’ve done the math. I know what’s up. Because that’s not something I think. […] There can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like […] it’s trying to be as weird as possible. It’s being completely hard-core, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in. And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should. […] I also just want to endorse the meta principle of just saying, it’s OK to have a limit. It’s OK to stop. It’s a reflective equilibrium game. 
So what I try to do is I try to entertain these rigorous philosophical frameworks. And sometimes it leads to me really changing my mind about something by really reflecting on, hey, if I did have to have a number on caring about animals versus caring about humans, what would it be? And just thinking about that, I’ve just kind of come around to thinking, I don’t know what the number is, but I know that the way animals are treated on factory farms is just inexcusable. And it’s just brought my attention to that. So I land on a lot of things that I end up being glad I thought about. And I think it helps widen my thinking, open my mind, make me more able to have unconventional thoughts. But it’s also OK to just draw a line […] and say, that’s too much. I’m not convinced. I’m not going there. And that’s something I do every day. • Thank you so much for putting in the trouble to put this together!!! I appreciate it a lot! • My attempt at a reasonable AI/​semis portfolio: MSFT − 10% INTEL − 10% Nvidia − 15% SMSN − 15% Goog − 15% ASML − 15% TSMC − 20% Interested if anyone thinks I got this hugely wrong. • Thank you for writing this. I’m not sure whether I agree or disagree, but it seems like a case well made. While I do not mean to patronise, as many others will have found this, the one contribution I feel I have to make is an emphasis on how very differently people in the wider public may react to ideas/​arguments that seem entirely reasonable to the typical EA. Close friends of mine, bright and educated people, have passionately defended the following positions to me in the past: -They would rather millions die from preventable diseases than Jeff Bezos donate his entire wealth to curing those diseases if such donation was driven by obnoxious virtue-signalling. The difference made to real people didn’t register in their judgements at all, only motivations. Charitable donation can only be good if done privately without telling anyone. 
-It is more important that money be spent on the people it is most costly and difficult to help than those whose problems can be cured cheaply because otherwise the people with expensive problems will never be helped. -Charity should be something that everyone can agree on, and thus any charity dedicated to farmed animal welfare is not a valid donation opportunity. -The Future of Humanity Institute shouldn’t exist and people there don’t have real jobs. I didn’t even get to explaining what FHI is trying to do or what their research covers; from the name alone they concluded that discussion of how humanity’s future might go should be considered an intellectual interest for some people, but not a career. They would not be swayed. Primarily, I think the “so what?” of this is trying to communicate EA ideas, nuanced or not, to the wider public is almost certainly going to be met with backlash. The first two anecdotes I list imply that even “It is better to help more people than fewer people.” is contentious. Sadly, I don’t think most of what this community supports fits into the “selfless person deserving praise” category many people have, and calling ourselves Effective Altruists sounds like we’ve ascribed ourselves virtues without justification that a person on the street would acknowledge. Accepting some people will react negatively and this is beyond our control, my humble recommendation would be for any more direct attempt to communicate ideas to the public gets substantial feedback beforehand from people in walks of life very different to the EA norm. People are really surprising. • 3 Dec 2022 21:01 UTC 4 points 0 ∶ 0 Thanks for this, Vasco, Hanzhang, Melissa! A couple of thoughts: 1. 
It seems to me that on reasonable quantifications of the ways in which direct tree planting efforts in the UK do not optimize for climate impact (utter non-neglecteness, lack of advocacy, trajectory change, low policy additionality, the factors you mention in the appendix) one would have a prior where tree planting is several orders of magnitude less cost-effective than strategies that seek to optimize for impact. As such, I find a posterior of a 1-order-of-magnitude difference within reach quite surprising. 2. While I am generally quite pessimistic on forestry interventions because of the utter non-neglectedness and the difficulty to get to credible additionality and permanence, it seems direct tree planting in the UK is kind of close to the worst thing one can do from a climate angle. So for donors that cannot be moved from tree planting it could be interesting to see what the best charities in this space might look like, e.g. advocacy to improve REDD+ or peatland protection etc. 3. Re Google Trends for neglectedness: A great datasource for climate philanthropy are the Climate Works reports—those show clearly not only that forests are very well funded compared to other areas, but also that funding is growing strongly (it has been an early focus of Bezos Earth Fund, the largest climate philanthropist). In addition, IIRC, conservation philanthropy more broadly, of which a significant share focuses on forests, is several X larger than climate philanthropy. • Hi Johannes, Thanks for sharing your thoughts! I find a posterior of a 1-order-of-magnitude difference within reach quite surprising. 1 t/​£ is estimated to be about 1 (= log10(1/​0.0722)) order of magnitude (OOM) higher than the cost-effectiveness of tree planting in the UK. However, the difference to the projects funded by CCF may be quite larger. 
Fitting a lognormal distribution with 2.5th and 97.5th percentiles equal to the lower and upper bound of the 95 % confidence interval you guessed (with the disclaimer that it should not be intended a resilient estimate) leads to a mean of 2.34 kt/​. This is about 5 (= log10(2.34 k /​ 0.0722)) orders of magnitude higher than the cost-effectiveness of tree planting for the UK.

• I’ve been wondering about cost-effectiveness in this space for a long time, so thanks for writing this and especially for releasing the quantitative model! At the top, it looks like you are saying that $100 million per year for 10 years could reduce x-risk by about one percentage point, meaning about 100 basis points (1 basis point = 0.01%) per billion dollars—is that correct? In the model column AF in tab X-risk, you say that the effort would be over a century, so does that mean spending $10 billion total? Elsewhere you say you consider spending $100 million per year just on one institution, so are you really talking about spending $100 billion this century on the top 10 institutions? Then this would be about 1 basis point per $billion. This is in the range of cost-effectiveness values collected here. So if I’m understanding you correctly, $10 billion spent over the century would reduce the existential risk from the Chinese Communist Party Politburo by 10%, and 15% of that would have happened anyway, so you are reducing overall existential risk by 0.085%, which would be 0.85 basis points per $billion? • Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of and promise of engaging with different institutions implicated in various existential risk scenarios. There was less attention given to the challenge of nailing the right absolute numbers, and so those should be taken with a super-extra-giant grain of salt. With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 100 years from a single one-time $100M commitment (perhaps distributed over multiple years) focusing on a single institution. The comment in the summary about $100 million/year was assuming that the funder(s) would focus on multiple institutions.
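The per-billion arithmetic in the question above can be sanity-checked with a few lines. This is a sketch under assumptions: the 1% baseline risk attributed to the institution is my assumption, chosen so the numbers reproduce the 0.085% figure the thread takes as given.

```python
# Hypothetical sanity check of the question's arithmetic.
baseline_risk = 0.01          # assumed existential risk attributable to the institution (not stated in thread)
relative_reduction = 0.10     # spending reduces that risk by 10%
counterfactual = 0.15         # 15% of the reduction would have happened anyway
spend_billions = 10           # $10 billion over the century

abs_reduction = baseline_risk * relative_reduction * (1 - counterfactual)  # 0.00085, i.e. 0.085%
basis_points = abs_reduction / 0.0001                                      # 1 basis point = 0.01%
bp_per_billion = basis_points / spend_billions
print(round(bp_per_billion, 2))  # 0.85 basis points per $billion
```

Under these assumptions the 0.85 basis points per $billion figure checks out; changing the assumed baseline rescales the result proportionally.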
Thus, the 100 basis points per billion figure is the “correct” one provided our per-institution estimates are in the right order of magnitude. We’re about to get started on our second iteration of this work and will have more capacity to devote to the cost-effectiveness estimates this time around, so hopefully that will result in less speculative outputs. • Thanks for the clarification. I would say this is quite optimistic, but I look forward to your future cost-effectiveness work. • Thank you for writing this. I have thought about writing a critical post making broadly similar arguments, but with a greater focus on how the FTX disaster played out in November. I don’t plan to do this right now. At least some of the people who are working on this have a reasonable read on my views, and there are other things I want to focus on for now. Again—thanks for writing this. I will follow the discussion with interest—and so will many journalists! • 3 Dec 2022 18:28 UTC 6 points 3 ∶ 0 As someone who is not a hedonistic utilitarian, most of the arguments in this post strike me as incredibly weak. For example it can certainly be argued, and I personally believe, that negative experiences are not bad in such a way that a world without them would be superior. Grief is unpleasant, but I would not prefer a world without grief. I realise that this is not itself an argument, but the possibility of dissent does undermine the idea that the elimination of suffering follows so obviously from its existence that it can violate the is-ought gap. The post is filled with the same sort of logical leaps, where the author’s beliefs “must” be true with no argument as to why. Most academic philosophers are not consequentialists. If you “find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism” then you probably don’t have a very strong understanding of normative ethics. 
I may be missing the argument in the post, and would welcome a clear restatement of the premises, but as far as I can tell there is no serious attempt to address criticisms or alternatives to hedonistic utilitarianism other than “if you thought about it hard enough, you’d agree with me”. edit: I hadn’t read it before making this comment, but this other post from today seems to provide a much better answer to the central premise of this post than I would be able to provide. https://forum.effectivealtruism.org/posts/7dGZnj7bwpM2kSJqm/against-meta-ethical-hedonism • If you “find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism” then you probably don’t have a very strong understanding of normative ethics. FWIW I’d like to take this opportunity to advertise my list of recommended readings about non-utilitarian normative ethics, which some utilitarians may find educational. Maybe someone can write a similar list for metaethics. • 3 Dec 2022 18:16 UTC 33 points 6 ∶ 0 No inside knowledge, but as the article states they received a rather large grant from FTXFF, plus other funding from FTX and/or Alameda going back several years. Much, up to potentially all, of that could have to be repaid depending on the facts and outcome of litigation. I think it likely that the clawback risk is significant enough to constitute a serious incident under the UK charity rules. None of that relies on new information. In other words, I don’t think the fact of the report itself provides any useful information. However, it is worth noting that RP and OP have released statements characterizing their FTX clawback exposure (or lack thereof for OP). Unless CEA can provide some clarity, I’d assume the worst-case scenario for CEA would be very, very bad. One would need to see updated financials and a history of all FTX/Alameda contributions to assess whether insolvency could result from the worst-case scenario.
Do they have enough unrestricted assets to cover the worst-case outcome? Personally: I generally would not donate to an organization with potentially catastrophic FTX risk unless the organization had convinced me the risk didn’t exist (or was sufficiently small to disregard), or that my donations were legally restricted in a way that protected them from creditors in the event of an insolvency. • Personally: I generally would not donate to an organization with potentially catastrophic FTX risk unless the organization had convinced me the risk didn’t exist (or was sufficiently small to disregard), or that my donations were legally restricted in a way that protected them from creditors in the event of an insolvency. Given that CEA writ large includes Giving What We Can, EA Funds and the Donor Lottery, this would be a pretty big deal in terms of people’s giving this month. If I need to avoid donating through CEA, my life as a donor gets a lot harder. Given this, and given how many members of the community will be making big charitable donations around this time of year, some clarification from CEA on this front seems pretty valuable. • Right—and I totally get why organizations are hesitant to share certain information right now. If CEA is in a precarious situation and/​or cannot disclose enough information to reassure donors, it may at least be in a position to demonstrate that EA Funds and the Donor Lottery are either already safe from the claims of CEA’s general creditors or will be made safe, at least on a going forward basis by Dec 31. This is because, in many jurisdictions, certain donations can be restricted by the donor in ways neither the charity nor general creditors can breach. But it’s possible for the charity to intend to provide this protection yet screw it up . . . and has happened in at least one non-EA case I can think of. 
(all this is summarized from a cell phone while out of town...I have some broader thoughts about risk containment in draft form on my home computer) • I agree—I personally really appreciate humor on the EA Forum. Also, EA is generally not cold and calculating; it is “warm and calculating.” • Thanks for sharing this post, I absolutely agree. Hopefully critics of EA can come to see the genuinely warm-hearted motivations that I think most EAs have. • As I write this, the commenting guidelines say “Aim to explain, not persuade” and “Approach disagreements with curiosity”. It doesn’t feel like the media policy embodies those values! Whenever I’ve seen media/outsiders criticize EA, EAs react defensively—which is a very normal human reaction, but hardly the kind of thing that should be coded into CEA policy. My two cents is that if anyone is contacted by the media to discuss EA, they have no obligation whatsoever to follow CEA’s media policy. This isn’t a political party. • The media is an extremely different discursive environment than the EA Forum and should have different guidelines. I don’t want to assume that the public sphere cannot become earnestly truthseeking, but right now it isn’t at all, and bad things happen if you treat it like it is. • Left unspoken? EA needs more headhunting aimed at senior non-EAs. • I’d counter that the focus on race and gender is very US-centric rather than culturally universal. I volunteer at a local charity where gender proportions are heavily skewed towards women being the bigger group. I neither find it a problem nor think any diversity measures should be introduced. It also seems fairly intuitive to me that it is the people who are the most privileged that can focus on such problems as AGI Safety and existential risk rather than those who struggle financially to live on a week-to-week basis. • I think a lot of the disagreements in the comments are coming down to different conceptions of headhunting.
Dan, you refer to targeted/specific/direct outreach to particular individuals, but that doesn’t seem to be the crucial difference; it’s in the intent, tone, and incentives. “Hey X person, you’re doing a great job at your current job. You might be totally happy at your current job, but I thought I’d flag this cool new opportunity that seems really impactful—happy to discuss why it might be a good fit” seems fine. Giving a hard sell, strongly denigrating the current employer, or being strongly incentivised for switches (e.g. paid a commission) seems way less fine. • 3 Dec 2022 16:32 UTC 3 points 0 ∶ 0 Congrats on putting this up! • I think the problem of “EA leadership doesn’t consider it important to hire experienced people, even for people who are leading a project (who in turn don’t consider it important to hire experienced people)” is a root-cause problem for a lot of somewhat negative things going on in EA (which is nobody’s fault, but could be useful to improve, I think). • Why do you think people think it’s unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)? • I feel like a lot of this is downstream from people being reluctant to hire experienced people who aren’t already associated with EA. Particularly for things like operations roles, experience doing similar roles is going to make far more of a difference to effectiveness than deep belief in EA values. When Coke needs to hire new people, they don’t look for people who have a deep love of sugary drink brands; they find people in similar roles for other things and offer them money. I feel like the reason EA orgs are reluctant to do this is that there’s a degree of exceptionalism in EA.
• I agree that it’s downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don’t really need it, doing things that are short-term good but long-term bad (with the assumption that they’ll have moved on before the bad stuff kicks in), etc. (cf. the book Moral Mazes.) Hiring mission-aligned people is one of the best ways to provide a check on that type of behavior. *I think some orgs maybe should be more open to hiring people who are aligned with the org’s particular mission but not part of the EA community—e.g. that’s Wave’s main hiring demographic—but for orgs with more “hardcore EA” missions, it’s not clear how much that expands their applicant pool. • It’s pretty common in values-driven organisations to ask for a degree of value-alignment. The other day I helped out a friend with a resume for an organisation which asked for people applying to care about their feminist mission. In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org. • Wooo, super happy you managed to organise this! • Very interesting thread. I’m a non-EA experienced manager with a successful team-building company and was looking into how to help EA orgs with team-building, but it turns out I might be more useful as a manager coach? I started managing teams 15 years ago and eventually left the corporate world to be a tour guide. Covid forced me back to the manager role, and I founded my current startup, woyago, which is almost on autopilot.
My LinkedIn profile: https://www.linkedin.com/in/antomontani/details/experience/ I have free time and would be happy to offer advice for those of you looking for help on management. Example areas I might have useful input on (copying heavily from your post, Ben!): My Calendly can be found in my bio. Happy to (finally!) find a way to add impact to my life by helping you. • Here’s a related thought that I’m curious for people’s views on: if org X has a reputation for being good at interviewing and hiring candidates, and org Y is hiring for a similar role, and org Y says to candidates “if you have an offer from X then we’ll hire you with no further process”, or similarly “if you work or have worked at X, we know you’re good and don’t need to assess you ourselves”. This can feel like org Y is misappropriating the products of org X’s work and expertise in finding and assessing good people. Is this unethical? My inclination is to say something similar to the replies to this headhunting post: it sucks to have this happen to you, but trying to prevent it in a heavy-handed way would be worse, so it seems better to just be aware of the phenomenon and be mindful of how you are benefiting from the work of others. (And again, the dynamic is different in for-profit organizations in competition with each other vs. non-profits with at least some amount of goal alignment.) • I think this is a good conversation to have. I broadly agree with the majority voice of the comments: though it can be difficult and unfair to have your employees headhunted away from you after you invested in their development and planned around them being here, ultimately it seems better to allow it to happen because of the benefits to the employee and their new employer. At the same time, I do want to acknowledge that there is a version of this behaviour that is a problem.
To the extent that any headhunter is: • disrespectful of people’s time by trying to involve them in processes that aren’t suitable for or interesting to them, • persistent in the face of polite refusal from the employee in question, • in any aspect intentionally misleading or dishonest, then I’m sure we’d all agree they were doing something wrong. It’s harder to prevent this kind of behaviour, because it’s often subjective when the line has been crossed, but I’d support a general understanding that if a headhunter does this kind of thing, then we hold both them and the organization they are hiring for responsible, perhaps privately at first and then publicly if the behaviour persists. Anyone using recruiters or headhunters should feel under an obligation to ensure their agents are acting in ways consistent with their own values. • Does anyone have thoughts on 1. How does the FTX situation affect the EV of running such a survey? My first intuition is that running one while the situation’s so fresh is worse than waiting 3-6 months, but I can’t properly articulate why. 2. What, if any, are some questions that should be added, changed, or removed given the FTX situation? • Thanks for the article, it definitely seems like an important problem. This should get even worse because of the upcoming energy crisis: 1. Many energy sources need a lot of water to keep working 1. Nuclear plants and coal plants need water to cool down the pipes 2. Copper and lithium extraction are very water-intensive, and the Chilean government has already limited some of the use of water for mining 3. Shale oil uses millions of liters of water 4. Biofuels are extremely water-intensive as well 2. With less energy, the water depletion becomes worse. Water gets harder to pump, and desalination becomes even less of an option. Just an additional note: I think the post would be better with a bit of formatting, e.g. keeping the bolding of the document and justifying the text.
Keeping this quote: “Our blue planet holds plenty of water, but only 2.5% of it is fresh. The amount of fresh water has fallen 35% since 1970, as ground aquifers have been drawn down and wetlands have deteriorated. Meanwhile, demand for water-intensive agriculture and energy is soaring. Overall water demand is on pace to overshoot supply by 40% by 2030.” - Stuart Goldenberg for Barron’s, May 3, 2014 • Does anyone here know why the Center for Human-Compatible AI hasn’t published any research this year even though they have been one of the most prolific AGI safety organizations in previous years? https://humancompatible.ai/research • Are there any reasons why groundwater depletion isn’t already higher on the list of EA priorities? Seems really big in scale, but I have no idea how tractable/neglected it is. • Groundwater depletion is an important topic, and I’m glad you’re bringing attention to it. • Thank you so much for sharing, Ben! I’m glad to hear the calls have been fun. What you described fits my observations so far. I also think that management coaching is probably one of the key “interventions” here. As with any skill, a great deal of learning how to be a (good) manager is learning a set of new behaviors. Having someone to reflect on the development of those behaviors and the respective decision-making can be extremely helpful. • tl;dr: In the context of interpersonal harm: 1. I think we should be more willing than we currently are to ban or softban people. 2. I think we should not assume that CEA’s Community Health team “has everything covered”. 3. I think more people should feel empowered to tell CEA CH about their concerns, even (especially?) if other people appear to not pay attention or do not think it’s a major concern. 4. I think the community is responsible for helping the CEA CH team have a stronger mandate to deal with interpersonal harm, including some degree of acceptance of mistakes of overzealous moderation.
(all views my own) I want to publicly register what I’ve said privately for a while: For people (usually but not always men) who we strongly suspect have been responsible for significant direct harm within the community, we should be significantly more willing than we currently are to take on more actions and the associated tradeoffs of limiting their ability to cause more harm in the community. Some of these actions may look pretty informal/unofficial (gossip, explicitly warning newcomers against specific people, keeping an unofficial eye out for some people during parties, etc.). However, in the context of a highly distributed community with many people (including newcomers) that’s also embedded within a professional network, we should be willing to take more explicit and formal actions as well. This means I broadly think we should increase our willingness to a) ban potentially harmful people from events, b) reduce grants we make to people in ways that increase harmful people’s power, c) warn organizational leaders about hiring people in positions of power/contact with potentially vulnerable people. I expect taking this seriously to involve taking on nontrivial costs. However, I think this is probably worth it. I’m not sure why my opinion here is different from others[1]’, however I will try to share some generators of my opinion, in case it’s helpful: A. We should aim to be a community that’s empowered to do the most good. This likely entails appropriately navigating the tradeoff of attempting to reduce both the harms of a) contributors feeling or being unwelcome due to sexual harassment or other harms and b) contributors feeling or being unwelcome due to false accusations or overly zealous responses. B. I think some of this is fundamentally a sensitivity vs. specificity tradeoff.
If we have a detection system that’s too tuned to reduce the risk of false positives (wrong accusations being acted on), we will overlook too many false negatives (people being too slow to be banned/censured, or not at all), and vice versa. Consider the first section of “Difficult Tradeoffs”. In the world we live in, I’ve yet to hear of a single incident where, in full context, I strongly suspect CEA CH (or for that matter, other prominent EA organizations) was overzealous in recommending bans due to interpersonal harm. If our institutions are designed to only reduce first-order harm (both from direct interpersonal harm and from accusations), I’d expect to see people err in both directions. Given the (apparent) lack of false positives, I broadly expect we accept too high a rate of false negatives. More precisely, I do not think CEA CH’s current work on interpersonal harm will lead to a conclusion like “We’ve evaluated all the evidence available for the accusations against X. We currently think there’s only a ~45% chance that X has actually committed such harms, but given the magnitude of the potential harm, and our inability to get further clarity with more investigation, we’ve pre-emptively decided to ban X from all EA Globals pending further evidence.” Instead, I get the impression that substantially more certainty is deemed necessary to take action. This differentially advantages conservatism, and increases the probability and allowance of predatory behavior. C. I expect an environment with more enforcement is more pleasant than an environment with less enforcement. I expect an environment where there’s a default expectation of enforcement for interpersonal harm to be more pleasant for both men and women. Most directly in reducing the first-order harm itself, but secondarily an environment where people are less “on edge” about potential violence is generally more pleasant.
As a man, I at least will find it more pleasant to interact with women in a professional context if I’m not worried that they’re worried I’ll harm them. I expect this to be true for most men, and the loud worries online about men being worried about false accusations to be heavily exaggerated and selection-skewed[2]. Additionally, I note that I expect someone who exhibits traits like reduced empathy, willingness to push past boundaries, sociopathy, etc., to also exhibit similar traits in other domains. So someone who is harmful in (e.g.) sexual matters is likely to also be harmful in friendly and professional matters. For example, in the more prominent cases I’m aware of where people accused of sexual assault were eventually banned, they also appeared to have done other harmful activities like a systematic history of deliberate deception, being very nasty to men, cheating on rent, harassing people online, etc. So I expect more bans to broadly be better for our community. D. I expect people who have been involved in EA for longer to be systematically biased in both which harms we see, and which things are the relevant warning signals. The negative framing here is “normalization of deviance”. The more neutral framing here is that people (including women) who have been around EA for longer a) may be systematically less likely to be targeted (as they have more institutional power and cachet) and b) are selection-biased to be less likely to be harmed within our community (since the people who have received the most harm are more likely to have bounced off). E. I broadly trust the judgement of CEA CH in general, and Julia Wise in particular. I think their judgement is broadly reasonable, and they act well within the constraints that they’ve been given. If I did not trust them (e.g. if I was worried that they’d pursue political vendettas in the guise of harm-reduction), I’d be significantly more worried about giving them more leeway to make mistakes with banning people.[3] F.
Nonetheless, the CEA CH team is just one group of individuals, and does a lot of work that’s not just on interpersonal harm. We should expect a) that they only have a limited amount of information to act on, and b) that the rest of EA needs to pick up some of the slack where they’ve left off. For a), I think an appropriate action is for people to be significantly more willing to report issues to them, as well as make sure new members know about the existence of the CEA CH team and Julia Wise’s work within it. For b), my understanding is that CEA CH sees itself as having what I call a “limited purview”: e.g. they only have the authority to ban people from official CEA and maybe CEA-sponsored events, and not e.g. events hosted by local groups. So I think EA community-builders in a group organizing capacity should probably make it one of their priorities to be aware of the potential broken stairs in their community, and be willing to take decisive actions to reduce interpersonal harms. Remember: EA is not a legal system. Our objective is to do the most good, not to wait to be absolutely certain of harm before taking steps to further limit harm. One thing my post does not cover is opportunity cost. I mostly framed things as changing the decision boundary. However, in practice I can see how having more bans is more costly in time and maybe money than the status quo. I don’t have good calculations here, however my intuition is strongly in the direction that having a safer and more cohesive community is worth the relevant opportunity costs. 1. ^ fwiw my guess is that the average person in EA leadership wishes the CEA CH team did more (is currently insufficiently punitive), rather than wishing that they did less (is currently overzealous). I expect there’s significant variance in this opinion however. 2. ^ This is a potential crux. 3. ^ I can imagine this being a crux for people who oppose greater action.
If so, I’d like to a) see this argument explicitly being presented and debated, and b) see people propose alternatives for reducing interpersonal harm that routes around CEA CH. • Thanks for this shortform! Lots of great points raised + broadly agree. I (and others I have spoken to about this) also feel similarly RE: 1, especially where the uncertainty is around the tradeoff between the person’s harm and potential contribution /​ impact, instead of uncertainty around the veracity of the allegations. I especially think it would be easy to underestimate the less-tangible negative impacts of people choosing not to get involved or opting to leave the EA community because they feel unsafe. RE: 2) is there anything stopping the CH team from expanding, if you think capacity might be an issue here? It just seems like an important enough thing to get right. I think informal actions can be helpful, but if you’re at the stage where you’re explicitly warning every newcomer about someone, that basically seems bad enough to warrant some kind of more formal action IMO—I hope this isn’t very common, and to the extent that it is, this would be another indication a lower bar for taking action would be appropriate. I think 3) would be ideal, but I think the onus is less on the individuals and more on external /​ more upstream factors—it may be pretty disempowering and retraumatising to go to the effort of doing this if you think nothing’s going to happen anyway. RE: 4) what does this look like concretely? RE: limited purview While it might be hard to formally extend CH’s reach to say, local groups, I could see a scenario where they could have input into whether or not local groups seek funding, if there’s a history of complaints etc. I wonder whether this is something that’s within their scope /​ capacity at the moment? 
RE: the crux in footnote 2, I personally think this is pretty weak—I think base rates push strongly in favour of women needing to be much more worried about sexual violence than men being worried about false accusations, and my guess is we’re pretty far off the scenario where false accusations are comparable to the harm from sexual violence etc. (commenting in personal capacity etc.) • Nice article, but not a divine topic for me. I may think that friendship is something to care for in the next generations, and we really should have a circle of trust with friends, family, and a romantic partner. Also, I would promote a new app for finding friends only, not like Tinder or sex/romance stuff. • Thanks so much for doing this! Nitpick: the “advice I’ve given people so far” link is broken. • Hi everyone. I’m Evan Harper, a community manager at Metaculus. It occurred to me that I spent way too much time writing a question about Top Gun: Maverick today that will probably just drop off the front page quickly, and I wanted to say hi to the EA Forum at some point anyway, so I’m taking the excuse to shamelessly promote. I can also more proudly promote our re-opened forecast on the Raphael Warnock / Herschel Walker Senate election in Georgia. Both of these are intended to help bring in new forecasters as part of the ongoing Beginner Tournament. And just in general, we’ve opened something like 40 new public questions in the last 20 days, so there’s no shortage of interesting topics in addition to the front page, which many of you will already have seen. For example, check out this amazing one on zero-carbon aluminium smelting that some mysterious stranger going by “@not_an_oracle” just dropped on me almost exactly as-is, perfectly written. My hero. And hey, once you’ve forecasted on something, then you’ll be a Metaculus forecaster, and I have to listen to questions, concerns, ideas, and suggestions from Metaculus forecasters as part of my job. Say hi! • I couldn’t have said this better myself!
Coaching provides huge value towards career and impact growth, and I would love to see more EAs investing in themselves. • For what it’s worth, I also share the intuitive aversion. Reading Habryka’s comment, I’m not sure that the aversion would stand up to reflection. But I could imagine it doing so after I thought more about it, e.g., if poaching employees would lead to unequal or lopsided mentoring or hiring costs, or if headhunters were paid per person and not less for people from organizations which are already doing valuable work. • Yeah, I think the strongest argument against headhunting is training/cultural-onboarding costs. I do think there is a thing where hiring someone right out of college is often net-negative, but if you train them, they become net-positive after a year or two. I think it would suck to invest so much in training someone, just for them to walk away to an organization that offered a better experience because they had to spend fewer resources training others. I do think it makes sense to have norms here. At Lightcone we have a norm that if you do accept an offer after a 3-month trial period, you really try to make things work out for 2 years, though if you find something that seems genuinely more impactful you should do it (and the organization would encourage you to go and do it).
I’ve actually heard the funding pitch before of “you should fund us because we hire people previously unknown to the EA community and many of them go on to be hired by OpenPhil or etc. and cite their experience with us as helpful for that”. I agree with you that the right way to deal with this is via flexible informal norms. I don’t so much recommend more rigid/coercive/formal tools, but probably among the least bad of them I’ve seen is “here is a starting bonus, but if you leave before your first year or so, you have to pay it back”, or guaranteed pay rises after certain periods of time, etc. • Joeri Kooimans is/was teaching philosophy in prison, so maybe you could talk to him? • I can’t believe I didn’t read this until just now. You are attacking unstated assumptions of the philanthropy community writ large, which includes EA. One is that better psychology is an area for philanthropy- and altruism-minded people to care about. Most people in our society put the needs of the body far higher than psychological/“spiritual” needs (and neglect taking care of the psychological distress of others as a work of charity). I think this argument would actually have to be won in order for the psychedelics argument to work as a promising new subset of that line, which I can buy that it may be. The second is the metaphysical assumption that the mind matters as a separate issue from the needs of the body, and that there are big gains to having better psychological tools for making the mind better, or at least helping it not to suffer. Once again, however, I suspect that most people have a hard time believing that better psychological states for people like them would have better visible real-world effects. They don’t believe in a tight coupling of greater psychological health and real-world improvement for people who are already doing fairly well on both fronts.
• Open Phil is accepting applications from impacted FTX grantees (see our post here), and we’ve been giving donors the option to either contribute to that effort (effectively funging us), or to request that we forward particular kinds of applications to them (with the applicants’ permission). If you’re a 6-figure-or-more donor and are interested in either of the two, you can reach out to us at inquiries@openphilanthropy.org. (Note that we may have to prioritize earlier/​larger donors if we get a large volume of requests.) • 2 Dec 2022 23:27 UTC 123 points 16 ∶ 2 Maya, I’m so sorry that things have made you feel this way. I know you’re not alone in this. As Catherine said earlier, either of us (and the rest of the community health team) are here to talk and try to support. I agree it’s very important that no one should get away with mistreating others because of their status, money, etc. One of the concerns you raise related to this is an accusation that Kathy Forth made. When Kathy raised concerns related to EA, I investigated all the cases where she gave me enough information to do so. In one case, her information allowed me to confirm that a person had acted badly, and to keep them out of EA Global. At one point we arranged for an independent third party attorney who specialized in workplace sexual harassment claims to investigate a different accusation that Kathy made. After interviewing Kathy, the accused person, and some other people who had been nearby at the time, the investigator concluded that the evidence did not support Kathy’s claim about what had happened. I don’t think Kathy intended to misrepresent anything, but I think her interpretation of what happened was different than what most people’s would have been. I do want people to know that a lack of visible action doesn’t mean that no one looked into a situation or took it seriously. More here about why there may not be much visible action. 
I think these problems are really hard to deal with fairly and well. I’m sure my team doesn’t always have the balance right, but you can read more about our approach here. Cultural change also can’t all be handled by CEA or any one centralized source. We want to support organizers, employers, online spaces, and other EA spaces in building a healthy culture. To anyone who’s an organizer or other person shaping the culture of an EA space, we’re here to talk if you’d like to. • This is a very good post and a great framework for discussing existential risks. • Thanks! • Interesting, my takeaway from FTX was exactly the opposite: that we should focus on getting away from venture capitalists/acquiring as much money as possible/other mindsets that got us into this mess, and instead cultivate talent so dedicated to EA that they’re willing to do altruistic work for very little money. • Posted this early, so excuse any notifications. “instead cultivate talent so dedicated to EA that they’re willing to do altruistic work for very little money.” As someone who is working at an EA org for free, I don’t agree with this. I come from a background of non-EA youth advocacy for multiple cause areas, including education, climate change and animal rights. I have had so many good co-founders go into non-impact-focused, high-paying roles like consulting because they don’t get paid anywhere near the value they provide. If you want good talent that knows how to plan, takes initiative and knows how to execute, that kind of talent knows enough to apply to dozens of other better-paying roles, and probably enough to secure very high-paying roles. I work for free now because I’m in uni and it’s socially acceptable to not make full-time pay. If you underpay a competent person, they will not only face financial pressure, but also see it as a reflection of how they are valued. I don’t think this leads to healthy movement growth in the long run.
• What percent of expenses in various cause areas are for professional staff in high-income countries? • My update from a case of fraud isn’t that money can’t be made ethically. This isn’t to dismiss the possibility of value drift etc., which we should take even more seriously than we have been. Having said that, a few things: 1. I generally am in favor of moving away from a vibes/patronage-based community to a more meritocratic, professional-ish group. And the approach you suggested (i.e. not paying people well) doesn’t make it easy to hire people from the “outside world” whom we have a lot to learn from (like, hmm, corporate governance maybe? or accounting?). I think it’ll also make the diversity problem significantly worse—and continue selecting for privileged folks who can afford to actually do the work “purely altruistically”. 2. Also, there are a bunch of ways in which labor can’t substitute for capital. I work in biosecurity and it seems like we can do significantly fewer things now, especially megaprojects that involve significant brick-and-mortar infrastructure. I wouldn’t be surprised if at some point down the road, AI safety also requires significant spend on compute/data, to say nothing of the myriad neartermist stuff that’s almost infinitely scalable. In general, my update from the situation is more: we need money, but we also need better ops, more interfacing with the real world, better corporate governance and generally fewer incestuous-looking orgs. • Yes, we all benefit (on average, in expectation, etc.) from a more efficient labour market, and an important part of that is ensuring that workers hear about relevant opportunities for them. Not everyone is constantly refreshing the 80k job board, and many jobs are never listed, so it makes sense to do direct headhunter outreach to potential hires.
Organisations should focus on trying to retain talent by being as positively impactful as possible, and by offering an attractive working environment /​ compensation package, not by keeping their employees in the dark about their alternatives. Obviously recruiting people based on misleading information is bad, but that’s true regardless of where you’re recruiting from, and similarly it’s bad to try to retain your current employees in misleading ways. • Minor readability suggestion: for very small probabilities, e.g. smaller than 0.01%, just state them as N out of 10^k, where N is between 1 and 10. Or as N*10^k, where k is negative. I think numbers smaller than 0.01% are more intuitive when presented these other ways than as percentages. I’d normally have to do the conversion out of percentage first to get an intuitive grasp of their magnitude. • I’m not a lawyer, but my understanding is that even informal agreements against headhunting other EA organizations’ employees would likely violate US antitrust law. • Thanks for writing this! I agree that this is a useful exercise. Some other considerations that may count in favour of neartermist interventions: 1. Nonhuman animals. If we go extinct, factory farming ends, which is good for these farmed animals if their lives are bad on average, which seems to be the case. Impacts on wild animals could go either way depending on ethical and empirical assumptions. EA animal work is also plausibly much more cost-effective than EA global health and development work; my guess is hundreds or thousands of times more cost-effective based on estimates for corporate chicken welfare campaigns and GiveWell recommendations. 2. More speculatively, sentient beings in simulated worlds may be disproportionately in short-lived simulations.
Altruistic agents in these simulations will have more impact if they focus on the near term (since their influence will be cut short with the end of the simulation), and if their actions are acausally correlated with our own, we can choose for them to focus on the near term if we ourselves focus on the near term. This can multiply neartermist impact. (Of course, there are also other acausal considerations, like acausal trade. That might not favour neartermist work.) • Thank you for the comment, I agree wholeheartedly with point number 1. It didn’t come up in this particular conversation because the person I was talking to wasn’t considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are the considerations I’m making, and I hope that others make them as well. Do you think I should just do the math out in this post? (It’d be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.) Point number 2 is very interesting, I haven’t seen a write-up on this. Could you link any? Seems like maybe this makes it worth somebody’s time to get a good probability on us being in a simulation or not? (Though I don’t know how they’d do it.) • Also, pandemic prevention in particular may prevent far more human deaths in expectation than just through averting extinction, because of non-extinction-level pandemics prevented, so just considering extinction risk reduction might significantly understate it. (But again, this is assuming nonhuman animals don’t flip the sign of the EV.) • I don’t think it’s necessary to do the math with nonhuman animals in the post. You could just mention the considerations I make and that you would use different numbers and get different results for animal work. I suppose there could also be higher-leverage human-targeting neartermist work than ETG for GiveWell-recommended charities, too, and that could be worth mentioning.
The fact that extinction risk reduction could be bad in the near term because of its impacts on nonhuman animals is a separate consideration from just other neartermist work being better. On 2, I don’t think I’ve seen any formal writeup anywhere. I think Carl Shulman made this or a similar point in a comment somewhere, but it wasn’t fleshed out in the comment, and I’m not sure that what I wrote is what he actually had in mind. • SBF is watching this thread closely • Hi! I have absolutely no expertise in this, but it seems long-term good to maximize the quality of matches between employers and employees. So, formally, I suppose I disagree with the statement: “Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect.” If an employee takes a job at another org, presumably they expect it to be a better match for them going forward. I’d count that as a positive effect, assuming (on average) it increases their effectiveness, decreases their chances of burnout, etc. Even if it’s just for money or location, it’s hard to know what intra-household bargains have been made to do EA work, etc. There might also be positive general equilibrium effects: an expectation of a robust EA job market (with job-to-job transitions) increased my willingness to leave a non-EA job (academia) and enter this ecosystem. I would have been more hesitant had I felt there was a norm against hiring from other orgs. Though I’ll flag that I’m not confident I accurately understand the term ‘headhunting’ here, as opposed to recruiting, as opposed to hiring. In any case, a strong ‘no headhunting/recruiting’ norm seems like it would weakly pressure orgs not to hire from other orgs (since they wouldn’t want to be seen as recruiting from other orgs).
I get that there are costs associated with re-hiring, re-training, and re-integrating that would be avoided if the original org just directly hires from the non-EA-employed camp. Maybe I’m underestimating these! My uninformed guess is that they are small relative to the benefits of increasing match quality. Curious about others’ thoughts on this though! Thanks for writing it. • Good points- I take back my earlier “Clearly...” statement, and agree it needs to also include utility gains for the worker in the calculation. Just to clarify, I wouldn’t be advocating that orgs don’t hire from peer orgs. Of course, post jobs, make them widely known, take and consider applications from all places. But I think it’s different to spend money on dedicated staff to directly target and aggressively recruit staff from friendly orgs within your ecosystem. • I share the negative emotional reaction to headhunting candidates from ostensibly allied organisations—it does inevitably feel like an adversarial move. Ultimately, though, I find it quite hard to justify this opposition intellectually. The main effect of headhunting is to provide employees with information—e.g. that they seem like a good fit for this exciting role they might not have known about (or considered applying to) otherwise. I support people making their own employment decisions on the basis of the best possible information, and (in most cases) oppose hiding information from people because it might cause them to make decisions we don’t like. If you phrase an opposition to headhunting as “don’t make our staff aware of opportunities they might freely decide to pursue over their current job”, I think it sounds a lot more dubious as an organisational philosophy for an ostensibly altruistic organisation—it strongly suggests that management don’t have their employees’ best interests at heart. • Thanks for the comment- I see where you are coming from.
As noted in a previous reply, I think a lot has to do with how much the headhunter informs vs. convinces. There are a lot of parallels with advertising. Do we think that advertising performs a positive social function? Well, it could if it simply provides information about a new product and allows consumers to make more informed choices. But the advertiser also has incentives to increase sales, so why would we trust them to be truthful and have everyone’s best interests at heart? Headhunters/recruiters have incentives to fill roles, so I don’t think we should assume that they are playing a neutral, information-providing role. • I don’t know nearly enough about headhunting to say anything definitive. But if we think they’re misleading—rather than informing—maybe the argument should be ‘EA orgs shouldn’t use headhunters’ for the reasons you laid out in these comments. It feels counterproductive from the org’s side to trick someone into a job they wouldn’t have taken with full information (*especially* for a community trying to operate with integrity). That seems like a distinct point from ‘EA orgs shouldn’t poach from one another’ (which is what it seemed like the post was about). In general, my prior is that norms should be the same for hiring the EA-employed and the non-EA-employed, whether that’s using headhunting services or not. • Yeah, this also seems right to me. My experiences with headhunters in the broader world have been pretty bad, and many of them seemed pretty shady, so I would definitely dock an EA org a lot of points if I saw them reach out to people with deceptive marketing. • Yes, I at least strongly support people reaching out to my staff about opportunities that they might be more excited about than working at Lightcone, and similarly I have openly approached other people working in the EA community at other organizations about working at Lightcone.
I think the cooperative atmosphere between different organizations, and the trust that individuals are capable of making the best decisions for themselves on where they can have the best impact, is a thing I really like about the EA community. • Thanks for the comment- I understand where you are coming from, and see how this could go either way. But I think I’d tend to disagree. I’m always happy for people to be aware of other opportunities and consider them, but I think there’s a difference when there are paid professionals targeting specific people to switch jobs. These professionals tend not just to inform, but also to convince. So in the case of a job switch, you end up with a situation where the recruiting organization gains, the recruited-from organization loses, and the actual job-seeker perhaps gains, but this isn’t totally clear; it depends on how much their decision was motivated by information vs. convincing. And there’s a deadweight loss from the salary of the headhunter. Therefore, I think that the net effect of a headhunter could be positive or negative. Certainly it seems like they would have a higher impact if they recruited people from low-impact orgs to move to high-impact orgs. • I don’t know, this sounds to me like treating employees at EA organizations as children who have to be protected from “convincing misinformation”. My employees are totally capable of handling headhunters trying to convince them, and I think most other people in EA are too. These people are not children, and it’s not my right or job as an employer to protect them from harmful-to-me-seeming information, especially when I am obviously in a massive conflict of interest with regard to that information. • Perhaps obvious, but while I agree that your employer should not make it their business to protect you from misinformation of this kind, I still think that anyone who spread genuinely “convincing misinformation” would be doing something wrong and should stop.
(I’m not necessarily expecting people to agree on whether a given headhunting pitch is misinformation or not, but in cases where it is, that’s obviously a problem.) • Wasn’t part of the general objection early on to Leverage over them appearing to ~headhunt (I don’t know details) from other orgs like MIRI? (That very well may not be part of your issues with them though?) • Indeed, I think that criticism (as well as the criticism that they recruited donors away from other organizations) was quite unjustified (and I contributed somewhat to it a few years ago). • How much would I personally have to reduce X-risk to make this the optimal decision? Shouldn’t this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p>20000/​25B. But this isn’t answering your question of personally reducing x-risk. Also, I think you should calculate marginal expected value, i.e., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns. • Hey, thank you for this comment. We actually started by thinking about P(extinction) but came to believe that it wasn’t relevant, because in terms of expected value, reducing P(extinction) from 95% to 94% is equivalent to reducing it from 3% to 2%, or from any other amount to any other amount (keeping the difference the same). All that matters is the change in P(extinction). Also, in terms of marginal expected value, that would be the next step in this process. I’m not saying with this post “Go work on X-risk because its marginal EV is likely to be X”; I’m rather saying, “You should go work on X-risk if its marginal EV is above X.” But to be honest, I have no idea how to figure the first question out.
I’d really like to, but I don’t know of anyone who has even attempted to give an estimate of how much a particular intervention might reduce x-risk (please, forum, tell me where I can find this). • One thing to consider: it is possible that the ultimate real-world effect of a donation to certain organizations would be increasing the funds flowing back to the FTX bankruptcy estate rather than furthering the donor’s charitable intent. For one, donating any significant funds could increase the organization’s profile as a clawback target. Two, some organizations may need to close down whether you provide some additional funding or not. Three, some organizations could benefit from the cleansing waters of bankruptcy themselves (or from settling any FTX claims first) before being infused with new money. Finally, if you want to donate to an organization with serious clawback risk, you may want to think about ways to structure that donation to minimize creditor risk. All of this is based on my view that prospective donors have no ethical obligations to FTX victims. Your mileage may vary if you hold a different view. • I have a friend in my program (not exactly EA, but EA-curious and a great guy) who has done a good deal of work with an organization that teaches and discusses philosophy with prisoners. If you would like, I can ask him if he would mind being put in touch, as he might have some useful insights/connections. • Yes, that would be great. Thank you! I am new to the forum so I don’t know what the usual level of contact-info sharing is here. Let me know what works best for you. • No problem, welcome to the forum! You can feel free to share whatever you’re comfortable with, but personally I would recommend you don’t post your email address in the comments, as there was recently someone webcrawling the forum for email addresses to send a scam email to.
I would reserve information like that for DMs; my own plan is to DM you his email address if and when he gives me his approval. If you’d prefer, feel free to give me your email address and I can send it to him instead; again, whatever works best for you. • Firstly, I want to flag that this prediction is in strong disagreement with market predictions: the rate on a 20-year treasury is 3.85% as I write this, suggesting that investors do not expect a dramatic increase in inflation. This is in one of the largest, most liquid, and most attended-to markets on the planet, the only competition I am aware of being other US Government bonds. Secondly, the weighted average maturity of US government debt is around five years, to give a concrete value for thinking about how long the US government can have much higher inflation before markets are able to fully react. That’s a moderate amount of time, but even if you say that the US government is willing to accept multiple years of 15% inflation (an extremely bold claim), you could still only get a temporary 50% reduction in the debt without fixing the underlying entitlement issues. Which is why it is very strange that this post assumes as a hard constraint that the US government will fulfill its entitlement obligations. I’m not sure why that is assumed. Faced with the option set “inflation” and “cut Medicare and Social Security”, the government might easily choose to cut Medicare and Social Security. Yes, there have been promises, but they are not very credible. Maybe the inflation target gets set to 3% or 4%, numbers that are still very small, but cuts to the commitments seem as or more plausible as spending expands. Once you drop that assumed constraint, the option set of the government expands to a wide variety of more acceptable solutions. Finally, “Inflation is going to be terrifyingly high any day now: buy gold/crypto/my special security” has been a recurrent promise of financial snake-oil salesmen for decades.
Always be careful when you see people claiming it, particularly if they’re also selling something. Debt fears have a similar pedigree: we might be told to be terrified of 130% now, but I remember back when it was 90%, which turned out to be an Excel error. They might be right this time, but you should look for a lot more than a single analysis without theoretical justification, which relies heavily on datapoints following legendarily expensive wars. In the period since the 1950s, attitudes towards government defaults have shifted. Monarchies act differently from independent central banks. • For the first and second examples you listed, I think they fail the gender-reversal test. If it had been a woman who said she’d arranged a one-on-one with a man because he was handsome, nobody would feel upset. Similarly if a group of girlfriends were privately ranking which of the men in the area they’d most like to sleep with. Interestingly, this actually happened at an EA workplace of mine once. I talked to the people involved and told them how it made me feel. They seemed surprised, then felt guilty, and after some discussion and debate (we are EAs, after all), they decided to not do it anymore. I think this was more just a matter of low EQ and not thinking things through, rather than an objectification of women. • If it had been a woman who said she’d arranged a one-on-one with a man because he was handsome, nobody would feel upset. Personally, I would be extremely upset, and report it to the community health team. • I notice I’m confused. If a woman said “Ooh, he’s attractive. I should set up a one-on-one with him”, you would report that to the community health team? Why? This seems like ordinary and harmless behavior. Maybe not the most strategic way to have a good conversation or get a good long-term partner, but hardly a threat to the community. (Although this is only one sentence.
Maybe she only made this judgment after seeing that he had oodles of EA Forum karma, which is obviously the only correct way to evaluate mate quality 😛 ) I’m struggling to understand the thought process that would lead to this being reported. • As Kirsten mentioned, the context of it being an EA conference is key. If a woman said “Ooh, he’s attractive. I should set up a one-on-one with him”, you would report that to the community health team? I would assume it was a joke; if she was serious I would tell her not to; if she did it I would report it. Why? Because EAG(x) conferences exist to enable people to do the most good, conference time is very scarce, and misusing a 1-1 slot means someone is missing out on a potentially useful 1-1. Also, these kinds of interactions make it much harder for me to ask extremely talented and motivated people I know to participate in these events, and for me to participate personally. For people who really just want to do the most good, and are not looking for dates, this kind of interaction is very aversive. This seems like ordinary and harmless behavior. Thankfully, in my experience, it’s not ordinary; the vast majority of people schedule 1-1s at EAGs to discuss ways to do more good. Also, as we can see from these posts and my personal reaction, it’s not always harmless. I really value EAG time! I really don’t want to ask my most altruistic and talented friends to come to EAGs and then have them get hit on, especially young ones who are choosing careers! There are other conferences and meetups for people who are looking for that. • I’m sure other people have answers about why they’d prefer not to have people book meetings based on attraction, but I’d like to say I support this kind of thing being reported to the Community Health team. The EAG team have repeatedly asked people not to use EAG or the Swapcard app for flirting.
1-1s at EAG are for networking, and if you’re just asking to meet someone because you think they’re attractive, there’s a good chance you’re wasting their time. It’s also sexualizing someone who presumably doesn’t want to be, because they’re at a work event. Reporting this kind of breach of EAG rules seems entirely appropriate! https://twitter.com/amylabenz/status/1558435599668895745?s=46&t=unZ0UrHR9pJNN03keeNXcw • Thanks for sharing this. I think people have a tendency to overgeneralise about what “men” or “women” care about when having these conversations. • For what it’s worth, I’m a 30-year-old woman who’s been involved with EA for eight years, and my experience so far has been overwhelmingly welcoming and respectful. This has been true for all of my female EA friends as well. The only difference in treatment I have ever noticed is being slightly more likely to get speaking engagements. Just posting about this anonymously because I’ve found these sorts of topics can lead to particularly vicious arguments, and I’d rather spend my emotional energy on other things. • Supporting the community with this new competition is quite valuable. Thanks! Here is an idea for how your impact might be amplified: for every researcher who somehow has full-time funding to do AI safety research, I suspect there are 10 qualified researchers with interest and novel ideas to contribute, but who will likely never be funded full-time for AI safety work. Prizes like these can enable this much larger community to participate in a very capital-efficient way. But such “part-time” contributions are likely to unfold over longer periods, and ideally would involve significant feedback from the full-time community in order to maximize the value of those contributions. The previous prize required that all submissions be of never-before-published work. I understand the reasoning here: they wanted to foster NEW work.
Still, this rule throws a wet blanket on any part-timer who might want to gain feedback on ideas over time. Here is an alternate rule that might have fewer unintended side effects: only the portions of one’s work that have never been awarded prize money in the past are eligible for consideration. Such a rule would allow a part-timer to refine an important contribution with extensive feedback from the community over an extended period of time. Biasing towards fewer, higher-quality contributions in a field with so much uncertainty seems a worthy goal. Biasing towards greater numbers of contributors in such a small field also seems valuable from a diversity-of-thinking perspective. • The headlines on this one would be a nice distraction :) • As a few people have mentioned, I have a very different financial background than SBF in that everyone knows how I became wealthy. In cinematic detail, no less! Moreover, as an American citizen and California resident, audits aren’t just a fun intellectual exercise for me—they really happen 😲. Recently (culminating in 2021), the California Franchise Tax Board audited my 2016-2018 filings. Here’s what my accountant wrote me about that: they have issued a “No Change” report. This audit was so extensive they literally ticked and tied every number on your tax return back to original source documents. We responded to a number of Information Document Requests (IDRs) and had over 20 video calls with the FTB team to talk them through various reporting positions on investments you hold. In talking with the lead agent on our audit conclusion call, he said “we didn’t even find a rounding error……….great work on the tax reporting.” For tax filings of this size this is the Super Bowl victory for us geeky accountants.
We run self-audits every year to prepare for the real ones, and I generally see no benefit to trying to get away with cheating at wealth generation when there are many legitimate options in front of me. (e.g. when I’m not fishing for karma on Twitter, I run the enterprise software company Asana. Check it out!) Would an independent auditor go deeper somehow than the govt who’s trying to find fraud and generate revenue? I guess so, maybe? Maybe you’ll trust that I didn’t fabricate the above? I’m open to it, but generally skeptical it’s actually adding that much. Perhaps PR self-oppo (like politicians do) would be a more novel exercise. • 2 Dec 2022 17:51 UTC 21 points 8 ∶ 0 Thank you for having the courage to say this out loud. Just remember that the “EA community” does not have a monopoly on doing good. If the atmosphere is toxic, people are just going to leave, and rightfully so. You will be better off, and do more good overall, in environments and spaces where you feel respected and safe. If EA is not committed to such a space, it will slowly die off, and be replaced by an organisation that is. • In a world where the EA culture is bad, but where no-one else is really doing any better, we may not be able to be replaced in this way, and it becomes even more important to ensure that we get the culture right here. • I think an attitude of irreplaceability can be dangerous: someone could easily make the mistake of thinking that bad apples need to be protected and covered up for in order to preserve the movement as a whole. (I certainly hope nobody thinks this way here, but this has happened before in other movements.) In truth, the ideas aren’t going away. Individual people can be replaced, and new groups can form in the event of a blowup. Try and fix the culture, sure, but if it’s too far gone, don’t be afraid to blow the whistle and blow it up; in the long term it’s healthier. • That’s a true point—but I don’t think it’s a good objective.
EA should strive to be made up of the best people, highly aligned with doing good, and I think we need a culture that prioritises people’s lived experiences, feelings, and interactions for that to happen. • Of course. I’m not involved in any irl EA communities so I can’t really judge how bad/good they are. It would be better if the current movement survived with good community norms in place, but if it doesn’t, it’s not the literal end of the world; a new, better community will replace it. • Man this is one of the best posts I’ve ever read on the forum. Extremely educational while remaining very engaging (rare to find both). Thank you for writing this, I hope you’ll do similar write-ups for other research you do! • Can’t wait for EAGuantanamo! Kidding of course, but I’m not sure how valuable it’d be given how difficult it is for former convicts to get jobs (e.g. low expected earnings to contribute to high-impact charities down the line). But for groups doing work on recidivism and the like, I do hope they are recruiting out of pools of ex-cons to really understand what the problems are that folks face. • Maya—thanks for a thoughtful, considered, balanced, and constructive post. Regarding the issue that ‘Effective Altruism Has an Emotions Problem’: this is very tricky, insofar as it raises the issue of neurodiversity. I’ve got Aspergers, and I’m ‘out’ about it (e.g. in this and many other interviews and writings). That means I’m highly systematizing, overly rational (by neurotypical standards), more interested in ideas than in most people, and not always able to understand other people’s emotions, values, or social norms. I’m much stronger on ‘affective empathy’ (feeling distressed by the suffering of others) than on ‘cognitive empathy’ (understanding their beliefs & desires using Theory of Mind). Let’s be honest. A lot of us in EA have Aspergers, or are ‘on the autism spectrum’.
EA is, to a substantial degree, an attempt by neurodivergent people to combine our rational systematizing with our affective empathy—to integrate our heads and our hearts, as they actually work, not as neurotypical people think they should work. This has led to an EA culture that is incredibly welcoming, supportive, and appreciative of neurodivergent people, and that capitalizes on our distinctive strengths. For those of us who are ‘Aspy’, nerdy, or otherwise eccentric by ‘normie’ standards, EA has been an oasis of rationality in a desert of emotionality, virtue-signaling, hypocrisy, and scope-insensitivity. Granted, it is often helpful to remind neurodivergent people that we can try to improve our emotional skills, sensitivity, and cognitive empathy. However, I worry that if we try to address this ‘emotions problem’ in ways that might feel awkward, alienating, and unnatural to many neurodivergent people in EA, we’ll lose a lot of what makes EA special and valuable. I have no idea how to solve this problem, or how to strike the right balance between welcoming and valuing neurodiversity, versus welcoming and valuing more neurotypical norms around emotions and cognitive empathy. I just wanted to introduce this concern, and see what everybody else thinks about it. • Thank you very much for your perspective! I recently wrote about something closely related to this “emotions problem” but hadn’t considered how the EA community offered a home for neurodivergent folks. I have now added a disclaimer making sure we ‘normies’ remember to keep you in mind! • Throwing out one possible approach: 1. People think about where they have blindspots around reading certain styles of writing, and acknowledge that in those areas, they may not get the point being made, even if there is an important point. 2. When someone makes a post that communicates in a way that you identify as your blindspot, you think about whether you can respond in the same style that they communicated. 3.
If you can—do so. If you can’t—you don’t have to respond to the post at all. This is the crux of my suggestion. If you just see the world differently from someone else, so much so that responding to it would involve a clash of your worldviews, it’s okay to just leave it alone. I think “let it go” is an undervalued approach on every internet forum, and especially so here. That’s my best guess at a strategy that works both for someone who systematizes a lot reading an “overly” emotional post, and for someone who systematizes very little reading an “overly” analytical post. But I agree this is something of a wicked problem and we need some way to tackle it. In the absence of an explicit approach, I think the OP is right to point out that people will just respond in an analytical way to emotional posts and that may not help anyone at all. • I really appreciated your comment and think it’s important to acknowledge and ensure neurodiverse people feel welcome, and I’m coming from a place where I agree with Maya’s reflections on emotions within EA and am neurotypical. Not sure I have time to post my thoughts in depth, but I think the rational vs. intuitive emotional intelligence tension within EA is something worth a lot more thought. It’s a tension/trade-off I’ve picked up on in the EA professional realm: where people aren’t getting on in EA organisations, where people aren’t feeling heard, and where the working culture becomes one driven more by fear of losing status (a threat mindset) than by support, to the detriment of employees. Maybe as a counter to what you’re saying, some of the people who helped me best own and articulate my emotions (in the context of another EA repeatedly undermining me) are Bay Area rationalist EAs who you might describe as neurodivergent. Why? I think a lot of people from that community have just done the work on themselves to recognise emotions in themselves, and consequently in others.
And this is driven by valuing emotions/internal worlds intrinsically—in that integrating head and heart way you write about—and then getting better in that domain. So to link this back to Maya’s post: 1. agree with making sure EA is truly inclusive and, in being better at responding to emotions and traumatic experiences, doesn’t swing to excluding neurodivergent people, 2. I think this tension/trade-off goes beyond the social realm, and into the professional, and 3. I would like to play up how many neurodivergent people—especially those who might instinctively behave in a way that creates the culture Maya has highlighted as problematic—can actually be really good at creating an emotionally responsive and caring environment. Happy to discuss further time permitting (which is sadly not on my side!) • howdoyousay—thanks for this supportive post. I agree that many neurodivergent people can develop quite a good set of emotional skills (like some of your Bay Area rationalists did), and can promote emotionally responsive and caring environments. (When I teach my undergrad course on ‘Human Emotions’—syllabus here—one of my goals is to help neurodivergent students improve their understanding of the evolutionary origins and adaptive functions of specific emotions, so they take them more seriously as human phenomena worth understanding.) My main concern is that EA should not become just another activist movement where emotions override reason, where ‘lived experience’ gets prioritized over quantitative data, and where neurodivergent people get cancelled, shunned, and stigmatized for the slightest violations of social norms, or for ‘offending’ neurotypical people. You’re right that striking the right balance is worth a lot more discussion—although my sense is that, so far, EA as a community has actually done remarkably well on this issue!
• 2 Dec 2022 16:57 UTC 16 points 5 ∶ 1 I’ve worked to pitch (and in some cases, been the target of) investigative pieces for the last 15 or so years of my life, and honestly, nothing here strikes me as particularly troubling. These are routine errors in communication, or cognitive biases (e.g., salience bias), and probably not indications of any sort of wrongful conduct. • Misstatements about Sam’s frugality. It’s possible there was an effort to mislead, but seems more plausible that this was just salience bias. A billionaire driving a Corolla is salient; owning a luxury condo in the Bahamas is not. Unless there is evidence that people actively misled the reporter, this is not particularly notable to me, other than reminding us all not to fall victim to that bias. • Warnings about Sam’s misbehavior. Say someone causes others to find them completely ethical in 99.9% of all interactions. That’s a very high rate! But once one becomes prominent, even a very high rate of perceived ethical behavior will lead to a high number of warnings because the number of interactions increases exponentially. Every prominent person has at least some people saying, “They’re unethical.” This is often because the prominent person merely turned a request down, when many requests are being made. (I am much less prominent than Sam but have had this happen to me many times.) I find the failure to respond to the exceedingly vague allegations against Sam unremarkable. • Sam’s contradictions on malaria nets. You can read this as a lie. You can also read this as changing sentiments. We all contradict ourselves, and this is especially true when we are talking about highly speculative questions, such as cause prioritization, where our evidentiary basis is often quite slim, and where much depends on relatively small differences in the assumptions we make (e.g., a small change in the probability of hostile AGI). It is possible Sam is lying about his commitment to malaria nets to cover up his crimes. 
It’s also possible that he just changed his mind, or at different moments, has a different emotional and rhetorical commitment to various causes. To give another example, Sam once stated that he was very committed to animal protection. Over time, he shifted his commitments and seemed more focused on concerns such as AI. I don’t see that as a lie, even though I disagreed with it. It’s just change. The main thing that I would find concerning in this piece is the excessive focus on PR by EA leaders. Don’t focus on PR. Focus on trying to get a true and accurate account out there in the media. It’s very hard to manipulate or even strategize about how to portray yourself. It’s much easier to be real, because you don’t have to constantly perform. That should be a norm within EA, especially among leaders. • I think your point about the various “warning flags” is well-taken. Of course, in retrospect, we’ve been combing the forums for comments that could have given pause. But the volume of comments is way too large to imagine we would have actually updated enough on a single comment to make a difference. That said, I think the mass exodus of Alameda employees in 2018 should have been a bigger warning flag, cause for more scrutiny on the business, to the extent where those with a concern for the risks should have tried to dig deeper on those employees, even with the complications that NDAs can pose. We can’t say we weren’t aware of it—that episode even made it into SBF’s fawning 80k interview, albeit mostly framed as “how do you pick yourself up after hardships?”. The best case scenario conclusion of such an investigation very likely wouldn’t have been “SBF is committing massive fraud” especially as that might not have happened until years later. 
But I think it still would have been useful for the community to know that SBF had a reckless appetite for risk, so we could anticipate at least the potential for FTX to just outright collapse, especially as the crypto industry turned sour earlier this year. • 2 Dec 2022 16:46 UTC −15 points 4 ∶ 17 I am unpleasantly surprised by what I hear in this and other posts that make it sound like there is a sort of EA “party scene” or something. It sounds like EA men (and maybe others) need to focus a lot more on working their butts off every single day to help others, and a lot less (or ideally not at all, but that can be tough) on trying to get laid/​dating. • Yeah, how dare they not dedicate every waking hour to helping others-the audacity! In all seriousness, pretty sure the problem here are things like people who don’t respect others’ boundaries/​not recognizing power dynamics, the culture that normalizes this, and the institutions that don’t adequately mitigate this risk, not the fact that people trying to do good in their life can also spend parts of their life socializing. • In all seriousness, pretty sure the problem here are things like people who don’t respect others’ boundaries/​not recognizing power dynamics, the culture that normalizes this, and the institutions that don’t adequately mitigate this risk That goes without saying. • 2 Dec 2022 16:31 UTC 17 points 9 ∶ 0 I feel the way you do. I feel your pain. Hugs and solidarity. • An extreme extension of utilitarian, rationalist, and effective altruist logic can blind us to the negative experiences of individuals and major flaws in the EA community. 
I fear that people within the EA community are not always taking allegations of harm seriously out of concern that (1) there is “more impactful” work that they could be doing than investigating such allegations, (2) investigating allegations of harm against prominent individuals may damage the reputation of Effective Altruism, and (3) some individuals are having such a “high impact” that they don’t want to find them guilty of an act that may impede such effective work. I overall agree with the ideas presented in this post and I think they deserve more attention. I think the above part is especially true. It’s true that discriminatory tendencies in a community doing good don’t erase its overall positive impact. HOWEVER. It does, as you state, exclude some people from helping. And if that “some people” is 50% (in some countries more) of college graduates, that seems like a really big problem. Thank you for writing this! • 2 Dec 2022 15:30 UTC 10 points 0 ∶ 0 Thank you for the important post! “we might question how well neuron counts predict overall information-processing capacity” My naive prediction would be that many other factors predicting information-processing capacity (e.g., number of connections, conduction velocity, and refractory period) are positively correlated with neuron count, such that neuron count is pretty strongly correlated with information processing even if it only plays a minor part in causing more information processing to happen. You cite one paper (Chittka 2009) that provides some evidence against my prediction (based on skimming the abstract, this seemed to be roughly by arguing that insect brains are not necessarily worse at information processing than vertebrate brains). Curious if you think this is the general trend of the literature on this topic? • I think this post is super valuable. The following is an illustration of my endorsement. Humor has much more to do with error culture than is generally assumed.
People who have a sense of humor put distance between themselves and the things they work on, and they don’t immediately collapse if a mistake is made. “It is a damn serious thing to be funny,” and by this they mean not only that it takes a lot of brain power to invent a good joke, but that good humor is based on deep and balanced seriousness. We need more of the latter, and a little less of the cramped ambition to point out every blunder to others. Humor is an unheard-of advantage in politics, because it can be used to say many things that would be insulting if said seriously. You probably know the story of Winston Churchill, who considered his French to be quite passable, while French people who listened to him spoke without hesitation of a “massacre of the French language”. De Gaulle later wrote that he learned English listening to Churchill speak French. In any case, Churchill had a sense of humor, and he used it as often as he used his bumpy French. To make his point particularly forceful, he did not simply ask de Gaulle to give way to British troops in Africa, but stated curtly, “Si vous m’obstaclierez, je vous liquiderai.” And by humor I don’t mean a dull “permanent grin” or “making fun” at the expense of others, but the ability to laugh about the small and big shortcomings of life. Those who have a sense of humor can laugh at themselves. You wouldn’t believe how many people take themselves insanely seriously, and how much more pleasant it is when someone doesn’t take themselves so seriously for once. Don’t you feel the same way: in the long run, there is hardly anything more annoying and boring than colleagues who creep along the walls all year in a rainy state of listlessness and put on an expression for no reason, as if they were carrying a sign in front of them that says: “Anyone who wants to get along with me must first reveal the dark secret of my thoughtfulness.
For I am insanely clever and not a simpleton like the good-humored majority in the room I am in at the moment.” Of course, you can get smarter without humor. Newton was anything but funny, and still brilliant. Schopenhauer was profound, but certainly not known for his laughing fits. Henrik Ibsen was an extremely creative mind and yet not a paragon of hilarity, quite the opposite. But the reverse conclusion is also not correct: just because you put on a wrinkled face and turn up your nose on principle, you are not yet insanely clever. I prefer Benjamin Franklin, who was so amused by the stilted titles of the scientific papers of his time that in a letter to the Royal Academy of Brussels he philosophized just as turgidly about the disadvantages of farting and proposed a prize for the discovery of a pill “that shall render the natural discharges, of wind from our bodies, not only inoffensive, but agreeable as perfumes”. • Thank you for this perspective! Once again, I think you’ve expressed better than I did the connection between humor and humility. I love all of your historical examples as well; it has me thinking that, rather than following current examples of comedy, looking further back in the past might be an even more fruitful approach to getting inspiration for EA-applicable humor that has stood the test of time. • Great write-up! Sometimes establishing policy on a local level can help build momentum to scale it up. In DC, one similar policy we are advocating for is to have the Mayor establish an Animal Welfare Liaison. We have used candidate questionnaires to gauge interest in such an office. You can see an example of some results here: https://dcvfa.com/who-are-the-best-at-large-candidates-on-animal-issues/ • Excited to give this a listen. Anyone else listen to this podcast? I love this podcast, but this episode was a tough listen. Really made me think.
• Hey Maya, I’m Catherine, one of the contact people on CEA’s community health team (along with Julia Wise). I’m so so sorry to hear about your experiences, and the experiences of your friends. I share your sadness and much of your anger too. I’ll PM you, as I think it could be helpful for me to chat with you about the specific problems (if you are able to share more detail) and possible steps. If anyone else reading this comment has encountered similar problems in the EA community, I would be very grateful to hear from you too. Here is more info on what we do. Ways to get in touch with Julia and me: • 2 Dec 2022 14:41 UTC 12 points 6 ∶ 0 Thank you for writing this. I’m sure it was very difficult to do, and so I really appreciate it. Effective altruism has an emotions problem. I strongly agree with this. Have you seen Michel’s post on emotional altruism? It doesn’t get to your points specifically, but it similarly speaks to the need for more open emotion in the movement. I also want to add something that, in my experience, cannot really be ignored when speaking about the expression of emotion in particular in EA. EA has a lot of people who are on the autism spectrum who may relate to emotion differently, particularly in the way they speak about it. There are others who aren’t on the spectrum but similarly have a natural inclination to be less or differently publicly emotional. EA/rationality can feel rather welcoming (like “my people”) to people like this (which is good—welcomingness of people whose brains work differently is good) and this may produce a feedback loop. This is not at all to deny your recommendations on this particular area. Rather, it is to acknowledge that some proportion (far from all) of what you call the “emotions problem” is probably just people being themselves in a way we should find acceptable, which means that I am a bit more confused about how to best address it.
• thanks for pointing this out—I think this is a key point AND I think it is inflected by gender. My guess (not being an expert on autism, but being somewhat of an expert on gender) is that women who are autistic are more likely to learn, over time, how to display and react to emotion “like normal people”, because women build social capital through relational and emotional actions. Personal experience (I am a woman, to a first degree approximation): as a child I did not really understand emotion /​ generally felt aversive when other people expressed it. Over time I learned how to feel /​ respond to others’ emotions in a socially normative way, through observation and self-reflection and learning. This is not to say that those of us in EA who are naturally different w.r.t. our emotional processing should feel bad/​abnormal, but to say that EA would be a more welcoming community, especially to women, if people in EA learned how to process and respond to “normative” emotional expressions. Someone above said that EAs see debate as an expression of caring, and I (a) am the same way and (b) understand that most people are not! I’ve learned to ask “are you looking for discussion and finding solutions together, or are you not ready for that yet?” (Similarly, people with more normative emotional expression entering EA should learn to ask/​adapt to the person they’re talking to.) I’ve been in spaces that I think are very good at this and have a cultural norm of it. • Hi Nick—thanks for the thoughtful post! I think cash arms make a lot of intuitive sense, my main pushback would be a practical one: cash and intervention X will likely have different impact timelines (e.g. psychotherapy takes a few months to work but delivers sustained benefits, perhaps cash has massive welfare benefits immediately but they diminish quickly over time). 
This makes the timing of your endline study super important, to the point that when you run the endline is really what determines which intervention comes out on top, rather than the actual differences in the interventions. I have a post on this here with a bit more detail. Your point on the ethics here is an interesting one; I agree that medical ethics might suggest “control” groups should still receive some kind of intervention. Part of the distinction could be that medical trials give sick patients placebos, which control patients accurately believe might be medicine, which feels perhaps deceptive, whereas control groups in development RCTs are well aware that they aren’t receiving any intervention (i.e. they know they haven’t received psychotherapy or cash), which feels more honest? The downside is this changes the research question from “What is the impact of X?” to “How much better is X than cash?”, and there are lots of cases where the counterfactual really would be inaction. A way around this might be to give control groups an intervention that we know to be “good” but that doesn’t affect the specific outcome of interest. e.g. I’ve worked on an agriculture RCT that gave control groups water/sanitation products that had no plausible way to affect their maize yield but at least meant they weren’t losing out. This might not apply to broad measures like WELLBYs. I’m honestly not sure about the ethical side here though; interested to explore further. • Thanks so much Rory, and for the links to your earlier post and the USAID stuff! I think your criticism is a good criticism of RCTs in general, but it seems to me more a criticism of RCT design than a clear argument against comparing with cash transfers. RCTs on development NEED longer-term outcome measurement, and surely need at a minimum 2 data points at 2 different times after the study.
And of course the most important data point is after many months or even many years, as you talked about in your article. I’m not at all sure about the ethical side either. Medical RCTs compare a new trial treatment against the most up-to-date treatment—not so much because we worry about “tricking” a patient like you say (there are still plenty of RCTs with sugar placebo pills, which is deemed ethically OK); we are still OK with a kind of ‘deception’. What we AREN’T OK with is doing a trial where we give the control arm nothing at all, when we know there is a better option than nothing for the medical condition. And I’d argue that cash is usually a better option than nothing for many development conditions. That’s a great and sobering point about the counterfactual potentially being inaction if cash transfers won the day. Why should the counterfactual be inaction though? I would hope as development people we are good enough that if cash was equivalent or better than intervention X, this wouldn’t lead us to inaction but instead to giving more cash. Maybe I’m naive and idealistic though, and maybe you’re right that there is actually a practical advantage in seeing a positive impact of intervention X, even if it is worse than a cash transfer. I don’t think that should be the case though. That’s the whole question really—should we spend our millions on RCTs asking “What is the impact of X?”, or “Is X better than cash?”. What we really want to know, the practical question which underlies the research question, is “Should we be implementing this intervention at scale?”. I’d argue that to answer that, the question vs. cash is the one that matters more. Thanks so much for your reply; I can see you’ve thought about this far more than me, and I loved your original post—weird that searches on the forum didn’t bring it up, maybe they should employ Google search on the site haha. • I think a lot of these tips will suit everyone who plans to speak publicly.
Except for audio equipment of course :) • I am open to the idea that private Slack messages can be shared to make a point, but it seems that someone just shared a tonne of them (they range across Rob’s comments, FTX early warnings, etc.) and I dislike that—it damages people’s ability to communicate freely if they think a chunk of messages are gonna get shared. • Population growth is an existential risk. The HANDY model shows many regimes where self-organizing systems grow to the point of catastrophic failure. These models attempt to explain the fall of prior isolated civilizations, including the complete loss of population, such as Easter Island, and the loss of civilization, such as the collapse of Teotihuacan. https://www.sciencedirect.com/science/article/pii/S0921800914000615 It would be worthwhile to factor in such risks. • Truly a sensitive subject this is. Just read this post twice in a row. Thank you! It would be great to see a few real-life experiences mentioned in the text too. • EA: We should never trust ourselves to do act utilitarianism; we must strictly abide by a set of virtuous principles so we don’t go astray. Also EA: It’s ok to eat animals as long as you do other world-saving work. The effort and sacrifice it would take to relearn my eating patterns just isn’t worth it on consequentialist grounds. Sorry for the strawmanish meme format. I realise people have complex reasons for needing to navigate their lives the way they do, and I don’t advocate aggressively trying to make other people stop eating animals. The point is just that I feel like the seemingly universal disavowal of utilitarian reasoning has been insufficiently vetted for consistency. If we claim that utilitarian reasoning can be blamed for the FTX catastrophe, then we should ask ourselves what else we should apply that lesson to; or we should recognise that FTX isn’t a strong counterexample to utilitarianism, and we can still use it to make important decisions.
• Thanks for the digest James! • Wow, looks like an empowering experience for a novice writer! Going to check those group writings in my spare time. • Thanks for the info, friends! • Now THIS is truly an original post for this forum. Thanks! Enjoyed reading it, and it expanded my thinking! • Hey Maya! I just wanted to thank you for sharing your experience! I’m sure it wasn’t easy to write it up and it took a lot of courage, and I’m really glad you did it! • I want to agree with this, but I think that if SBF had “gotten away with it” we’d have taken his money, which makes me doubt our sincerity here. It sounds a lot more like “don’t get caught doing fraud” • As a note, while I agree people thought that, via Alameda, FTX was “Using exchange data to trade against their own customers”, the fact that Alameda lost so much money confuses me as to whether this was actually true. • As another crypto insider: • the scam coin thing is true; the entire Solana ecosystem is full of such coins. Many VCs in the space knowingly fund projects they know to be likely scams; Solana took this to an extreme, all backed by SBF funding. People in the space for a while knew they should basically stay away from the whole ecosystem. New users of course would have gotten ***ed. • frontrunning was a common accusation that could be true, and no one cared. I had money on FTX and my reasoning for that was not “these people won’t frontrun me” but rather “so what if these people eat 0.5% fees on each trade, my trades are still positive EV”. No one had an incentive to care about frontrunning because the profits (and general risk tolerance) in crypto are ludicrously high across the board. (Unless you’re a naive retail investor who is unaware how high your risk exposure is when you buy random coins on FTX.) • paying for tweet influencing is common across the board; many Twitter users were caught red-handed by other Twitter users. (Good luck getting successful lawsuits though.)
I don't know if SBF's Solana projects engaged in this, but it would be surprising to me if not even one of them did. And in general, VCs often aren't aware of, and don't care about, all the scammy behaviour the projects they fund are involved in. More generally, there are strong incentives in the crypto space not to call out other investors and projects doing shady stuff, because this reduces the probability you will be invited to funding rounds, given access to dealflow, etc. If you're pursuing earning to give in crypto, you should be aware that staying quiet about all the scams you see is almost a requirement if you want to make big money. (This is assuming, of course, that you're not involved in the scams yourself.) There is not a single rich person in crypto (starting with Vitalik himself, and every exchange CEO) who is unaware of this; most people keep quiet about it and do not name-and-shame individual projects that scam. I myself chose to stay quiet about this for a long time after I left the space because "what if I later realise I want access to the ludicrous amounts of money here". I made a post about it at the time: https://forum.effectivealtruism.org/posts/KPy4yuSsGk4qMwK3g/ea-opportunity-cryptocurrency-and-defi
• (2 Dec 2022 12:05 UTC) I really appreciate the summaries, thank you!
• [deleted]
• I completely agree with this. As a (Americans read: neo) Liberal who thinks the Green movement does far more harm than good, some of the political campaigning I've seen EAs do really puts me off and makes me question the entire movement. SBF's lobbying of politicians in the US is another example of egregious misuse of funds. Until those checks and balances are in place, we should be focusing on directing funds to the most impactful causes. That should be the beginning and end of EA, in my opinion. Politics is almost never the best-ROI approach to anything, using EA's own methodology to calculate impact.
There will of course be exceptions, but I find it hard to believe any amount of money will be better spent trying to influence a government as opposed to buying malaria nets. We also need to avoid thinking of and framing our actions as a group identity. It's to be expected that people come to different and opposing conclusions, even within a movement with clearly stated principles. As such, political action shouldn't be done in the name of the group as a whole.
• Duarte, I agree with your additional points here. FWIW, I was always uneasy with SBF's massive donations to (mostly) Democratic politicians, and with his determination to defeat Trump at any cost, by any means necessary. It just didn't make sense in terms of EA reasoning, values, and priorities. It should have been a big red flag. But I think the lack of political diversity in EA, and many EAs' tacit agreement with SBF's partisan political views, led too many EAs to think it was no big deal that SBF was mixing EA and politics in unprincipled and somewhat bizarre ways. In the future, I think we should have stronger skepticism about anybody who tries to link EA to partisan political activism.
• Thanks for the submission, much appreciated!
• Cheers for the entry, much appreciated! I particularly appreciated:
  • that this looks to an organization outside the EA community;
  • the brief pointers on limitations: they would make it easier for someone to build on this.
  The code formatting could use a bit of work (you can format code in the forum editor), but that's the least important factor.
• Thanks for the entry, much appreciated!
• Some comments about the approach. Heh, reminds me of some past work; in particular, see here.
When you say:

> So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time; if labour is growing faster, it should be positive… Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs.

I'm not sure whether this is the case. In particular, what does this assume about the return to capital and the return to labor? See equation 3 here: how low does r have to be for that conclusion to hold? It's very possible that I am missing something.

> Labour growth is considerably more stable than capital growth, but still volatile, so will be assumed to be a constant rate of 10% with a standard deviation of 5%, with the lower bound taken as the mean due to difficulties in higher growth rates (30% would imply that 4% of the world would be engaged in EA-relevant work by 2060, which seems highly implausible)

Arguably, labor growth is endogenous, not exogenous, and a function of both labor and capital?

> α will be assumed to be the same level as the economy as a whole, at 0.4.

Why? It's possible that it might be very different, and that this depends on the type of existential risk. E.g., some types of AI safety seem like they can be done while capital-constrained, while some types of biorisk might be particularly capital-heavy (e.g., funding better protective equipment).

> These results counterintuitively imply that the current marginal individual would have substantially higher marginal impact working to expand effective altruism than working on maximising the reduction in existential risk today, with 99.7% confidence

One interesting thing to look at might be under what modelling assumptions this holds. Overall I like the approach. I think that most of the uncertainty is going to come from model error, though.
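The β>γ intuition in the quoted passage can be checked numerically. Below is a minimal Monte Carlo sketch, not the post's actual model: I assume capital grows deterministically, labour grows stochastically at the quoted 10% mean / 5% sd, and a Cobb-Douglas form for output; the function name and all parameter values are illustrative.

```python
import random

def share_negative_discount(beta, gamma_mean, gamma_sd=0.05,
                            alpha=0.4, years=30, trials=10_000, seed=0):
    """Estimate how often the implied discount rate on labour is negative.

    Capital grows deterministically at log-rate `beta`; labour grows
    stochastically at log-rate `gamma_mean` (sd `gamma_sd`). With
    Cobb-Douglas output K**alpha * L**(1 - alpha), the marginal product
    of labour rises with capital per worker, so if K/L ends higher than
    it started, labour is worth more later, i.e. the discount rate with
    respect to time is negative.
    """
    rng = random.Random(seed)
    negative = 0
    for _ in range(trials):
        log_k = beta * years                      # log(K_T / K_0), deterministic
        log_l = sum(rng.gauss(gamma_mean, gamma_sd) for _ in range(years))
        if alpha * (log_k - log_l) > 0:           # log of (K/L)^alpha grew
            negative += 1
    return negative / trials
```

For instance, `share_negative_discount(0.3, 0.1)` (capital growing much faster than labour) yields a negative discount rate in essentially every trial, while swapping the rates yields essentially none, matching the sign claim; the interesting question raised above is how this changes once labour growth is made endogenous or α varies by cause area.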
• I've heard a number of stories of women feeling uncomfortable in EA spaces, and they sadden me every time.
• (2 Dec 2022 11:38 UTC) FWIW, Richard Pettigrew has written a condensed version of their paper on the EA Forum.
• (2 Dec 2022 11:36 UTC) Hey Maya, I like your post. It has a very EA conversational style to it, which will hopefully help it be well received, and I'm guessing it took some effort. A problem I can't figure out, which you or someone else might be able to suggest solutions to:
  • If I (or someone else) post about something emotional without suggestions for action, everyone's compassionate but nothing happens, or people suggest actions that I don't think would help.
  • If I (or someone else) post about something emotional and suggest some actions that could help fix it, people start debating those actions, and that doesn't feel like the emotions are being listened to.
  • But just accepting actions because they're linked to a bad experience isn't the right answer either, because someone could have really useful experience to share but their suggestions might be totally wrong.
  If anyone has any suggestions, I'd welcome them!
• Maybe one way to address this would be separate posts? The first raises the problems and shares emotions; the second suggests particular actions that could help.
• I think you may be on the right track with how you wrote this comment, actually: taking a moment to let the person know they were heard before switching to problem-solving mode. IMO social media websites should sometimes give users a reminder to do this after they hit the "submit" button, but before their comment is posted. Perhaps the submit button could check whether a particular tag is present on the original post?
• This is well put. I think people can say that debating is their way of trying to care. Not a full solution, but I think people sometimes don't realise this.
• What do you think of ACE's recent recommendations?
• Some suggested remedies.
I know some of these are weird, but I honestly think they are good. Many proposed solutions don't attempt to manage the problem at scale, in a distributed way, or with correct incentives; I think these do:
  • Poll to understand the scale of the problem. Let's know how many people feel this way. Is there a link between this and the number of women in the community? We don't have to guess this stuff; we can just know.
  • People at EAGs can report people who used meetings to try to flirt with them in a way they didn't like. Slowly increase punishments (I suggest probabilistic bans from EAGs, e.g. a 5% chance you are banned for 6 months) until the harms to women are less than the cost of the bans. I like flirting at EAG parties, so I think there is a different tone there, but it seems fine for there to be a high risk, during the day, to flirting with someone who doesn't appreciate it.
    • I like probabilistic bans because most of the time they are just a warning, but they still sometimes have bite.
    • (This image is from the last time we had this discourse. I guess it would replicate in a representative poll. Most women don't want to be flirted with at a conference during the day, though some do. As I say, it seems we should increase the cost of doing so.)
    • People sometimes argue that I'm too harsh on this. But currently I think the harms from people being flirted with who don't want to be are greater than the harms to those who would have their freedom curtailed, so I suggest we try it.
  • I unironically support people who have been harmed gossiping about those who harmed them. If you hear a bad rumour about someone, by all means check it, but I think it's okay to share what someone has said to you. There are costs to this in terms of community trust, so consider carefully whether rumours are true, but I still think we undergossip, tbh.
  • There should be a clear process for what happens around bad behaviour in relation to EAGs in particular, and a way for people to be forgiven for bad behaviour (given credible change, and timescales based on badness). EA should not operate on reasonable doubt, but on balance of harms (and I say this as someone who sometimes falls afoul of this stuff); the harms to all involved matter equally, whereas "beyond reasonable doubt" generally ignores harms to the accuser, imo.
  • Scandal markets. I unironically think there should be a Manifold market on whether any EA above a certain reputation level is found guilty of harassment by an independent investigator. Then people can share their information by betting privately. Investigations happen at random.
    • This sounds mechanistic and weird, but imagine if it was normal: would we remove it? I doubt it.
    • Prediction markets are distributed whistleblowing.
    • "What if the accused manipulates the market?" This increases liquidity and draws attention to the market.
    • "Wouldn't it feel awful/be tasteless for powerful people to have markets on whether they would harass someone?" I think the status quo is worse. I don't mind putting additional burdens on people in positions of power. And I am confident that this would decrease the likelihood of the kind of big scandal that destroys other communities.
    • Because I believe you can't advocate for this without having a market yourself, mine is here.
• I'm worried a lot of this is missing the point, and potentially missing important solutions. I'm going to use EAG for my examples here, as I think it is the strongest case of what I'm describing, but I think my argument generalises to a lot of scenarios and spaces in the EA community. In my mind, there are two competing things going on here:
  1. At an EAG, you are likely to meet people who are at a similar stage in their life to you, who have similar interests, and who are likely to be both intelligent and altruistic, both attractive qualities.
If you meet one of these people, and they feel similarly about you, you could enjoy some flavour of romance together, and it would be mutually fulfilling. Things being mutually fulfilling between parties is self-evidently a good thing.
  2. At an EAG, some people, primarily women, have bad experiences as a result of others' romantic attention. These experiences can range from uncomfortable to traumatic. I think these negative experiences can then be grouped into two further categories:
    a. those that are the result of malicious intent;
    b. those that are the result of power dynamics, and can arise despite positive intentions.
I think your solutions are primarily concerned with the 2a category, and when reading them I was reminded of this comment, which I think puts it better than I could. There are people with malicious intent in every community, and I don't think EA requires any particularly novel solution to deal with them. I agree with Isabel in that I'm also worried that when these threads come up, people will spend their efforts trying to either gauge the size of the problem or theorise the optimal solution, rather than take any meaningful action. I think 2b can be equally as damaging, and more should be done about it. Because EA is such a small, well-resourced community, there are especially strong power dynamics at play between individuals. As discussed in the blog post linked above, the EA community does not have strong boundaries between professional and romantic lives; in fact, it seems especially tolerant of this intermingling. I claim this is a strongly negative thing. If a prospective future employer/grantmaker/"senior leader" starts flirting with me at EAG, even if they are being incredibly respectful and only have good intentions, I am under a lot of pressure to cooperate, even if that's not what I want at all.
If at the start I do genuinely reciprocate that attraction, and we engage in some kind of romantic interaction, and I later change my mind, there is again a huge pressure on me not to leave the arrangement, even though that's what I want to do. I'm not suggesting that EAs shouldn't date one another, but I am suggesting a much stronger acknowledgement of the power dynamics at play, on both an individual and a community level. Due to the lack of community emphasis, I suspect many beneficiaries of power dynamics in these situations do not think of themselves that way, and so may inadvertently do harm (this isn't aimed at you personally; I don't know whether you are or aren't aware of this). It seems plausible to me that this would also help with 2a, as well as make the community feel more inclusive.
• How about an opt-in speed-dating event in the evening? That way the 40+% of women who desire flirts can obtain them, and there is no need or excuse to flirt with people during professional 1-on-1s. If the conference organizers aren't comfortable organizing a speed-dating event, maybe one of the women who wants to be flirted with could step up and organize it unofficially. Could do lottery admission to keep the gender ratio even.
  Edit: An EA matchmaking service is another idea.
  2nd edit: Amanda Askell says she likes ambiguity. Maybe you women should put your heads together on this.
• Scandal markets are a good idea.
• Transparent process with room for forgiveness, but one that considers harms to all parties (rather than underrating the accused).
• Gossip is good.
• Punishments for people who make people uncomfortable at EAGs seem like a good idea.
• Idk about "punishments" exactly; I would like EAG organizers to prioritise preventing harm, rather than acting as a justice system. Preventing harm is sometimes going to mean making clear to people that they should stop doing what they're doing, and sometimes going to mean temporarily or permanently excluding people.
These things look like punishments, but I don't know if I'd describe them as such.
• Polling seems like a good idea.
• When talking about Sam Bankman-Fried, I read a bunch of times the claim that EA failed because it didn't put sufficient effort into checking his background. It might be worthwhile to fund a new organization, ideally as independent as possible from other orgs, whose sole reason for existence is to look into the powerful people in EA and criticize them when warranted. While it might be great if CEA were able to fill that role, they happen to be an org that in the past didn't honor a confidentiality promise when people came to them with criticism of powerful people in EA, and they don't think this was enough of a problem to list it on their mistakes page.
• I very much doubt the reason it won't be made privately available is Pfizer thinking it wouldn't be worth it. More likely it's down to sufficient stock being available in the NHS for the cohort that will be receiving it, and the government not wanting to add more demand, which would increase the cost per dose for the NHS. It's perverse, but a likely consequence of the Beveridge-style universal healthcare system used in the U.K.
• The suggestion is to treat the COVID vaccine like the flu vaccine, and make it free for those who need it most and available to buy for those who don't. Making it available for sale doesn't increase costs to the NHS.
• What's stunning to me is the following:

> There may not have been extended discussions, but there was at least one more recent warning. "E.A. leadership" is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.

Leaking private Slack conversations to journalists is a 101 on how to destroy trust. The response to the SBF and FTX betrayal shouldn't be to further erode trust within the community.
EA should not have to learn every single group dynamic from first principles; the community might not survive such a thorough testing and re-learning of all the social rules around discretion, trust, and why it's important to have private channels of communication that you can assume will not be leaked to journalists. If the community ignores trust, networks, and support for one another, then the community will not form, ideas will not be exchanged in earnest, and everyone will be looking over their shoulder for who may leak or betray their confidence. Destroying trust decimates communities; we've all found that with SBF. The response to that shouldn't be further, even more personal and deep betrayals. I will now have to update against how open I am in discussions with other EAs, which is a shame, as the intellectual freedom, generosity, honesty, and subtlety are what I love about this community; but it seems I will have to consider "what may a journalist think of this if this person leaked it?" as a serious concern.
• Even if you're not concerned about leaks, the possibility of compelled disclosure in a lawsuit has to be considered. So if it would be seriously damaging for information to show up in the New Yorker, then phone, in-person, and Inspector Gadget telegram should be the preferred methods of communication anyway. I definitely appreciate the point about trust; I just wanted to add that people should consider the risks of involuntary disclosure through legal process (or hacking) before putting stuff in writing.
•

> There may not have been extended discussions, but there was at least one more recent warning. "E.A. leadership" is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.
I don't know about others, but I find it deeply uncomfortable that there's an invite-only conference and a private Slack channel where, amongst other things, reputational issues are discussed. For one, there's something weird about, on the one hand, saying "we should act with honesty and integrity" and also "oh, we have secret meetings where we discuss if other people are going to make us look bad".
• The thing that most keenly worries me about this is the lack of openness and accountability. We are a social movement, so of course we will have power dynamics and leadership. But with no transparency or accountability, how can anyone know how to make change?
• I think it's wrong to say there's no transparency or accountability (this isn't to say we should just assume the checks we have now are enough, but I don't think we should conclude that none so far exist). Obviously, for anything actually criminal, proper whistleblowing paths exist and should be used! At the moment, I think even checks like this discussion are far more effective than in most other communities, because EA is still quite small, so it hasn't got the issues of scale that other institutions or communities may experience. On transparency: transparency is a part of honesty, but it has costs, and I don't think it's at all clear in this instance that that cost was remotely required to be paid. Again, this will only cause future discussions to be slower, more guarded, and less honest; the community response to this will similarly decide how much we should guard ourselves when talking with other EAs. As a side point: it's also the case that this instance isn't actual "transparency" but lines fed to a journalist, then selectively quoted and given back to us. The cost of transparency in every discussion at a high level of leadership (for example) is that the cost of new ideas becomes prohibitively high, as everyone can pick you apart, weigh in, misrepresent, or re-direct discussion entirely. Compare, e.g.,
local council meetings with the public and those without, and decisions made in committee vs those made by individual founders. Again, transparency is a part of honesty, but I can put my trust in you, for example, without needing you to be transparent about every conversation you have about me. If, however, the norm is that we expect total transparency of information and constant leaks, then we should expect a community of paranoia, dishonest conversation, and continuous misrepresentation of one another.
• I think you may be assuming that what I am calling for here is much more wide-ranging. There still doesn't seem to be good justification for not knowing who is in the coordination forum or on these leadership Slack channels. Making the structures that actually exist apparent to community members would probably not come at such a prohibitively high cost as you suggest.
• I think it's completely fine for invite-only Slacks to exist and for them to discuss matters that they might not want leaked elsewhere. If they were plotting murders or were implicated in serious financial crime, criminal enterprise, or other such awful, unforgivable acts, then yes, I can see why we would want to send a clear signal that anything like that is beyond the pale and discretion no longer protects you. In that instance, I think no one would object to a breach of trust. However, we aren't discussing that scenario. This is a breach of trust, which erodes honest discussion in private channels. The more this is acceptable in EA circles, the less honesty of opinion you will get, and the more paranoia will set in. Acting with honesty and integrity does not mean opening up every discussion to the world, or having an expectation that chats will leak in the event that you discuss "if other people are going to make us look bad". Never mind the difficulty that arises in then attempting to predict what else warrants leaks, if that's the bar you've set.
• This strikes me as weirdly one-sided.
You're against leaking, but presumably you're in favour of whistleblowing: people being able to raise concerns about wrongdoing. Would you have objected to someone leaking/whistleblowing that, e.g., SBF was misusing customer money? If someone had done so months ago, that could have saved billions, but it would have been a breach of (SBF's) trust. The difference between leaking and whistleblowing is … I'm actually not sure. One is official, or something?
• This fundamentally misunderstands norms around whistleblowing. For instance, UK legislation on whistleblowing does not allow you to just go to journalists for any and all issues; they have to be sufficiently serious. This isn't just for "official" reasons, but because it's understood that trust within institutions is necessary for a functioning society/group/community/company, and norms that encourage paranoia over leaks lead to non-honest conversations filtered through fears of leaks. Even in the event that a crime is being committed, you are expected to go first to authorities rather than journalists; to journalists only if you believe authorities won't assist. In the SBF example, I'd hope someone would have done precisely that. Moreover, to protect trust, whistleblowing is protected only for issues that warrant that level of trust breach; i.e., my comment is that this is a disproportionate breach of trust, with long-term effects on community norms. Furthermore, whistleblowing on actual crimes is entirely different from leaking private messages about managing PR. And, again, it is something one should do first to authorities, not necessarily to journalists! Essentially you are eliding very serious whistleblowing of crimes to police or public bodies with screenshots of private chats about community responses leaked to journalists.
• I'd guess the distinction would be more "public interest disclosure" rather than "officialness" (after all, a lot of whistleblowing ends up in the media because of inadequacy in "formal" channels).
Or, with apologies to Yes Minister: "I give confidential briefings, you leak, he has been charged under section 2a of the Official Secrets Act". The question seems to be one of proportionality: investigative or undercover journalists often completely betray the trust and (reasonable) expectations of privacy of their subjects/targets, and this can ethically vary from reprehensible to laudable depending on the value of what it uncovers (compare paparazzi to Panorama). Where this nets out for disclosing these Slack group messages is unclear to me.
• You don't have to agree with or understand someone to extend compassion! Speaking for myself, sometimes I fear compassion will be used as an attack to push for concessions, so before I outwardly express it, I check whether I agree with the criticisms. In that sense, discussion can be me taking something more seriously, not less. Now, I'm not saying that's helpful, but I do think there are different communication styles at play here. I'm sad to hear that you've felt the way you describe.
• Discussion can be compassionate. Disagreement can be compassionate. In fact, I'd argue that failing to have compassion and empathy for someone making a point is going to pretty seriously impair your ability to engage with the point, and even if you do, you're going to have a hard time communicating about it in a way that will be heard. I think seeing these things as in tension is a mistake.
• I genuinely find this fascinating. I don't think I've ever felt worried that expressing empathy would be used as a push for concessions, and haven't wanted to express it with this intent. I think your experience might be common, though, perhaps among men in particular, and I think we should talk about it more. Thanks for putting this out there.
• Yeah, I think this article is a bit of a case in point. What is the author wanting if not significant changes, and what are many comments rejecting if not discussions of whether that is reasonable?
• I would like a polling question on this. I think that, say, 10-30% of women have had two or more belittling experiences at an EAG, and that's bad. But I read this paragraph and it seems alien to me. What % of women and non-binary folks have this experience in EA?

> I could tell you how tears streamed down my face as I read through accounts of women who have been harmed by people within the Effective Altruism community. I could describe how my fists curled and my jaw clenched as I scrolled through forum comments and Reddit threads full of disbelief and belittlement. I could try to convey the rising temperature of my blood as it boiled; I could explain to you that I could not focus in class for a full two days. But I don't think that I will. I'm unsure that the Effective Altruism community has room for my anger.

• Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.
• I would mainly like it to be easy to fill out so that the results are representative. I think it's pretty easy for surveys like this to end up filled in only by people with the strongest opinions.
• If it's worth anything, I would expect that figure to be between 28% and 43%. (I'm anchoring on your estimate; I would probably have guessed somewhere around 40-55% if I hadn't read your comment.)
• I thought I would surface some of the points from the post and allow people to express opinions on them. I know this can seem off, but I think cheaply allowing us to see what the community thinks about stuff is useful.

> I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities.
> She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some of) her claims and her suicide. What is clear is the pages and pages of Tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.

Agreevote if you think the actions here are, on balance, bad; disagreevote if you disagree.

> I read this blog post and the comments and controversy it generated. The amount of invalidation and general nastiness in the comments (which have since been deleted, so I won't link to them) shocked and saddened me.

Agreevote if you think the nasty comments were from people in the EA community; disagreevote if you think they weren't.

> At an EAG afterparty, an attendee talked about how he scheduled a one-on-one with someone because he found her attractive.

Agreevote if you think this is good/fine; disagreevote if you think it's bad.
• [deleted]
• I have heard 2+ accounts of this (heck, as I've apologised for before, I've done it), so I think it's pretty common. My stance is that EAGs should have a high penalty for making people feel they aren't valued for their work. People can take the risk if they want to, but there should be a high penalty if people are bad at it. Social gatherings and afterparties are, in my opinion, the place to flirt without risking some kind of community sanction.

> He casually mentioned that some of them made a list ranking women in EA in the Bay that they wanted to hook up with.

Agreevote if you think this is good/fine; disagreevote if you think it's bad.
• Imagine the opposite situation: a group of women talking in detail about which men they would want to hook up with.
Agreevote if you think that would be good/fine; disagreevote if you think that's bad.
• (2 Dec 2022 10:12 UTC) What systems/solutions currently exist for "dealing with" misconduct, harassment, or assault after it happens? What systems should exist?
  • I feel some hesitation about solutions that involve handing the power to blacklist or "punish" people to one agency.
  • But it's really hard for individuals to publicly post about other people's problematic behavior.
  • A friend of mine in the EA community told me they had been sexually harassed and stalked by another EA member and were considering posting on social media about it. I encouraged them to do so. They were scared of potential backlash, so they didn't.
  • But I didn't post about it at all. It feels inappropriate for me to do so on someone else's behalf, especially since I'm not particularly wrapped up in Y's life.
  • I wonder if scandal markets could potentially be useful (as Scott Alexander discussed in a recent thread), or something else inspired by them.
  • I think it's plausible that we could use scandal markets on high-profile people in the EA community.
  • Scott writes, and I pretty strongly agree: "I'm tired of bad things happening, and then learning there was a 'whisper network' of people who knew about it all along but didn't tell potential victims. It's unreasonable to expect accusers to come out and make controversial accusations about powerful people on limited evidence. But a prediction market seems like a good fit for this use case."
  • But the majority of the people who do / will do / have done problematic things are unlikely to be high-profile enough to have a scandal market made about them. It seems plausible to me that a well-designed system would be able to effectively deal with the kinds of issues the OP talks about, and with things like financial misconduct.
But I feel like there are a few challenges: • I can’t think of any community that effectively deals with misconduct that isn’t also authoritarian-esque (the CCP comes to mind). (That being said, please comment what other communities or systems exist that effectively deal with misconduct). • And a well-designed system should reflect that different problematic actions require different responses. • I think this points to the weakness of a centralized system: most people agree that things like rape should lead to removal from the community. But a lot of things are debatable (like making a ranked list of women someone wants to hook up with), and if CEA or whoever implemented some response as “the authority”, it would almost certainly be opposed by some for being too lenient and by some for being too harsh. • It almost feels like making public knowledge of these kinds of things is the right thing to do, because then people will react accordingly. • But simply saying “we’re going to publicize every distasteful thing others do so that people can decide for themselves how they should respond” feels bad for a lot of reasons. For one thing, it would erode trust between members if people felt like they might be publicly outed for small infractions. • I actually think people should be complaining to, or even complaining about, the community health team significantly more than they are. People on that team are paid to address problems like misconduct/​harassment/​assault. Complaints like Maya’s should be a key performance metric for them. In my view, there should be a stronger default of people like Maya contacting the community health team to say “hey, I heard about women getting ranked in a way that made me uncomfortable”. And the community health team privately contacting the rankers to say “hey, you aren’t helping our goal of a warm professional community that welcomes a wide variety of people and incentivizes them to care about doing good over being hot”. 
Some might find this draconian—to clarify, I don’t think disciplinary action is justified here. I just think these conversations would be positive expected utility if done well. • Thanks for sharing. I think EAs are only ethically better than other people under consequentialist ethics, but are just as bad as anyone else when it comes to virtues and obeying good social rules, which is sad, because we can and should do better. • I don’t think this is true. Not sure how you’d measure or verify/​refute this, but I suspect that the average EA man objectifies women much less than the average non-EA man. It’s just that we have an imbalanced gender ratio so these incidents are disproportionately concentrated onto few women, which is really unfair to them. • I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide. What is clear is the pages and pages of tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations. I’m one of the people (maybe the first person?) who made a post saying that (some of) Kathy’s accusations were false. 
I did this because those accusations were genuinely false, could have seriously damaged the lives of innocent people, and I had strong evidence of this from multiple very credible sources. I’m extremely prepared to defend my actions here, but prefer not to do it in public in order to not further harm anyone else’s reputation (including Kathy’s). If you want more details, feel free to email me at scott@slatestarcodex.com and I will figure out how much information I can give you without violating anyone’s trust. • “(some of) Kathy’s accusations were false”. Just to draw some attention to the “(some of)”: Kathy claimed in her suicide note that her actions had led to more than one person being banned from EA events. My understanding is that she made a mixture of accusations that were corroborated and ones that weren’t, including the ones you refer to. I think this is interesting because it means both: • Kathy was not just a liar who made everything up to cause trouble. I would guess she really was hurt, and directed responsibility for that hurt to a mixture of the right and wrong places. (Maybe no-one thought this, but I just want to make clear that we don’t have to choose between “she was right about everything” and “she was wrong about everything”.) • Kathy was not ignored by the community. Her accusations were taken seriously enough to be investigated, and some of those investigations led to people being banned from events or groups. Reddit may talk shit about her, but the people in a position to do something listened. (I should say that what I’m saying is mostly based on what Kathy said in her public writings combined with second- or third-hand accounts, and despite talking a little to Kathy at the time I’m missing almost all the details of what actually happened. Feel free to contradict me if something I said seems untrue.) 
• Responding to the attention on Kathy’s specific case (I’m aware I’m adding more to it) - I think it’s a good example of one of the issues Maya brings up. In debating her specific case and the truth of it, we’re detracting from the key argument that the EA community as a whole is neglecting to validate and support community members who experience bad things in the community. In this post, it’s women and sexual assault primarily. But there are other posts (1, 2) exemplifying ways the EA community itself can and should prioritise internal community health. To argue the truth of one specific example might be detracting from recognising that this might be a systematic problem. • Can you link to your post? I’m asking in order to avoid the (probably already existing) situation where people see that “some of Forth’s accusations” are allegedly not true, but they don’t know which, so they just doubt all of them. • If someone has a record of repeatedly making accusations that have been proven false, I think it is reasonable and prudent to “just doubt all” their accusations. This person was clearly terribly ill and did not get the help she needed and deserved. It’s painfully clear from reading her heartbreaking note that she was wildly out of touch with reality. • 2 Dec 2022 16:46 UTC 16 points 16 ∶ 14 Parent edit: after discussion below & other comments on this post, I feel less strongly about the claim “EA community is bad at addressing harm”, but stand by /​ am clarifying my general point, which is that the veracity of Kathy’s claims doesn’t detract from any of the other valid points that Maya makes and I don’t think people should discount the rest of these points. A suggestion to people who are approaching this from a “was Kathy lying?” lens: I think it’s also important to understand this post in the context of the broader movement around sexual assault and violence. 
The reason this kind of thing stings to a woman in the community is because it says “this is how this community will react if you speak up about harm; this is not a welcoming place for you if you are a survivor.” It’s not about whether Kathy, in particular, was falsely accusing others. The way I read Maya’s critique here is “there were major accusations of major harm done, and we collectively brushed it off instead of engaging with how this person felt harmed;” which is distinct from “she was right and the perpetrator should be punished”. This is a call for the EA community to be more transparent and fair in how it deals with accusations of wrongdoing, not a callout post of anybody. Perhaps I would feel differently if I knew of examples of the EA community publicly holding men accountable for harm to women, but as it stands AFAIK we have a lot of examples like those Maya pointed out and not much transparent accountability for them. :/​ Would be very happy to be corrected about that. (Maya, I know it’s probably really hard to see that the first reply on your post is an example of exactly the problem you’re describing, so I just want to add in case you see this that I relate to a lot of what you’ve shared and you have an open offer to DM me if you need someone to hold space for your anger!) • Predictably, I disagree with this in the strongest possible terms. If someone says false and horrible things to destroy other people’s reputation, the story is “someone said false and horrible things to destroy other people’s reputation”. Not “in some other situation this could have been true”. It might be true! But discussion around the false rumors isn’t the time to talk about that. Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in her organization by sleeping her way to the top. 
If this was false, the story isn’t “we need to engage with the ways Bob felt harmed and make him feel valid.” It’s not “the ‘Bob lied’ lens is harsh and unproductive”. It’s “we condemn these false and damaging rumors”. If the headline story is anything else, I don’t trust the community involved one bit, and I would be terrified to be associated with it. I understand that sexual assault is especially scary, and that it may seem jarring to compare it to less serious accusations like Bob’s. But the original post says we need to express emotions more, and I wanted to try to convey an emotional sense of how scary this position feels to me. Sexual assault is really bad and we need strong norms about it. But we’ve been talking a lot about consequentialism vs. deontology lately, and where each of these is vs. isn’t appropriate. And I think saying “sexual assault is so bad, that for the greater good we need to focus on supporting accusations around it, even when they’re false and will destroy people’s lives” is exactly the bad kind of consequentialism that never works in real life. The specific reason it never works in real life is that once you’re known for throwing the occasional victim under the bus for the greater good, everyone is terrified of associating with you. Perhaps I would feel differently if I knew of examples of the EA community publicly holding men accountable for harm to women. This is surprising to me; I know of several cases of people being banned from EA events for harm to women. When I’ve tried to give grants to people, I have gotten unexpected emails from EA higher-ups involved in a monitoring system, who told me that one of those people secretly had a history of harming women and that I should reconsider the grant on that basis. 
I have personally, at some physical risk to myself, forced a somewhat-resistant person to leave one of my events because they had a history of harm to women (this was Giego Caleiro; I think it’s valuable to name names in some of the most extreme clear-cut cases; I know most orgs have already banned him, and if your org hasn’t then I recommend they do too—email me and I can explain why). I know of some other cases where men caused less severe cases of harm or discomfort to women; there were very long discussions by (mostly female members of) EA leadership about whether they should be allowed to continue in their roles, and after some kind of semi-formal proceeding, with the agreement of the victim, after an apology, it was decided that they should be allowed to continue in their roles, sometimes with extra supervision. There’s an entire EA Community Health Team with several employees and a mid-six-figure budget, and a substantial fraction of their job is holding men accountable for harm to women. If none of this existed, maybe I’d feel differently. But right now my experience of EA is that they try really hard to prevent harm to women, so hard that the current disagreement isn’t whether to ban some man accused of harming women, but whether it was okay for me to mention that a false accusation was false. Again in honor of the original post saying we should be more open about our emotions: I’m sorry for bringing this up. I know everyone hates having to argue about these topics. Realistically I’m writing this because I’m triggered and doing it as a compulsion, and maybe you also wrote your post because you’re triggered and doing it as a compulsion, and maybe Maya wrote her post because she’s triggered and doing it as a compulsion. This is a terrible topic where a lot of people have been hurt and have strong feelings, and I don’t know how to avoid this kind of cycle where we all argue about horrible things in circles. 
But I am genuinely scared of living in a community where nobody can save good people from false accusations because some kind of mis-aimed concern about the greater good has created a culture of fear around ever speaking out. I have seen something like this happen to other communities I once loved and really don’t want it to happen here. I’m open to talking further by email if you want to continue this conversation in a way that would be awkward on a public forum. • Thank you, this is clarifying for me and I hope for others. Responses to me, including yours, have helped me update my thinking on how the EA community handles gendered violence. I wasn’t aware of these cases and am glad, and hope that other women seeing this might also feel more supported within EA knowing this. I realize there are obvious reasons why these things aren’t very public, but I hope that somehow we can make it clearer to women that Kathy’s case, and the community’s response, was an outlier. I would still push back against the gender-reversal false equivalency that you and others have mentioned. EA doesn’t exist in a bubble. We live in a world where survivors, and in particular women, are not supported, not believed, and victim-blamed. Therefore I think it is pretty reasonable to have a prior that we should take accusations seriously and respond to them delicately. The Forum, if anywhere on earth, should be a place where we can have the nuanced understanding that (1) the accusations were false AND (2) because we live in a world where true accusations against powerful men are often disbelieved, causing avoidable harm to victims, we need to keep that context in mind while condemning said false accusations. So to clarify my stance: I don’t think it was wrong to mention that the false accusation is false. I think it seems dismissive and insensitive to do so without any acknowledgement of the rest of the post. 
I don’t think it would have hurt your point to say “yes, EA is a male-dominated culture and we need to take seriously the harms done to women in our community. In this specific instance, the accusations were false, and I don’t believe the community’s response to these accusations is representative of how we handle harm.” I think the disconnect here is that you are responding to /​ care about this specific claim, which you have close knowledge of. I know nothing about it, and am responding to /​ care about the larger claim about EA’s culture. I believe that Maya’s post is not trying to make truth claims about Kathy’s case and is more meant to point out a broad trend in EA culture, and I’m trying to encourage people to read it as such, and not let the wrongness of Kathy’s claims undermine Maya’s overall point. (edit: basically I agree with your comment above: if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that.) • Thanks for your thoughtful response. I’m trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance to make people feel unwelcome, or inflicting an unpleasant politicized debate on people who don’t want to read it. This comment is a bad compromise between all these things and I apologize for it, but: I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think “everyone knows” (in Zvi’s sense of the term, where it’s such strong conventional wisdom that nobody ever checks if it’s true) that the typical response to rape accusations is to challenge and victim-blame survivors. 
And that although this may be true in some times and places, the typical response in this community is the one which, in fact, actually happened—immediate belief by anyone who didn’t know the situation, and a culture of fear preventing those who did know the situation from speaking out. I think it’s useful to acknowledge and push back against that culture of fear. (this is also why I stressed the existence of the amazing Community Safety team—I think “everyone knows” that EA doesn’t do anything to hold men accountable for harm, whereas in fact it tries incredibly hard to do this and I’m super impressed by everyone involved) I acknowledge that makes it sound like we have opposing cultural goals—you want to increase the degree to which people feel comfortable pointing out that EA’s culture might be harmful to women, I want to increase the degree to which people feel comfortable pushing back against claims to that effect which aren’t true. I think there is some subtle complicated sense in which we might not actually have opposing cultural goals, but I agree to a first-order approximation they sure do seem different. And I realize this is an annoyingly stereotypical situation - I, as a cis man, coming into a thread like this and saying I’m worried about false accusations and chilling effects. My only two defenses are, first, that I only got this way because of specific real and harmful false accusations, that I tried to do an extreme amount of homework on before calling false, and that I only ever bring up in the context of defending my decision there. And second, that I hope I’m possible to work with and feel safe around, despite my cultural goals, because I want to have a firm deontological commitment to promoting true things and opposing false things, in a way that doesn’t refer to my broader cultural goals at any point. 
• Thanks, I realize this is a tricky thing to talk about publicly (certainly trickier for you, as someone whose name people actually know, than for me, who can say whatever I want!). I’m coming in with a stronger prior from “the outside world”, where I’ve seen multiple friends ignored/​disbelieved/​attacked for telling their stories of sexual violence, so maybe I need to better calibrate for intra-EA-community response. I agree/​hope that our goals shouldn’t be at odds, and that’s what I was trying to say that maybe did not come across: I didn’t want people to come away from your comment thinking “ah, Maya’s wrong and people shouldn’t criticize EA culture.” I wanted them to come away both knowing the truth about this specific situation AND thinking more broadly about EA culture, because I think this post makes a lot of other very good points that don’t rely on the Kathy claims. (And thinking more broadly could include updating positively like I did, although I didn’t expect that would be the case when I made that comment!) You’re probably right that it’s not worth giving much more of a response, but I appreciate you engaging with this! • I’m not too confident about this, but one reason you may not have heard about men being held accountable in EA is that it’s not the sort of thing you necessarily publicize. For example, I helped a friend who was raped by a member of the AI safety research community. He blocked her on LessWrong, then posted a deceptive self-vindicating article mischaracterizing her and patting himself on the back. I told her what was going on and helped her post her response once she’d crafted it via my account. Downvotes ensued for the guy. Eventually he deleted the post. That’s one example of what (very partial) accountability looks like, but the end result in this case was a decrease in visibility for an anti-accountability post. And except for this thread, I’m not going around talking about my involvement in the situation. 
I don’t know how much of the imbalance this accounts for, nor am I claiming that everything is fine. It’s just something to keep in mind as one aspect of parsing the situation. • Thank you, yeah I think I may be overindexing on a few public examples (not being privy to the private examples that you and others in the thread have brought up). Glad to hear that there are plenty of examples of the community responding well to protect victims/​survivors. I still also don’t think everything’s fine, but unsure to what extent EA is worse than the rest of the world, where things are also not fine on this front. • In the cases like this I’ve been most closely involved in, the women who have reported have not wanted to publicise the event, so sometimes action has been taken but you wouldn’t have heard about it. (I also don’t think it’s a good habit to try to maximise transparency about interpersonal relationships tbh.) • Yeah, this is very fair and I agree that transparency is not always the right call. To clarify, I’ll say that my stance here, medium confidence, is: (1) in instances in which the victim/​survivor has already made their accusations public, or in instances where it’s already necessarily something that isn’t interpersonal [e.g. hotness ranking], the process of accountability or repair, or at least the fact that one exists, should be public; (2) it should be transparent what kind of process a victim can expect when harm happens. There’s some literature around procedural justice and trust that indicates that people feel better and trust the outcomes of a process more when it is transparent and invites engagement, regardless of whether the actual outcome favors them or not. I am glad to hear that there have been cases where women have felt safe reporting and action has been taken! 
(edited to delete a para about CEA community health team’s work that I realized was wrong, after seeing this page linked below) • I agree; I’d favour systems that help people feel confident in the outcome even when it doesn’t favour them, and would like to see EA do better in these areas! • Regardless of the accuracy of this comment, it makes me sad that the top comment on this post is adversarial/​argumentative and showing little emotional understanding/​empathy (particularly the line “getting called out in posts like this one”). I think it unfortunately demonstrates well the point the author made about EA having an emotions problem: On the forum in particular and in EA discourse in general, there is a tendency to give less weight to/​be more critical of posts that are more emotion-heavy and less rational. This tendency makes sense based on EA principles… to a certain extent. To stay true to the aforementioned values of scientific mindset and openness, it makes sense that we challenge people’s ideas and are truth-seeking in our comments. However, there is an important distinction between interrogating someone’s research and interrogating someone’s lived experience. I fear that the attitude of truth-seeking and challenging one another to be better has led to an inclination to suspend compassion in the absence of substantial evidence of wrongdoing. You’re allowed to be sorry that someone experienced something without fully understanding it. • I very rarely engage in karma voting, and didn’t do so for this comment either. That said, one relevant point is that the comment with the most karma gets to sit at the top of the comments section. That means that many people probably vote with an intention to functionally “pin” a comment, and it may not be so much that they think the comment should represent the most important reaction to a post, as that they think it provides crucial context for readers. 
I think this comment does provide context on the part of this otherwise very good and important post that made me most uncomfortable as stated. I also agree that Alexander’s tone isn’t great, though I read it in almost the opposite way from you (as an emotional reaction in defense of his friends who came forward about Forth). • To be honest I’m relieved this is one of the top comments. I’ve seen Kathy mentioned a few times recently in a way I didn’t think was accurate and I didn’t feel able to respond. I think anyone who comes across her story will have questions and I’m glad someone’s addressed the questions even if it’s just in a limited way. • I’m glad you made your post about how Kathy’s accusations were false. I believe that was the right thing to do—certainly given the information you had available. But I wish you had left this sentence out, or written it more carefully: But they wouldn’t do that, I’m guessing because they were all terrified of getting called out in posts like this one. It was obvious to me reading this post that the author made a really serious effort to stay constructive. (Thanks for that, Maya!) It seems to me that we should recognize that, and you’re erasing an important distinction when you categorize the OP with imprudent tumblr call-out posts. If nothing else, no one is being called out by name here, and the author doesn’t link any of the tumblr posts and Reddit threads she refers to. I don’t think causing reputational harm to any individual was the author’s intent in writing this. Fear of unfair individual reputational harm from what’s written here seems a bit unjustified. • EDIT: After some time to cool down, I’ve removed that sentence from the comment, and somewhat edited this comment which was originally defending it. I do think the sentence was true. 
By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had, was because they were worried that someone would write a public essay saying “X doesn’t believe sexual assault victims” or “EA has a culture of doubting sexual assault victims”. And they all hoped someone else would go first to mention all the evidence that these particular rumors were untrue, so that that person could be the one to get flak over this for the rest of their life (which I have, so good prediction!), instead of them. I think there’s a culture of fear around these kinds of issues that it’s useful to bring to the foreground if we want to model them correctly. But I think you’re gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that. Although it’s possible that the value gained by saying this true thing is higher than the value lost by potential chilling effects, I don’t want to claim to have an opinion on this, because in fact I wrote that comment feeling pretty triggered and upset, without any effective value calculations at all. Given that it did get heavily upvoted, I can see a stronger argument for the chilling effect part and will edit it out. • Hi Scott, Thank you for both of your comments. I appreciate you explaining why you wrote a post about Kathy and I think it’s useful context for people to understand as they are thinking about these issues. My intention was not to call anybody out, rather, to point to a pattern of behavior that I observed and describe how it made me (and could make others) feel. • Thanks for removing the sentence. 
I’m sorry you’ve gotten flak. I don’t think you deserve it. I think you did the right thing, and the silence of other people “in the know” doesn’t reflect particularly well on them. (Not in the sense that we should call them out, but in the sense that they should maybe think about whether they knowingly let a likely-innocent person suffer unjust reputation harm.) I think there’s a culture of fear around these kinds of issues that it’s useful to bring to the foreground if we want to model them correctly. Agreed. I think the culture of fear goes in both directions. Women often seem to fear making accusations. But I think you’re gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that. Not what I was gesturing at, but potentially valid. My thinking is that attempts to share info “in good faith” should not be punished, regardless of whether that info pushes towards condemnation vs exoneration. (We can debate what exactly counts as “good faith”, but I think it should be defined ~symmetrically for both types of info. I’d like more discussion of what constitutes “good faith”, and fewer implications that [call-outs/​denials] are always bad. I’m open to super restrictive definitions of “good faith”, like “only share info with CEA’s community health team and trust them to take appropriate action” or similar.) In any case, my main goal was to get you to reciprocate what I saw as the OP’s attempt to be less triggered/​more constructive, so thanks for that. • I did not know Kathy well, but I did meet and talk with her at length on a number of occasions in EA/​aligned spaces. 
We talked about cultural issues in the movement and, for what it is worth, she came across as someone of good character, good judgement and measured takes. I am not across the particulars of her accusations and I feel matters like this have a proper place: actual courts, not forums. I don’t think cherry-picked criticisms of her claims are appropriate. I think EA will continue to stumble on this issue, and our downfall as a movement will continue to be handling deontologically or virtuously abhorrent behaviour. I think the author of this forum post has raised points of great importance. In particular, their critique of the style of writing required to be taken seriously and understood in the manner intended is novel. • While this is important (clarifying of misinformation), I want to mention that I don’t think this takes away from the main message of the post. I think it’s important to remember that even with a culture of rationality, there are times when we won’t have enough information to say what happened (unlike in Scott’s case), and for that reason Maya’s post is very relevant and I am glad it was shared. It also doesn’t seem appropriate to describe this post as “calling out”. While it’s legitimate to fear reputations being damaged by unsubstantiated claims, this post doesn’t strike me as doing so. • I want to strong-agree with this post, but a forum glitch is preventing me from doing so, so mentally add +x agreement karma to the tally. [Edit: fixed and upvoted now] I have also heard from at least one very credible source that at least one of Kathy’s accusations had been professionally investigated and found to be without any merit. Maybe also worth adding that the way she wrote the post would in a healthy person be intentionally misleading, and was at least incredibly careless for the strength of accusation. 
E.g., there was some line to the effect of ‘CFAR are involved in child abuse’, where the claim was link-highlighted in a way that strongly suggested corroborating evidence but, as in that paraphrase, the link in fact just went directly to whatever the equivalent website was then for CFAR’s summer camp. It’s uncomfortable berating the dead, but much more important to preserve the living from incredibly irresponsible aspersions like this. • It would be nice to imagine that aspiring to be a rational, moral community makes us one, but it’s just not so. All the problems in the culture at large will be manifest in EA, with our own virtues and our own flaws relative to baseline. And that’s not to mitigate: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can get a lot of money and social clout and use it to survive even after their misbehavior comes to light. I don’t know how to deal with it except to address specific issues as they come to light. I guess I would just say that you are not alone in your concern for these issues, and that others do take significant action to address them. I support what I think of as a sort of “safety culture” for relationships, sexuality, race, and culture in the EA movement, which to me means promoting an openness to the issues, a culture of taking them seriously, and taking real steps to address them when they come up. So I see your post as beneficial in promoting that safety culture. • Hey AllAmericanBreakfast. I’m Catherine from the Community Health team. I’m so so sorry to hear that your friend was raped. If at all possible, I want to make sure they have support, justice, and that the perpetrator doesn’t have the opportunity to do this again. It doesn’t matter if your friend doesn’t identify as EA; if your friend or the perpetrator is involved in the EA community in any way, we’re here to do our best to help. I’ll reach out via PM. • Hey :) I was raped before I was involved in EA.
I normally find these discussions hard and frustrating. I feel we often talk past one another, and the people with similar experiences withdraw because it’s still painful, or they get frustrated and hurt. I would like people like me to know: 1. There are a lot of people who have similar experiences to me who are active in the EA community. You may not see them here because of the aforementioned issue, but we are here. 2. There are a lot of people who take these issues very seriously, including me. 3. I trust and endorse Catherine Low entirely. She has seen it all with me and has been kind, empathetic, and not unilateral. 4. To the extent possible, please consider reporting to Catherine or the community health team, or to the police, or both. Kirsten is entirely right that this is horrifically unfair and you have no obligation to do so, but it is very important that people with a track record of sexual (or any) violence not be in positions of power in any institutions or communities, for the safety of other community members. 5. If there is anything whatsoever I can do, including talking openly about my experiences (I do have a blog draft actually about how I coped with my rape) which I am happy to share, an adamant vouch for Catherine and CEA’s team, or just generally a cup of tea, you should hit me up. • And that’s not to mitigate: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can get a lot of money and social clout and use it to survive even after their misbehavior comes to light. I’m very sad to hear about this. I don’t understand why the community health team is not able to handle this kind of thing. Did your friend make a report? Does the community health team need more funding or employees? Are they afraid to take on people with clout? Even if the accused is doing a lot of good work, if the accusation is found to be credible, at the very least we should ensure the accused does not occupy a position of responsibility.
If they are serious about AI safety, they should agree to this measure themselves, for the sake of guarding humanity’s future. EAs should work to ensure that positions of responsibility are occupied by people of exemplary moral character, in my view. (Edit for clarification: I don’t want my view rounded off to “EAs should work to ensure positions of responsibility are occupied by the people who are hardest to cancel”. For example, my notion of “exemplary moral character” accounts for the possibility that failure to report on false accusations made by Kathy Forth could represent a character deficit, even if such failure-to-report makes one harder to cancel. I also think that everyone is flawed, and ability to recognize and learn from one’s mistakes is really important.) • My friend is not part of EA, she was just at an EA-adjacent organization, where the community health team does not have reach AFAIK. • Seems to me she should be talking to them anyway. • I am confident this comes from a good place but I really really dislike that this comment is telling (the friend of) someone who was raped what she should do. People who have been raped can respond however they want, whether they decide to report the situation or not is entirely up to them, and I hate when people act like there is one correct response. • Thanks Kirsten. I’m interested in understanding your position better. Do you agree there are circumstances under which reporting a crime is the correct response? (Would you agree that an FTX employee blowing the whistle on SBF would be the correct response, for example?) If you can think of at least one scenario where you think reporting a crime is the correct response, maybe you could outline how this scenario differs? (For the purpose of our discussion, I’m assuming that the current crime is serious, unambiguous, and unrepented, constituting significant evidence that the perpetrator will cause major harm to others.) 
My first guess is you think there’s something unique about rape such that the associated trauma means reporting can cause suffering. In that case, this would appear to be a straightforward demandingness dilemma—one’s feeling about the statement “it is correct to report rape” might be similar to one’s feeling about the statement “it is correct to forgo luxuries to donate to effective charities”. In both cases you’re looking at taking on discomfort yourself in order to do good for others. (In my mind the key considerations for demandingness dilemmas are: how much good you’re doing for others, how much discomfort you’re taking on, and what is personally psychologically sustainable for you. And I think saying “Seems to me they should [do the demanding thing]” is generally OK.) Thanks for any thoughts you’re willing to share. • Hi Truck Driver Wannabe, I really appreciate your effort to understand the other side of the argument and I see why you are confused about the reaction. For me, I find the idea that a person has any responsibility whatsoever to involve the CEA community health team in any matter regarding their personal life (including and especially sexual assault) baffling. Reporting to CEA is not obviously net harm-reducing, because a predator who is kicked out of CEA-sponsored events can and will just move to another community and continue their predatory behavior elsewhere. And that is assuming that CEA handles the situation perfectly. I also don’t think a person has such a responsibility to report to law enforcement, only partly because law enforcement has generally not earned a reputation for handling these cases well. If we lived in a different world where law enforcement was more competent in these cases, then I agree this would be a straightforward demandingness dilemma. However, I don’t expect anyone to be publicly retraumatized in the service of helping strangers and I think it is extremely unfair to do so.
Being publicly humiliated, mocked, disbelieved, called names, concern trolled, having every past sexual and romantic encounter up for public scrutiny, and being forced to publicly and repeatedly detail the most horrifying moments of your life is not even almost the same as, say, donating ten percent of your income. All or many of these things often happen to people who report sexual assault to a responsible and thorough law enforcement agency that does all the right things and has ample resources. In general I don’t think it’s that healthy to expect others to give a certain amount of their time or money or anything else. I think we should all set an example in our own lives and be public about why we make the choices we do, but respect that others have the right to choose what and how much they give (emotionally and otherwise). But even if I didn’t believe that in general, I would still believe it in case of sexual assault. • Hey Truck Driver Wannabe (great Forum name by the way) - I’m a medical doctor and have recently completed extra training in helping people who’ve experienced sexual assault. There are no ‘shoulds’ (except that the perpetrator should not have done it). I can’t do this topic justice in a Forum commentary (nor would I want to) but if you’d like to contact me directly, I’m happy to talk to you more about this. • I agree with the overall reasoning for why we need inflation hedges. I also agree that US debt poses a risk, as all debts do, but I would view this risk slightly differently, using a historical rather than budgetary lens: • to a large degree, the privilege of the dollar is due to the US being the largest economy, the major world power and a stable government. That makes it “too big to fail” • eventually, the US will not be the largest economy /​ major world power (looking at history you have to make a strong bet against permanent supremacy). 
At that point, it will have to come back to Earth and make much more constrained choices like all the other countries have to • this course of events is unlikely to happen because of some Fed decision about the timing of interest rates, or because some politician in the 2020s refused to cut back entitlements by a small percentage. It will happen because some other country starts to be seen as the safer option. Prolonged GDP decline, a major military defeat, and/or government overthrow could all help precipitate such an outcome. The reason this matters is that typical US debt hawks will advocate as a main solution reduced spending on long-term infrastructure, military technology and so on. However, if you’re concerned with debt, you should be more concerned about GDP (what you get out of the dollar) than government spending (what you put into it, so to speak); about military power than military leanness; and about government stability than government drama (including periodic debt ceiling freakouts). A strong government backed by a strong share of the world economy—that is what investors see in the US dollar. The minute they stop seeing that, the party is over and hard choices will need to be made. (Remember how the markets reacted to Liz Truss’s budget a few months ago? That is what a former empire making economic decisions looks like.) So if you keep GDP running, prevent China from overturning the world order, and avoid obvious own-goals such as January 6-style craziness, the US should be fine for a while longer. Now, is this US-favourable outcome the most beneficial to the world? I don’t know—it may be better to shift the equilibrium at some point (though on balance, I would tend to say, probably not now). But if it is the outcome you want, those would be the key items to safeguard. • Agree with your post! • [deleted] • Teachers are the core of any education system.
Their work in the classroom and the relationships they forge with their students impact the future of society. Supporting the education of young people, though, does not fall solely on teachers’ shoulders. The resources of a wider community can also make an impact. A school outreach program builds partnerships between educational institutions and sponsoring organizations to open up new pathways for growth and success. It paves a better road to the future for students, sponsors, and their communities. • Hi, thank you for your post, and I’m sorry to hear about your (and others’) bad experience in EA. However, I think if your experience in EA has mostly been in the Bay Area, you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of the type you mention I’ve heard about in EA have taken place in the Bay Area; I’m not sure why. I’ve mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these communities are smaller, or I might just not have heard of incidents which have taken place. Maybe it’s my perspective that’s unrepresentative. In any case, if you haven’t tried it yet, consider spending more time in other EA communities. • Your comment (at least how it reads, maybe different from your intentions) comes across as “that’s a particularly problematic location, just go to a different one”. That doesn’t solve the problem. That doesn’t hold the Bay* or any community accountable or push for change in a positive direction. I think that sort of logic is a common response to what Maya writes about and doesn’t help or make anything better. *And this is coming from an ex-Berkeley community builder • What is it about the Bay Area that makes these issues more prevalent or severe, if they are? Seems worth finding out if we want to push for change in a positive direction.
• My thesis here revolves around the overlap between tech and EA culture and how this shapes the demographics. We should expect higher rates of youth, whiteness, maleness, and willingness to move for high pay in the Bay Area because of the influx of people moving for tech jobs in the past 10 years. There could also be some kind of weird sexual competition exacerbated by scarcity. Here are some other unusual things about the Bay Area which may contribute to the “vibes” mentioned: • Founder effects: Bay Area EA organizations tend to be more focused on AI and therefore look to hire tech-types, growing the presence of people who fit this demographic (these orgs also could have been founded in the Bay because of these demographics; it’s unclear to me which came first) • Extremely high wealth inequality and the correlation of wealth with other things EAs select for (e.g. educational attainment) likely means EA in the Bay selects much harder for wealth than in other places • Racism has a profound influence on US society. In my experience, people who are unfamiliar with both the history and modern-day effects of race in America (or are from more homogenous countries) are worse at creating welcoming spaces and seem to underappreciate the value of creating diverse groups • There is a high prevalence and acceptance of hookup culture and casual sex • There’s high tolerance for non-traditional relationships by broader society • The US is one of the most individualistic cultures in the world according to cultural psychology measures Overall, the Bay Area is much unlike the rest of the world according to most demographic criteria, and it’s plausible that different outreach strategies are needed there in order to find driven and altruistic people with a diversity of ideas and approaches to doing good. • My guess is that it’s because the Bay Area has a lot of professional power entangled in it, such that power dynamics emerge much more easily in the Bay than elsewhere.
• I agree that would be an unhelpful takeaway from this post/these experiences. I have only been to the Bay Area once, and I felt a culture shock from the degree of materialism and individualism that I experienced in the community. On one occasion, I tried to call it out publicly and got rebuffed by a group. However, I do think it’s unfair that the Bay Area is presented as representative of the wider EA movement, in a way that, for example, EA Berlin wouldn’t be. • I haven’t really spent time with the community there, so I’m curious about the individualist & materialist point. Could you expand on that a bit more? • 2 Dec 2022 6:27 UTC I like your recommendations, and I wish that they were norms in EA. A couple of questions: (1) Two of your recommendations focus on asking EAs to do a better job of holding bad actors accountable. Succeeding at holding others accountable takes both emotional intelligence and courage. Some EAs might want to hold bad actors accountable, but fail to recognize bad behavior. Other EAs might want to hold bad actors accountable but freeze in the moment, whether due to stress, uncertainty about how to take action, or fear of consequences. There’s a military saying that goes something like: “Under pressure, you don’t rise to the occasion, you sink to the level of your training.” Would it increase the rate at which EAs hold each other accountable for bad behavior if EAs were “trained” in what bad behavior looks like in the EA community and in scripts or procedures for how to respond, or do you think that approach would not be a fit here? (2) How would you phrase your recommendations if they were specifically directed to EA leadership rather than to the community at large? • These are both very important questions—for (1), I think it depends on the circumstance in all honesty.
For example, the same way that volunteers are often trained before EAGs and EAGxs, I could see participants receiving something (as part of the behavior guidelines) outlining scenarios and describing why they were an example of inappropriate or appropriate behavior. However, I think it would be extremely difficult to “train” all members of the EA community as people are involved in many different capacities. For (2), I think that, despite all situations involving interpersonal harm and conflict being unique and complex, it could be useful to have more transparency in some areas. I don’t mean naming specific individuals and discussing all the details of each case; I mean something more like “X action is unacceptable and will result in Y consequence if found to be true”. Another note—my suggestions were aimed towards EA community members because I truly believe that, often, people simply do not understand how their actions/words make others feel. I hope that by raising awareness of this people will be motivated to change themselves without necessitating external conflict (although I understand that’s not always the case). • What is the license problem that you foresee, Elliot? What specifics concern you? I haven’t thought about it carefully, what perspective am I missing? • What is the license problem that you foresee, Elliot? What specifics concern you? I haven’t thought about it carefully, what perspective am I missing? The license gives anyone the right to e.g. put my posts in a book and sell them without my consent. It lets them do all kinds of stuff with my work. I think my work is valuable and I want to retain my IP and copyright rights to it. I think the prior, default system was good: fair use and quotations, plus asking for permission for other stuff.
BTW I’ve been plagiarized multiple times, I’ve had multiple people put my ideas in their published commercial books without consent or even notifying me (including some copyright violations), and in some cases mangling my ideas so badly that I wouldn’t want to be associated with their version, so simply giving credit doesn’t fix the problem for me. Talking about someone’s ideas and quoting and paraphrasing them fairly and reasonably takes some skill that many people lack. One person offered to credit me as a co-author of his book when I found out he’d put a ton of my ideas in it. I declined because I would not want authorship of his low-quality writing and reasoning, plus I was not involved with authoring the book at all. I don’t want him to plagiarize me, and I also don’t want him to incompetently summarize my ideas and then credit me, let alone say I endorse it… CC BY would make all this stuff worse, not better. But mostly I just want to retain my property rights for my ideas, work, research, writing, etc. I think giving most of my ideas and writing away as free to read is more than generous enough. My plan is to quit using the EA forum, though I’ll write a few things without important philosophy in them, like this one, rather than quitting abruptly. I will continue posting articles at https://criticalfallibilism.com and https://curi.us, plus I’m actively using my forum and two YouTube channels. I have ~30,000 words of EA-related draft articles which I’ll no longer be able to use as planned. I’ll probably try to quickly post a fair amount of that at curi.us with only light editing. BTW, when reviewing EA’s terms of use yesterday I found other problems, e.g. a prohibition on posting anything “untrue”. EDIT: I should also mention that I don’t want anyone translating my writing without consent because translations can easily be inaccurate and misleading, and essentially be like misquoting me.
Translations basically come with an implication that I endorse what they say because it’s allegedly just my own words. I’ve had an issue with this in the past too, and if I ever get more popular all this stuff will come up more, including with my archives. • Huh, very interesting, although it doesn’t seem that the license terms stopped all that from happening to you. BTW, it looks like I can’t indicate agreement or disagreement with your post? Is that a setting you have set? • Yes, the current default (US) copyright/IP system is far from perfect. I’m not aware of setting a setting, and both voting things are showing up for me on my own post just like on yours (including with a private browsing window). • 2 Dec 2022 5:59 UTC Tl;dr. Sounds like you’re criticizing some views/approaches, perhaps rightly so. Do you have an alternative approach you suggest in place of those that you criticize? • 2 Dec 2022 5:46 UTC Thank you for sharing such a brave, thoughtful and balanced post. • 2 Dec 2022 5:08 UTC Thank you for posting this. I was so sad to see the recent post you linked to removed by its author from the forum, and as depressing as the subject matter of your post is, it cheers me up that someone else is eloquently and forcefully speaking up. Your voice and experience are important to EA’s success, and I hope that you will keep talking and pushing for change. • 2 Dec 2022 4:54 UTC Hi hi :) Are you involved in the Magnify Mentoring community at all? I’ve been poorly for the last couple of weeks so I’m a bit behind, but I founded and run MM. Personally, I’d also love to chat :) Feel free to reach out anytime. Super Warmly, Kathryn • We didn’t run a draft of this post by DM or Anthropic (or OpenAI), so this information may be mistaken or out-of-date. My hope is that we’re completely wrong! Why not run a draft of the post by them?
Not sure what you had to lose there and seems like it could’ve been better (both from a politeness/​cooperativeness perspective and from a tactical perspective) to have done so. • If folks at DM/​Anthropic/​OpenAI ask us to run this kind of thing by them in advance, I assume we’ll be happy to do so; we’ve sent them many other drafts of things before, and I expect we’ll send them many more in the future. I do like the idea of MIRI staff regularly or semi-regularly sharing our thoughts about things without running them by a bunch of people—e.g., to encourage more of the conversation, pushback, etc. to happen in public, so information doesn’t end up all bottled up in a few brains on a private email thread. I think there are many cases where it’s actively better for EAs to screw up in public and be corrected in the comments, rather than working out all disagreements and info-asymmetries in private channels and then putting out an immaculate, smoothed-over final product. (Especially if the post is transparent about this, so we have more-polished and less-polished stuff and it’s pretty clear which is which.) Screwing up in public has real costs (relative to the original essay Just Being Correct about everything), but hiding all the cognitive work that goes into consensus-building and airing of disagreements has real costs too. This is not me coming out against running drafts by people in general; it’s great tech, and we should use it. I just think there are subtle advantages to “just say what’s on your mind and have a back-and-forth with people who disagree” that are worth keeping in view too. 
Part of it is a certain attitude that I want to encourage more in EA, that I’m not sure how to put into words, but is something like: tip-toeing less; blurting more; being bolder, and proactively doing things-that-seem-good-to-you-personally rather than waiting for elite permission/encouragement/management; trying less to look perfect, and more to do the epistemically cooperative thing “wear your exact strengths and weaknesses on your sleeve so others can model you well”; etc. All of that is compatible with running drafts by folks, but I think it can be valuable for more EAs to visibly be more relaxed (on the current margin) about stuff like draft-sharing, to contribute to a social environment where people feel chiller about making public mistakes, stating their current impressions and updating them in real time, etc. I don’t think we want maximum chillness, but I think we want EA’s best and brightest to be more chill on the current margin. • I don’t think this makes sense. Your group, in the EA community, regarding AI safety, gets taken seriously whatever you write. This is not the paradigmatic example of someone who feels worried about making public mistakes. A community that gives you even more leeway to do sloppy work is not one that encourages more people to share their independent thoughts about the problem. In fact, I think the reverse is true: when your criticisms carry a lot of weight even when they’re flawed, this has a stifling effect on people in more marginal positions who disagree with you. If you want to promote more open discussion, your time would be far better spent seeking out flawed but promising work by lesser known individuals and pointing out what you think is valuable in it. Am I correct in my belief that you are paid to do this work? If this is so, then I think the fact that you are both highly regarded and compensated for your time means your output should meet higher standards than a typical community post.
Contacting the relevant labs is a step that wouldn’t take you much time, can’t be done by the vast majority of readers, and has a decent chance of adding substantial value. I think you should have done it. • This approach to reasoning assumes authorities are valid. Do not trust organizations this way. It is one of effective altruism’s key failings. How can we increase pro-social distrust in effective altruism so that authorities are not trusted? • What sort of substantial value would you expect to be added? It sounds like we either have a different belief about the value-add, or a different belief about the costs. Maybe if you sketched 2-3 scenarios that strike you as a relatively likely way for this particular post to have benefited from private conversations, I’d know better what the shape of our disagreement is. If your objection is less “this particular post would benefit” and more “every post that discusses an AGI org should run a draft by that org (at least if you’re doing EA work full-time)”, then I’d respond that stuff like “EAs candidly arguing about things back and forth in the comments of a post”, the 80K Podcast, and unredacted EA chat logs are extremely valuable contributions to EA discourse, and I think we should do far, far more things like that on the current margin. Writing full blog posts that are likewise “real” and likewise “part of a genuine public dialogue” can be valuable in much the same way; and some candid thoughts are a better fit for this format than for other formats, since some candid thoughts are more complicated, etc. It’s also important that intellectual progress like “long unedited chat logs” gets distilled and turned into relatively short, polished, and stable summaries; and it’s also important that people feel free to talk in private. But having some big chunks of the intellectual process be out in public is excellent for a variety of reasons.
Indeed, I’d say that there’s more value overall in seeing EAs’ actual cognitive processes than in seeing EAs’ ultimate conclusions, when it comes to the domains that are most uncertain and disagreement-heavy (which include a lot of the most important domains for EAs to focus on today, in my view). This is not the paradigmatic example of someone who feels worried about making public mistakes. A community that gives you even more leeway to do sloppy work is not one that encourages more people to share their independent thoughts about the problem. I don’t think that sharing in-process snapshots of your views is “sloppy”, in the sense of representing worse epistemic standards than a not-in-process Finished Product. E.g., I wouldn’t say that a conversation on the 80K Podcast is more epistemically sloppy than a summary of people’s take-aways from the conversation. I think the opposite is often true, and people’s in-process conversations often reflect higher epistemic standards than their attempts to summarize and distill everything after-the-fact. In EA, being good at in-process, uncertain, changing, under-debate reasoning is more the thing I want to lead by example on. I think that hiding process is often setting a bad example for EAs, and making it harder for them to figure out what’s true. I agree that I’m not a paradigmatic example of the EAs who most need to hear this lesson; but I think non-established EAs heavily follow the example set by established EAs, so I want to set an example that’s closer to what I actually want to see more of. In fact, I think the reverse is true: when your criticisms carry a lot of weight even when they’re flawed, this has a stifling effect on people in more marginal positions who disagree with you. If my reasoning process is actually flawed, then I want other EAs to be aware of that, so they can have an accurate model of how much weight to put on my views.
If established EAs in general have such flawed reasoning processes (or such false beliefs) that rank-and-file EAs would be outraged and give up on the EA community if they knew this fact, then we should want to outrage rank-and-file EAs, in the hope that they’ll start something else that’s new and better. EA shouldn’t pretend to be better than it is; this causes way too many dysfunctions, even given that we’re unusually good in a lot of ways. (But possibly we agree about all that, and the crux here is just that you think sharing rougher or more uncertain thoughts is an epistemically bad practice, and I think it’s an epistemically good practice. So you see yourself as calling for higher standards, and I see you as calling for standards that are actually lower but happen to look more respectable.) If you want to promote more open discussion, your time would be far better spent seeking out flawed but promising work by lesser known individuals and pointing out what you think is valuable in it. That seems like a great idea to me too! I’d advocate for doing this along with the things I proposed above. Contacting the relevant labs is a step that wouldn’t take you much time, can’t be done by the vast majority of readers Is that actually true? Seems maybe true, but I also wouldn’t be surprised if >50% of regular EA Forum commenters can get substantive replies pretty regularly from knowledgeable DeepMind, OpenAI, and Anthropic staff, if they try sending a few emails. • What sort of substantial value would you expect to be added? It sounds like we either have a different belief about the value-add, or a different belief about the costs. I’d be very surprised if the actual amount of big-picture strategic thinking at either organisation was “very little”. I’d be less surprised if they didn’t have a consensus view about big-picture strategy, or a clearly written document spelling it out. If I’m right, I think the current content is misleading-ish. 
If I’m wrong and actually little thinking has been done, there’s some chance they say “we’re focused on identifying and tackling near-term problems”, which would be interesting to me given what I currently believe. If I’m wrong and something clear has been written, then making this visible (or pointing out its existence) would also be a useful update for me.

Polished vs sloppy

Here are some dimensions I think of as distinguishing sloppy from polished:

• Vague hunches <-> precise theories

• First impressions <-> thorough search for evidence/prior work

• Hard <-> easy to understand

• Vulgar <-> polite

• Unclear <-> clear account of robustness, pitfalls and so forth

All else equal, I don’t think the left side is epistemically superior. It can be faster, and that might be worth it, but there are obvious epistemic costs to relying on vague hunches, first impressions, failures of communication and overlooked pitfalls (politeness is perhaps neutral here). I think these costs are particularly high in, as you say, domains that are uncertain and disagreement-heavy. I think it is sloppy to stay too close to the left if you think the issue is important and you have time to address it properly. You have to manage your time, but I don’t think there are additional reasons to promote sloppy work.

You say that there are epistemic advantages to exposing thought processes, and you give the example of dialogues. I agree there are pedagogical advantages to exposing thought processes, but exposing thoughts clearly also requires polish, and I don’t think pedagogy is a high priority most of the time. I’d be way more excited to see more theory from MIRI than more dialogues.

If my reasoning process is actually flawed, then I want other EAs to be aware of that, so they can have an accurate model of how much weight to put on my views.

I don’t think it’s realistic to expect Lightcone forums to do serious reviews of difficult work.
That takes a lot of individual time and dedication; maybe you occasionally get lucky, but you should mostly expect not to.

I agree that I’m not a paradigmatic example of the EAs who most need to hear this lesson [of exposing the thought process]; but I think non-established EAs heavily follow the example set by established EAs, so I want to set an example that’s closer to what I actually want to see more of.

Maybe I’ll get into this more deeply one day, but I just don’t think sharing your thoughts freely is a particularly effective way to encourage other people to share theirs. I think you’ve been pretty successful at getting the “don’t worry about being polite to OpenAI” message across, less so the higher-level stuff.

• I agree with a lot of what you say! I still want to move EA in the direction of “people just say what’s on their mind on the EA Forum, without trying to dot every i and cross every t; and then others say what’s on their mind in response; and we have an actual back-and-forth that isn’t carefully choreographed or extremely polished, but is more like a real conversation between peers at an academic conference”.

(Another way to achieve many of the same goals is to encourage more EAs who disagree with each other to regularly talk to each other in private, where candor is easier. But this scales a lot more poorly, so it would be nice if some real conversation were happening in public.)

A lot of my micro-decisions in making posts like this are connected to my model of “what kind of culture and norms are likely to result in EA solving the alignment problem (or making a lot of progress)?”, since I think that’s the likeliest way that EA could make a big positive difference for the future.
In that context, I think building conversations about heavily polished, “final” (rather than in-process) cognition tends to be insufficient for fast and reliable intellectual progress:

• Highly polished content tends to obscure the real reasons and causes behind people’s views, in favor of reasons that are more legible, respectable, impressive, etc. (See Beware defensibility.)

• AGI alignment is a pre-paradigmatic proto-field where making good decisions will probably depend heavily on people having good technical intuitions, intuiting patterns before they know how to verbalize those patterns, and generally becoming adept at noticing what their gut says about a topic and putting their gut in contact with useful feedback loops so it can update and learn.

• In that context, I’m pretty worried about an EA where everyone is hyper-cautious about saying anything that sounds subjective, “feelings-ish”, hard-to-immediately-transmit-to-others, etc. That might work if EA’s path to improving the world is via donating more money to AMF or developing better vaccine tech, but it doesn’t fly if making (and fostering) conceptual progress on AI alignment is the path to impact.

• Ideally, it shouldn’t merely be the case that EA technically allows people to candidly blurt out their imperfect, in-process thoughts about things. Rather, EA as a whole should be organized around making this the expected and default culture (at least to the degree that EAs agree with me about AI being a top priority), and this should be reflected in a thousand small ways in how we structure our conversations. Normal EA Forum conversations should look more like casual exchanges between peers at an academic conference, and less like polished academic papers (because polished academic papers are too inefficient a vehicle for making early-stage conceptual progress).
• I think this is not only true for making direct AGI alignment progress, but is also true for converging on key macrostrategy questions (hard vs. soft takeoff; overall difficulty of alignment; probability of a sharp left turn; impressiveness of GPT-3; etc.). Insofar as we haven’t already converged a lot on these questions, I think a major bottleneck is that we’ve tried too much to make our reasoning sound academic-paper-ish before it’s really in that format, with the result that we confuse ourselves about our real cruxes, and people end up updating a lot less than they would in a normal back-and-forth.

• Highly polished, heavily privately reviewed and edited content tends to reflect the beliefs of larger groups, rather than the beliefs of a specific individual.

• This often results in deference cascades, double-counting of evidence, and herding: everyone is trying (to some degree) to bend their statements in the direction of what everyone else thinks. I think it also often creates “phantom updates” in EA, where there’s a common belief that X is widely believed, but the belief is wrong to some degree (at least until everyone updates their outside views because they think other EAs believe X).

• It also has various directly distortionary effects (e.g., a belief might seem straightforwardly true to all the individuals at an org, but doesn’t feel like “the kind of thing” an organization writ large should endorse).

In principle, it’s not impossible to push EA in those directions while also passing drafts around a lot more in private. But I hope it’s clearer why that doesn’t seem like the top priority to me (and why it could be at least somewhat counter-productive), given that I’m working with this picture of our situation.

I’m happy to heavily signal-boost replies from DM and Anthropic staff (including editing the OP), especially if it shows that MIRI was just flatly wrong about how much those orgs already have a plan.
And I endorse people docking MIRI points insofar as we predicted wrongly here; and I’d prefer the world where people knew our first-order impressions of where the field’s at in this case, and were able to dock us some points if we turn out to be wrong, as opposed to the world where everything happens in private.

(I think I still haven’t communicated fully why I disagree here, but hopefully the pieces I have been able to articulate are useful on their own.)

• I would be curious to hear the pushback from people who disagree-voted this!

• From a cooperativeness perspective, people probably should not unilaterally create for-profit AGI companies.

(Note: Anthropic is a for-profit company that raised $704M according to Crunchbase, and is looking for engineers who want to build “large scale ML systems”, but I wouldn’t call them an “AGI company”.)

• Well, I wouldn’t say that MIRI decided not to send drafts to DM etc. out of revenge, to punish them for making a strategic decision that seems extremely bad to me. What I’d say is that the norm ‘savvy people freely talk about mistakes they think AGI orgs are making, without a bunch of friction’ tends to save the world more often than the norm ‘savvy people are unusually cautious about criticizing AGI orgs’ does.

Indeed, I’d say this regardless of whether it was a good idea for someone to found the relevant AGI orgs in the first place. (I think it was a bad idea to create DM and to create OpenAI, but I don’t think it’s always a bad idea to make an AGI org, since that would be tantamount to saying that humanity should never build AGI.)

And we aren’t totally helpless to follow the more world-destroying norm just because we think other people expect us to follow it; we can notice the problem and act to try to fix it, rather than contributing to a norm that isn’t good. The pool of people who need to deliberately select the more-reasonable norm is not actually that large; it’s a smallish professional network, not a giant slice of society.

• I continue to object to a norm of running posts by the organizations that the posts are talking about. From many interviews with posters to LW and the EA Forum over the years, I know that the chilling effects would be massive; this norm has already prevented important things from being said on multiple occasions, because it doubled or tripled the cost of publishing things that talk about organizations.

• Yeah, I agree with this too. I don’t think MIRI staff are scared to poke DM about things, but I like taking opportunities to make it clear “it’s OK to talk about MIRI, DM, etc. without checking in with us privately first”, because I expect that a lot of people with good thoughts and questions will get stuck on scenarios like ‘intimidated by the idea of shooting MIRI an email’, ‘doesn’t know who to contact at MIRI’, ‘doesn’t want to deal with the hassle of an email back-and-forth’, etc.

I think it’s good to have ‘send drafts to the org in advance’ as an option that feels available to you. I just don’t want it to feel like a requirement.

(It also seems fine to me to send posts about MIRI to me after posting them. This makes it less likely that I just don’t notice the post exists, and gives me a chance to respond while the post is fresh and people are paying attention to it, while reducing the risk that good thoughts just never get posted.)

• Posted earlier here.

• I don’t feel personally affected by this change but this seems to matter a lot to other people, for example:

• If there is any chance you’ll publish your work in an academic journal, or otherwise will be publishing it somewhere else in the future… not all publishers will care about this license, but it might be prudent to be cautious because some definitely will.

• If you do not want podcasts or translations of your work to be made without your consent (e.g. maybe you’d want to work with any translators to ensure accuracy — you now won’t be able to control this).

• If you do not want your work potentially to be used for commercial purposes.

• If your work is done for contract, and not funded by a grant, you’ll need to review the contract terms and depending on the terms of the contract, you may need to get consent from the funder prior to posting.

• The CC BY 4.0 license is irrevocable, so even if you think there is a chance of your mind changing with regards to one of the above, you could be screwed.

I think people posting deserve to make an informed choice. I’m happy to see the new posting flow includes a checkbox, “Before you can publish this post you must agree to the terms of use including your content being available under a CC-BY license”—this satisfies me. But it’s still possible that people won’t fully understand the implications, so I encourage people to be thoughtful about this.

• There could be a little information summary next to the terms of use, more accessible than the terms themselves, that explains the implications, e.g. as you have here.

• I am imagining a hoverable [i] info button, rather than putting it in the terms, as people often don’t bother to open the terms at all, knowing they’ll be long and legalistic.

• 2 Dec 2022 3:00 UTC
2 points
0 ∶ 0

Great post and glad to see contrarian takes. (That’s true as a general matter but I also happen to agree with this one :P)

Couple quick thoughts:

1. Loyalty is important not just as a personal virtue but for efforts at collective action because it convinces people to engage in long term altruistic thinking. Hahrie Han has done some important empirical work showing that people are motivated to help when they have a sense of a shared past, shared future. Sudden ruptures in social relationships are very destructive for fostering that culture.

2. Good decision-making generally does not involve dramatic changes to our beliefs. This is one of the less-known aspects of Tetlock’s research on superforecasters. They very rarely make large updates, instead making small updates based on continuous data; they don’t overcorrect when the mob moves. It seems likely to me that this is an example where we can try to put Tetlock’s research to effect. I don’t see the sort of dramatic evidence I would need to change my mind about SBF (though I also probably did not view him as highly as many others; my prior was that Sam was a very smart guy who was well intentioned but going in the wrong direction in life.)

3. Good decision-making requires avoiding the Fundamental Attribution Error. I imagine most people are aware of the FAE on this forum. But I blogged about systemic forces that seem more important to me, in the collapse of FTX, than any of Sam’s personal misdeeds.

• 2 Dec 2022 2:59 UTC
1 point
0 ∶ 0

Heavily cosigned (as someone who has worked with some of Nick’s friends whom he got into EA, not as someone who’s done a particularly great job of this myself). I encourage readers of this post to think of EA-sympathetic and/​or very talented friends of theirs and find a time to chat about how they could get involved!

• Thanks Spencer, really appreciated the variety of guests, this was a great podcast.

• This article makes one specific point I want to push back on:

It’s not that E.A. institutions were necessarily more irresponsible, or more neglectful, than others in their position would have been; the venture capitalists who worked with Bankman-Fried erred in the same direction. But that’s the point: E.A. leaders behaved more or less normally. Unfortunately, their self-image was one of exceptionalism.

Anyone who has ever interacted with EA knows that this is not true. EAs are constantly, even excessively, criticizing the movement and trying to figure out what big flaws could exist in it. It’s true, a bit strange, and a bad sign that these exercises did not for the most part highlight this flaw of reliance on donors who may use EA to justify unethical acts—unless you count the reforms proposed by Carla Zoe Cremer, cited in the article, which were never adopted. Yes, there is some blame that could be assigned for never adopting these reforms, but the sheer quantity of EA criticism and other potential fault points suggests, IMO, that it’s really hard to figure out a priori what will fail a movement. EA is not perfect, nobody has ever claimed as much, and to some extent I think this article is disingenuous for implying this. “People focused on doing the most good” =/​= “moral saints who think they are above every human flaw”.

• Some reasons I disagree:

I think internal criticism in EA is motivated by aiming for perfection, and is not motivated by aiming to be as good as other movements /​ ideologies. I think internal criticism with this motivation is entirely compatible with a self-image of exceptionalism.

While I think many EAs view the movement as exceptional and I agree with them, I think too many EAs assume individual EAs will be exceptional too, which I think is an unjustified expectation. In particular, I think EAs assume that individual EAs will be exceptionally good at being virtuous and following good social rules, which is a bad assumption.

I think EA also relies too heavily on personal networks, and especially given the adjacency to the rationalist community, EA is bad at mitigating the cognitive biases this can cause in grantmaking. I expect that people overestimate how good their friends are at being virtuous and following good social rules, and given that so many EAs are friends with each other at a personal level, this exacerbates the exceptionalism problem.

• I couldn’t attend the interpretability hackathon and was hoping to get acquainted with LLM interpretability research as a software dev with no experience in interpretability or transformers. So here’s a starting point following in the footsteps of this submission (see their writeup here):

Basically I am thinking we can use the hackathon as a collaborative study session to become more familiar with transformers and interpretability, ultimately culminating in replicating the results in the linked submission (it took them 3 days but since we have a starting point, possibly we can replicate their project and grok what they did much quicker).

Not wedded to this idea, though. If you think there is a better avenue to using the hackathon to upskill in LLM interpretability and transformers, do share.

• Nice — this seems ambitious, I really like this idea.

Maybe you can start a study group in GatherTown to continue this virtually as well. I’m sure you’d get takers from other folks interested in ML research.

• 2 Dec 2022 0:44 UTC
2 points
1 ∶ 0

Some skills are dual purpose. Writing clearly helps both with truth seeking and being influential.

This isn’t obvious to me—how does being a good writer make you better at truth seeking?

• https://​​forum.effectivealtruism.org/​​termsOfUse

You also irrevocably waive any “moral rights” or other rights with respect to attribution of authorship or integrity of materials for Your Content. When we make public use of Your Content, we will, where practical, use good faith efforts to credit you as the original author of Your Content.

Wait your terms say you are allowed to delete my name from my posts and use my content without attribution. Why? And is this new or old? Where can I read change history of the terms or something like that? The good faith thing seems basically nonbinding/​meaningless legally, and also doesn’t apply whenever you decide it’s impractical or you’re doing something privately.

Why do you want people to waive moral rights? I saw the same thing in Less Wrong’s terms today but did not find any explanation of the upside.

• Hi Elliot, your quote cuts off the “Subject to section 2.2” qualifier, which is the section that discusses the Creative Commons license.

We’ve tried to give a simple summary of the license in this post, but I might suggest talking to a lawyer if you have questions about legal terms; for example “good faith” is not meaningless legally, it is a well defined term of art.

• You want me to talk to a lawyer to know how quotes and your new CC BY stuff works? (E.g. if I quote from my own article while link posting it, possibly the entire thing, does that keep anything quoted out of CC BY?) You aren’t willing to clarify that yourselves and just want individuals to go pay lawyers to find out how your forum works? That seems very unreasonable.

Also

Subject to Section 2.2, [a bunch of stuff]. You also irrevocably waive any “moral rights”

That qualifier doesn’t appear to apply to the moral right waiving.

• But now the ship of effective altruism is in difficult straits, and he [Will MacAskill], like Jonah, has been thrown overboard.

What a strange line—did I miss some event where people expelled Will from EA to make ourselves look better? Seems like this would make more sense for SBF (but context rules that interpretation out).

(for reference, the book of Jonah is pretty short and you can read it here)

• Don’t read too much into this piece of journalistic flair. It’s common practice to end pieces like this (in a paragraph or two known as a “kicker”) with something that sounds poignant on first glance to the reader, even if it would fall apart on closer scrutiny, like this does.

• Were I being charitable to Lewis-Kraus I might say that he moves on to talk about the belly of the fish so in the story EA is God rather than the folks on the boat. Ie that Will is currently in the fish and that denotes uncertainty about the future.

• Note that the ship is already effective altruism so in this reading the ship is also God (which is actually an interesting twist on the story).

• At the end of that story, the sinners of that city donned sackcloth and ashes

FWIW my read of the text is that the king of Nineveh is the only one who is said to sit in ashes. This wasn’t that hard to check!

• Actually sackcloth and ashes go together so maybe we’re supposed to assume that the Ninevites did the ashes as well? I maybe retract this remark.

• It’s open to interpretation, but I don’t think “thrown overboard” is there to suggest much about the EA community, though I’m sure some wish there were a way to distance EA from someone who was so deeply entangled with SBF.

Whatever the case, I think the reference primarily serves to set up the following:

While MacAskill lies in the belly of the big fish, the fate of effective altruism hangs in the balance. Jonah, accepting the burden of duty, eventually went to Nineveh and told the truth about transgression and punishments. At the end of that story, the sinners of that city donned sackcloth and ashes, and found themselves spared.

Will’s responses in the piece fall far short of “accepting the burden of duty.” First, on his propagating the myth of SBF’s frugality:

When asked about the discrepancies in Bankman-Fried’s narrative, MacAskill responded, “The impression I gave of Sam in interviews was my honest impression: that he did drive a Corolla, he did have nine roommates, and—given his wealth—he did not live particularly extravagantly.”

and this, in reference to the Slack message:

“Let me be clear on this: if there was a fraud, I had no clue about it. With respect to specific Slack messages, I don’t recall seeing the warnings you described.

Perhaps Will doesn’t deserve much blame (that’s certainly a theme running through his comments so far). But if he isn’t able to tell the truth about what happened, or isn’t equipped to grapple with it, it’s bad news for the movements and organizations he’s associated with.

• I am purely quibbling with whether the Biblical allusion fits.

• It doesn’t.

• Like, for one, Nineveh is super alien to Jonah, and he hates the fact that they actually repent, which seems like a bad analogy for Will speaking truth to EAs in order to get us to do better. Also, Nineveh’s sins don’t seem like they have much to do with Jonah’s (although Jonah certainly doesn’t seem to have a properly reverent attitude). So the paragraph just doesn’t really make all that much sense.

• Edit (3/​12/​2022): On further reflection, I think these accusations are quite shocking, and likely represent (at best) significant incompetence.

[Low quality- midnight thoughts]

Not sure to what extent to update here: someone unnamed said something about SBF in some Slack thread.

It would be good to have some transparency on this issue—perhaps by those who have access to said slack workspace—to know how many people read it, how they reacted, and why they reacted that way.

Although this is unlikely at the moment because of ongoing legal proceedings.

• Does the EA forum have a terms of use document, or something similar, which gives details of the new (and old) rules? I couldn’t find it with a quick search. EDIT: https://​​forum.effectivealtruism.org/​​termsOfUse

How do quotes interact with CC BY?

• https://​​www.effectivealtruism.org/​​terms-and-conditions

If you have an Account (defined below) with us, we will try to give you reasonable notice of major changes through your Account or the contact information associated with your Account.

So EA violated this term – they could have emailed or DMed me but didn’t, and also gave no notice when I commented.

Also, the terms don’t specify that the CC BY stuff doesn’t apply to posts from before Dec 1, 2022. They should. Someone reading the terms might think that all the old stuff is CC BY and act accordingly. The terms should also include the relevant text from the older terms so people know what terms still, today, govern all the older posts. Deleting the terms that still actively govern older posts today from the terms of use doesn’t make sense.

• I strongly recommend Small and Vulnerable linked in the post, but my motivation was much more mundane. I don’t spend much money and my income will jump discontinuously after grad school so it doesn’t hurt me to give.

• As a teenager, I came up with a set of four rules that I resolved ought to be guiding and unbreakable in going through life. They were, somewhat dizzyingly in hindsight, the product of a deeply sad personal event, an interest in Norse mythology and Captain America: Civil War. Many years later, I can’t remember what Rules 3 and 4 were; the Rules were officially removed from my ethical code at age 21, and by that point I’d stopped being so ragingly deontological anyway. I recall clearly the first two.

Rule 1 - Do not give in to suffering. Rule 2 - Ease the suffering of others where possible.

The first Rule was readily applicable to daily life. As for the second, it seemed noble and mightily important, but rarely worth enacting. In middle-class, rural England with no family drama and generally contented friends, there wasn’t much suffering around me. Moving out to University, one of my flatmates was close friends with the man who set up the EA group there, and on learning more about it I was struck by the opportunity for fulfilling my Rules that GiveWell and 80k represented.

This story does not account for my day-to-day motivation to uphold a Giving What We Can pledge or fumble through longtermist career planning. I’ve been persuaded by the flavour of consequentialism used here, think that improving the experience of sentient life is wonderful and, quite frankly, don’t have any other strong compulsions for career aims to offer competition. Generally buying-in to the values and aims of this community is my day-to-day motivation. Nevertheless, on taking a step back and thinking about my life and what I wish to do with it, I still feel about the abstract concept of suffering the way Bucky Barnes feels about Iron Man at the end of that film. The Rules don’t matter to me anymore, but their origin grants my EA values the emotional authority to set out a mission statement for what I should be doing.

This article might be good and satisfying to many people because it gives a plausible sense of what happened in EA related to SBF, and what EA leaders might have known. The article goes beyond the “press releases” we have seen, does not come from an EA source, and is somewhat authoritative.

Rob Wiblin appears quite a few times and is quoted. In my opinion, he is right and most EAs and regular people would agree with him. New Yorker articles include details to suggest a sense of intimacy and understanding, but the associated narrative is not always substantive or true. This style is what Wiblin is reacting to.

Lewis-Kraus makes some characterizations that don’t seem that insightful. As in his last piece, he maintains an odd sense of surprise that a large movement with billions of dollars, and a history of dealing with bad actors, has an “inner circle”.

Lewis-Kraus has had great access to senior EAs and inside documents. After weeks of work, there is not much that he shows he has uncovered that wouldn’t be available after a few conversations, or that isn’t already publicly available on the EA Forum.

• My intuitions differ some here. I don’t know about Will MacAskill’s notion of moral pluralism. But my notion of moral pluralism involves assigning some weight to views even if they’re informed by less data or less reflection, and also doing some upweighting on outsider views simply because they’re likely to be different (similar to the idea of extremization in the context of forecast aggregation).
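For readers unfamiliar with extremization in forecast aggregation, here is a minimal illustrative sketch. The pooling method (averaging log-odds) and the extremization factor `d` are arbitrary choices for illustration, not anything prescribed by the comment above:

```python
import math

def extremize(probs, d=1.5):
    """Aggregate probability forecasts by averaging log-odds, then
    push the result away from 0.5 by an extremization factor d."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_lo = sum(log_odds) / len(log_odds)
    extremized = d * mean_lo  # d > 1 moves the aggregate toward 0 or 1
    return 1 / (1 + math.exp(-extremized))

# Three forecasters lean the same way; extremizing sharpens the aggregate.
print(round(extremize([0.7, 0.6, 0.65]), 3))  # → 0.718 (vs. unextremized 0.651)
```

The intuition is that independent forecasters each see only part of the evidence, so their pooled forecast tends to be underconfident; pushing the aggregate outward partially corrects for this.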

If a regular person thinks “our great virtue is being right” sounds like hubris, that’s evidence of actual hubris. You don’t just replace “our great virtue is being right” with “our focus is being right” because it sounds better. You make that replacement because the second statement harmonizes with a wider variety of moral and epistemic views.

PR is more corrosive than reputation because “reputation” allows for the possibility of observers outside your group who can form justified opinions about your character and give you useful critical feedback on your thinking and behavior.

(One of the FTX Future Fund researchers piped up to make a countervailing point, referring, presumably, to donations that Thiel made to the campaigns of J. D. Vance and Blake Masters: “Might be a useful ally at some point given he is trying to buy a couple Senate seats.”)

There’s a sense in which reputational harm from a vignette like this is justified. People who read it can reasonably guess that the speaker has few instinctive misgivings to ally with a “semi-fascist” who’s buying political power and violating widely held “common sense” American morality.

One would certainly hope that deontological considerations (beyond just PR) would come up at some point, were EA considering an alliance with Thiel. But it concerns me that Lewis-Kraus quotes so much “PR” discussion, and so little discussion of deontological safeguards. I don’t see anything here that reassures me ethical injunctions would actually come up.

And instinctive misgivings actually matter, because it’s best to nip your own immoral behavior in the bud. You don’t want to be in a situation where each individual decision seems fine and you don’t realize how big their sum was until the end, as SBF put it in this interview (paraphrased). That’s where Lewis-Kraus’ references to Schelling fences and “momentum” come in.

The best time to get a “hey this could be immoral” mental alert is as soon as you have the idea of doing the thing. Maybe you do the thing despite the alert. I’m in favor of redirecting the trolley despite the “you will be responsible for the death of a person” alert. But an alert is generally a valuable opportunity to reflect.

Finally, some meta notes:

• I doubt I’m the only person thinking along these lines. Matt Yglesias also seems concerned, for example. The paragraphs above are an attempt to steelman what seems to be a common reaction on e.g. Twitter in a way that senior EAs will understand.

• The above paragraphs, to a large degree, reflect updates around my thinking about EA which have occurred over the past years and especially the past weeks. My thinking used to be a lot closer to yours and the thinking of the quoted Slack participants.

• I’ve noticed it’s easy for me to get into a mode of wondering if I’m a good person and trying to defend my past actions. Generally speaking it has felt more useful to reflect on how I can improve. Growth mindset over fixed mindset, essentially. I think it is a major credit to EAs that they work so hard to do good. Lack of interest in identifying and solving the world’s biggest problems strikes me as a major problem with common-sense morality. So I don’t think of EAs in the Slack channel as bad people. I think of them as people working hard to do good, but the notion of “good” they were optimizing for was a bit off. (I used to be optimizing for that thing myself!)

• I’ve also noticed that when experiencing an identity threat (e.g. sunk cost fallacy), it’s useful for me to write a specific alternative plan, without committing to it during the process of writing it. This could look like: Make a big list of things I could’ve done differently… then circle the ones I think I should’ve done, in hindsight, with the benefit of reflection. Or, if I’m feeling doubtful about current plans, avoid letting those doubts consume me and instead outline one or more specific alternative plans to consider.

• I’m unsure what part of my comment you are replying to. I’m happy to own up to valuing “being right over optics/​politics”. I’m OK if you became aggressive or even hostile, through making good inferences about me.

However, many of the things you said are confusing to me. I don’t know how the blog post on “PR”/​”reputation” is relevant. Also, I agree with Matt Yglesias (I’m in touch with him!).

Importantly, I think it would be good for you to be aware of how your writing in your comment might present to some people.

Your comment begins with “my intuitions differ some here…” which implies I share the views you are opposing in your comment. This seems confirmed throughout, e.g. “My thinking used to be a lot closer to yours and the thinking of the quoted Slack participants”.

If I tried to reply, I think I would have been obligated to refute or deal with the associations you imply for me, which include “ally with a semi-fascist”, “violating American morality”, “little discussion of deontological safeguards”, and “nip [my] own immoral behavior in the bud”.

I don’t think the following idea is in the article, much less my comment: “You don’t just replace “our great virtue is being right” with “our focus is being right” because it sounds better.”

I don’t think you intended this, but I think some people would find this strange and somewhat offensive.

I actually found your reply interesting and filled with content. I think you have interesting opinions to share.

• Gideon Lewis-Kraus has had great access to senior EAs and inside documents. After weeks of work, there is not much he shows he has uncovered that isn’t available after a few conversations, or even just publicly on the EA Forum.

This is true, but it’s about as good as can be expected since it’s an online New Yorker piece. Their online pieces are much closer to blog posts. The MacAskill profile that ran in the magazine was the result of months of reporting, writing, editing, and fact-checking, all with expenses for things like travel.

• SoGive ran a grants programme earlier this year, and we plan to publish an update explaining how it went (should be published in the next few days).

We would be happy to:

• have a chat with funders who want to run their own grants process

• incorporate funds into our next grants round (which likely won’t happen until next summer, assuming it goes ahead)

• Do you have a sense of whether the case is any stronger for specifically using cortical and pallial neurons? That’s the approach Romain Espinosa takes in this paper, which is among the best work in economics on animal welfare.

• Nice post, I mostly agree.

The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance. Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than was committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).

It’s worth noting that the specific different moral belief used in the study was that the “perpetrator holds the belief that striking a weak person to toughen him up is praiseworthy”, which seems quite different from e.g. a utilitarianism/​deontology divide. Like, that view may just seem completely implausible to most people, and therefore not at all extenuating. Other moral views may be more plausible and so you’d be judged less harshly for acting according to them. I’m speculating here, of course.

• Thanks for highlighting that. :)

I agree that this is relevant and I probably should have included it in the post (I’ve now made an edit). It was part of the reason that I wrote “it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs”. But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. But I guess subjects could also understand “intentional harm” in a variety of ways. In any case, I think it’s important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.

• Yes, this strikes me as an important point. It’s a bit like how ideologically-motivated hate crimes are (I think correctly) regarded as worse than comparable “intentional” (but non-ideologically-motivated) violence, perhaps in part because it raises the risks of systematic harms.

Many moral differences are innocuous, but some really aren’t. For an extreme example: the “true believer” Nazi is in some ways worse than the cowardly citizen who goes along with the regime out of fear and self-interest. But that’s very different from everyday “value disagreements” which tend to involve values that we recognize as (at least to some extent) worthy of respect, even if we judge them ultimately mistaken.

• I edited this in. Thank you!

• Thanks for posting this Will!

• I’d be interested in hearing from downvoters here; this seems to me like a fairly anodyne, beneficial post, so while I wasn’t expecting a bunch of strong upvotes I also wasn’t expecting downvotes. I’m curious what the disagreement is.

• https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics

What happens if in the future we discover that all life on Earth (especially plants) is sentient, but at the same time a) there are a lot more humans on the planet waiting to be fed and b) synthetic food/proteins are deemed dangerous to human health?

Do we go back to eating plants and animals again? Do we farm them? Do we continue pursuing technologies for food given the past failures?

• [ ]
[deleted]
• It depends why you have those sympathies. If you think they just formed because you find them aesthetically pleasing, then sure. If you think there’s some underlying logic to them (which I do, and I would venture a decent fraction of utilitarians do) then why wouldn’t you expect intelligent aliens to uncover the same logic?

• [ ]
[deleted]
• This seems like a strange viewpoint. If value is something about which one can make ‘truthy’ and ‘falsey’ claims—or something that we converge to given enough time and intelligent thought, if you prefer—then it’s akin to maths and aliens would be a priori as likely to ‘discover’ it as we are. If it’s arbitrary, then longtermism has no philosophical justification beyond the contingent caprices of people who like to imagine a universe filled with life.

Also if it’s arbitrary, then over billions of years even human-only descendants would be very unlikely to stick with anything resembling our current values.

• I think the microhumor section of SSC’s nonfiction writing advice is a good example of this. Scott Alexander is very easy for me to read despite covering pretty complex topics, and he does a very good job of making his writing both easy and enjoyable to read. I’ve started peppering things like this into my communications with non-technical people at my job, and people really enjoy it.

• Love this example, thanks so much for sharing it. I know I mention John Oliver above, but realistically his style is almost certainly too extreme to be replicated in most cases by most people. I agree that Alexander’s microhumor is a perfect example of subtle humor that could potentially be employed in almost any context.

• Great podcast, with not just one but five highly informative experts covering different aspects of the crisis. The intro and timeline was a very clear overview.

Ozzie Gooen was especially helpful on the EA background and implications.

Highly recommend for all EAs.

• Thanks for the heads up! I suggest adding the following to the forum sidebar:

• A link to the TOU

• A statement like “Content published after December 1, 2022 is available under a CC BY 4.0 license” (unless there is already a license banner on each page).

• Thanks Eevee! The license information is included in “How to use the Forum” which is linked to in the sidebar, but yeah possibly we should consider a more prominent link.

• I appreciate the suggestions! I agree we should make this info easier to find—added these to our list for triage.

• The public debt to GDP ratio in Japan is over 260% and they still haven’t defaulted (it somewhat boggles my mind that they can sustain such high debt levels even though it seems that there are reasonable explanations for it). There are ostensibly some differences between the Japanese and American contexts but nonetheless it seems possible for developed countries to sustain high levels of public debt for a considerable amount of time. I think how long of a time is still an open question. I’d expect to see the Japanese situation unravel before the American one and maybe that might give an indication of how sustainable extreme high levels of debt are if there is ever such an unraveling.

• As the government debt reaches maturity, it needs to roll over and be repriced at current interest rates of 4% (but let’s take an average of 3%). Think of this as the fixed-term interest rate on your mortgage running out, so you need to renegotiate a new rate with the bank. If we reprice the current $31.3T of debt at 3% interest rates, the interest expense would increase by ~$500 billion per year to almost $1 trillion per year. That would overtake military spending as the biggest single line item of spending.

• Not all the debt matures at the same time, does it? If not, then maybe only the portion that matures gets repriced?

• Correct, that’s why it says it needs to roll over and be repriced at current interest rates of 4% (but let’s take an average of 3%).

• Add agreement karma to posts. This comment suggesting this feature got 32 agreement karma with 9 votes.

Perhaps it’s not clear whether adding agreement karma to posts is positive on net, but I think it would be worth adding for a month as an experiment. A counter-consideration is that many voters on the Forum may still not understand the difference between overall karma and agreement karma.

Inconclusive weak evidence: this answer got 3 overall karma with 22 votes (at some point it was negative) and 18 agreement karma with 20 votes. (It’s inconclusive evidence because, while the regular karma downvotes surprised me, people could have had legitimate reasons for not liking the meta-answer and downvoting it. My suspicion, though, is that at least some people downvoted it in an attempt to “disagree” vote in the poll.)

• Cool work! Props that you allow people to use their own discount rate, as your first footnote makes a good point. I think that, for transparency and ease of reader understanding, you ought to link 80K’s article on grantmaking for the most pressing problems. Similarly, linking info on Squiggle would be good.
Also, best to clarify that this review is only about “working at a government agency that funds relevant research”, and that this is only one of the three highly effective grantmaking-related careers they mention (at the bottom); the other two would need a different analysis.

I also think that if the intent is to advise on careers, you need to do some analysis of the team at ARPA-E. Variables that come to mind: size of team, how many of each role, how senior each person seems (for thoughts on how soon a person could get hired into such a role and, in the case of junior employees, where they could go from there career-capital-wise), and a rough guesstimate of how much each role contributes to the overall annual grantmaking decisions of ARPA-E.

Also, a minor wording note. You say:

”We chose ARPA-E because other ARPA agencies are explicitly called out by 80,000 Hours profile of grantmaking (i.e., DARPA, IARPA)”

“Called out” has negative connotations, so I’d probably say “mentioned”, “referred to”, or “brought up” instead. That terminology confused me; I thought you were saying you only chose ARPA-E because the others had essentially been ruled out. I was a bit aghast thinking you’d chosen an example in a category where others had already been shown to be moot, which is why I dug up and read the original 80K piece. Phew, sorry that was so much seemingly critical feedback, but to clarify: I (not a researcher or data scientist) think what you did do is good, and I’m happy you reviewed this career path, which tends, I think, to be unfortunately skipped in many career discussions. I strong-upvoted the post.

• On AI quietism. Distinguish four things:

1. Not believing in AGI takeover.

2. Not believing that AGI takeover is near. (Ng)

3. Believing in AGI takeover, but thinking it’ll be fine for humans. (Schmidhuber)

4. Believing that AGI will extinguish humanity, but that this is fine, either (a) because the new thing is superior (maybe by definition, if it outcompetes us), or (b) because scientific discovery is the main thing.

(4) is not a rational lack of concern about an uncertain or far-off risk: it’s a lack of caring, conditional on the risk being real. Can there really be anyone in category (4)?

• Sutton: we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared… ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.

• Hinton: “the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”

I expect this cope to become more common over the next few years.

• (4) was definitely the story with Ben Goertzel and his “Cosmism”. I expect some “a/acc” libertarian types will also go for it. But it is and will stay pretty fringe imo.

• “2. Secondly, we assume that all benefits—including health benefits, reduction of monetary costs associated with climate change, reduction of existential risk associated with climate change, and spillover benefits into other industries—are incorporated into the market valuation.
We believe that markets likely price in the aforementioned externalities (e.g., health and enviro benefits) (support for this claim here). Furthermore, the valuations listed on ARPA-E’s site are from after the Inflation Reduction Act passed, which, itself, internalized a significant chunk of US emissions. For these reasons, we believe it’s reasonable to assume a significant amount of external benefits have been internalized into markets, but, perhaps, not all benefits. Thus, we believe that, all else equal, this assumption leads to an underestimation of benefits.”

Thanks for this article! The above assumption feels quite wrong to me and, as such, I expect it makes your estimate a vast underestimate, everything else being equal. You seem to assume that climate risk and other externalities are priced into market valuations. Even if that were true for the US (which seems unlikely), it would certainly not be true for most places in the world, which have very little in terms of pricing energy-related externalities. Given the role of the US in the global energy innovation system, it seems reasonable to assume that the benefits ARPA-E creates are far larger than market valuations suggest, at least if market valuations are, as you suggest, reflective of expected policy returns.

• [ ]
[deleted]

• Going to “New Post” now, it looks like you have to explicitly consent:

EDIT: This seems to have disappeared for me, and generally be inconsistent/buggy right now (see also Elliot’s comment below). I think fixing this is pretty important.

• Do you have the link to the terms of use? I have been unable to find it so far.

EDIT: https://forum.effectivealtruism.org/termsOfUse

• That said, this is not the case for comments currently.

• I want to make an analogy to personality types. Lots of humans believe there is one single personality type.
“Everyone thinks and reacts more or less like me.” Given this starting point, upgrading to thinking there are 4 or 16 or however many types of people is a great update. Lists of different conflict-resolution styles, different love languages, etc., are helpful in the same way. However, the same system can become harmful if, after a person learns about it, they get stuck, refuse to move on to even more nuanced understandings, and insist that the dimensions covered by the system they learned are the only ones that exist.

Overall, I think Scott Aaronson’s post is good. I expect outsiders who read it will update from thinking there is 1 AIS camp to thinking there are 2 AIS camps, which is an update in the right direction. I expect insiders who read it to notice “hey, I agree with one side on some points and the other side on some points” and correctly conclude that the picture of two camps is an oversimplification.

• Please do reach out if you have any questions! I’ve already had at least one email about this—we’re very happy to answer inquiries.

• [ ]
[deleted]

• You appear to be in violation of the game rules because you haven’t debated or opted out of debating.

• Introduction

I’m no fan of university nor academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It’s not just status signalling. There’s also e.g. social networking.) I’m assuming you can electronically search the book to read additional context for quotes if you want to.

Error One

For a single individual, education pays.

You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.
So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university. This works better in some industries, like software, than in others. Caplan made a universal claim, so there’s no need to debate how many industries this is viable in.

Another option is starting a company. That’s a lot of work, but it can still easily be a better option than going to university just so you can get hired.

Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don’t. If lots of people stop going to university, there’s a big problem. But if you individually don’t go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonald’s doesn’t hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we’re only considering good jobs in certain industries, so the 1% non-signalling-jobs model becomes more realistic.)

When they calculate the selfish (or “private”) return to education, they focus on one benefit—the education premium—and two costs—tuition and foregone earnings.[4]

I’ve been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn’t. Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to the alternatives for how much benefit each gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense, and works out OK, if you’re only considering two options: university, or getting a job earlier.
When there are only two options, taking a benefit from one and instead subtracting it from the other as an opportunity cost doesn’t change the mathematical result.

See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises), which criticizes opportunity costs:

Contemporary economics, in contrast, continually ignores the vital connection of income and cost with the receipt and outlay of money. It does so insofar as it propounds the doctrines of “imputed income” and “opportunity cost.”[26] The doctrine of imputed income openly and systematically avows that the absence of a cost constitutes income. The doctrine of opportunity cost, on the other hand, holds that the absence of an income constitutes a cost. Contemporary economics thus deals in nonexistent incomes and costs, which it treats as though they existed. Its formula is that money not spent is money earned, and that money not earned is money spent.

That’s from the section “Critique of the Concept of Imputed Income”, which is followed by the section “Critique of the Opportunity-Cost Doctrine”. The book explains its point in more detail than this quote. I highly recommend Reisman’s whole book to anyone who cares about economics.

Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher-effort job search or starting a business. I didn’t find it, but I haven’t read most of the book, so I could have missed it. I primarily looked in chapter 5.

Error Two

The answer would tilt, naturally, if you had to sing Mary Poppins on a full-price Disney cruise. Unless you already planned to take this vacation, you presumably value the cruise less than the fare. Say you value the $2,000 cruise at only $800. Now, to capture the 0.1% premium, you have to fork over three hours of your time plus the $1,200 difference between the cost of the cruise and the value of the vacation.

The full cost of the cruise is not just the fare. It’s also the time cost of going on the cruise. It’s very easy to value the cruise experience at more than the ticket price, but still not go, because you’d rather vacation somewhere else or stay home and write your book.

BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).
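To make the time-cost point concrete, here is a quick sketch. All the figures are hypothetical, chosen only for illustration; none come from the book.

```python
# Toy numbers (hypothetical): the cruise example with time costs included.
fare = 2000              # ticket price in dollars
experience_value = 2500  # suppose you value the cruise experience ABOVE the fare
hours = 72               # three days aboard
value_of_time = 20       # dollars/hour from your best alternative use of time

# Net value once the time cost is counted, not just the fare.
net = experience_value - fare - hours * value_of_time
print(net)  # -940: the cruise is worth more than its ticket, yet going is still a net loss
```

So “values the experience at more than the ticket price” and “still shouldn’t go” are perfectly compatible once time is priced in.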

Error Three

Laymen cringe when economists use a single metric—rate of return—to evaluate bonds, home insulation, and college. Hasn’t anyone ever told them money isn’t everything! The superficial response: Economists are by no means the only folks who picture education as an investment. Look at students. The Higher Education Research Institute has questioned college freshmen about their goals since the 1970s. The vast majority is openly careerist and materialist. In 2012, almost 90% called “being able to get a better job” a “very important” or “essential” reason to go to college. Being “very well-off financially” (over 80%) and “making more money” (about 75%) are almost as popular. Less than half say the same about “developing a meaningful philosophy of life.”[2] These results are especially striking because humans exaggerate their idealism and downplay their selfishness.[3] Students probably prize worldly success even more than they admit.

First, minor point, some economists have that kind of perspective about rate of return. Not all of them.

And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn’t all that matters. Money is nice but it doesn’t really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don’t focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably “prize worldly success even more than they admit”. I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don’t get a bunch of money, which is a risk).

But here’s the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn’t show that students agree with Caplan.

The issue, highlighted in the first sentence, is “economists use a single metric—rate of return”. Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that, so I don’t even have to look up the study. He says ‘Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”’ Let’s assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren’t about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan about only the single metric being important (the answers aren’t independent so you can’t just use math to estimate this scenario btw).
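For a rough sense of scale, the scenario in that parenthetical can be computed under the independence assumption the text itself flags as unrealistic (all numbers hypothetical):

```python
# Hypothetical setup from the text: 40 non-money survey items, each endorsed
# by 10% of students. If endorsements were independent (they aren't, as noted),
# the share of students endorsing NONE of the 40 items -- i.e. caring about
# the money metric alone, as Caplan's single-metric view would require -- is:
share_money_only = 0.9 ** 40
print(round(share_money_only, 3))  # about 0.015, i.e. roughly 1.5% of students
```

Even this toy version supports the point: near-universal agreement on one metric is compatible with almost nobody using that metric exclusively.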

Bonus Error

Self-help gurus tend to take the selfish point of view for granted. Policy wonks tend to take the social point of view for granted. Which viewpoint—selfish or social—is “correct”? Tough question. Instead of taking sides, the next two chapters sift through the evidence from both perspectives—and let the reader pick the right balance between looking out for number one and making the world a better place.

This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for “conflict”, “harmony”, “interests” and “classical” but didn’t find this covered elsewhere.)

I do think errors of omission are important but I still didn’t want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.

Bonus Error Two

The deeper response to laymen’s critique, though, is that economists are well aware money isn’t everything—and have an official solution. Namely: count everything people care about. The trick: For every benefit, ponder, “How much would I pay to obtain it?”

This doesn’t work because lots of things people care about are incommensurable. They’re in different dimensions that you can’t convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math

A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.

Potential Error

If university education correlates with higher income, that doesn’t mean it causes higher income. Maybe people who are likely to earn high incomes are more likely to go to university. There are also some other correlation-isn’t-causation counter-arguments that could be made. Is this addressed in the book? I didn’t find it, but I didn’t look nearly enough to know whether it’s covered. Actually, I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn’t really check. So I don’t know if there’s an error here, but I wanted to mention it. If I were to read the book more, this is something I’d look into.

Screen Recording

Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:

• My response to Error 1:

As I understand it your key points are this:

1. Some portion of jobs pay you like you’re a college graduate but don’t hire based on signalling. The marginal individual would be better served by going after those jobs instead of going to college.

2. Starting a business is another way for the marginal individual to outperform college as an investment.

3. Caplan doesn’t consider alternatives to college besides jumping into the labor force (I believe you would agree that, as an example, taking a welding course is one such alternative).

4. You can’t just count lost earnings as a “cost” if you’re going to actually consider these other options.

5. People go to college for other reasons besides money.

Here’s my response:

1.

I get what you’re saying here—in fact, I was offered a software engineering job out of high school and turned it down. I have a friend who made the same decision. I don’t think this argument works overall, though, for three reasons. First, getting your foot in the door is decently challenging. Second, it limits your employment options in a way that’s not practical. Third, college is an extremely good value proposition for the sort of person who could get a high-paying job out of high school.

So how could you get your foot in the door? In my case, a former teacher got me an internship that was meant for a college student—then I had to interview well. In the case of my friend, he had some really impressive projects on GitHub which got him noticed (for a summer job). So there’s an element of luck (having connections) or perhaps innate talent (not many people, regardless of whether they have a degree, build a really interesting solo project). Luck is luck, but perhaps a motivated person of average talent could build a solo project good enough to land them a job with no degree. Doing so, however, is a risky proposition. You’d be investing a lot of time and effort into a chance at a job. At the same time, you wouldn’t have a good understanding of the odds because it’s such an uncommon path.

Even if you landed one of those jobs, there’s a good chance it would be far away from your family because companies that are willing to hire someone straight out of high school are so few and far between. Even for someone who’s willing to move away from family, they’d then need to have the money saved up to make that leap. And if you do get the job? Better hope you don’t get fired. If so, you’ll have extremely limited employment options compared to someone with a degree because 99% of employers are simply going to throw away your resume. You may have to uproot your life and move again.

Lastly, for the few people who are in a really good position to get a high-paying job out of high school, college is a really good value proposition. If you’re impressive enough to land that job, you can probably also get a merit-based scholarship. You can also go to a top-tier school where you’ll be able to marry rich (as Caplan discusses), network, and take advantage of opportunities for research and entrepreneurship. Alternatively, you might be able to skate by without putting a lot of hours in and use your free time for something else, further reducing the opportunity cost of college.

2.

Only 40% of small businesses turn a profit (https://www.chamberofcommerce.org/small-business-statistics/). A 60% chance of making no money or losing money is an unacceptable risk for 18-year-olds. Where are the savings accounts that are going to pay for their food and housing if they aren’t making an income?

Plus, they’d need funding. A business loan for a new entrepreneur out of high school is not a thing. Lenders look at your personal credit score. They may require collateral. SBA loans look at invested equity.

VC-backed ventures are even riskier. Founders typically work for nothing for years (which recent high school grads just can’t do because they don’t have money saved up to live on) for a slim shot at getting rich.

Overall, entrepreneurship is a high risk, high reward option which is not a similar value proposition to college.

3.

Caplan mentions this in chapter 8, where he essentially argues that vocational education also pays. Comparing vocational and collegiate education is challenging due to limited data.

4.

If you don’t count opportunity costs, doesn’t that make college look even better?

5.

I agree with you—Caplan is way too dismissive of this.

• Are you looking to have a debate with me or just sharing your thoughts? Either way is fine; I just want to clarify.

• Should have specified. That was meant as my debate response under the rules.

• OK. Would you write a thesis statement that you think is true, and expect me to disagree with, that you’d like to debate? (Or a thesis for me that you want to refute would also work.) So we can clarify what we’re debating.

• I didn’t understand that by “debate” you meant an extended back and forth. I considered my response to be the debate. Sorry for the misunderstanding, but I am not interested in what I think you are looking for.

• [ ]
[deleted]
• You appear to be in violation of the game rules because you haven’t opted into a debate or opted out of debating.

• Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It’s ambiguous whether by “quality” you mean different quantity sizes, as in your example (substitution between small pains and a big pain), or you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don’t use the word “quality” to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.

I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/​kinds/​categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can’t be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there’s some confusion or lack of clarity about this.

I didn’t read linked material to try to clarify matters, except to notice that this linked paper abstract doesn’t use the word “quality”. I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I’ve now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word “quality”. What is going on!? PhilPapers is broken? Also this paper, despite using the word “quality” in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn’t clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you’re using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don’t know you’re doing this or see the connection between the issues.

I take a “strong point in favor” to refer to the following basic model:

We have a bunch of ideas to evaluate, compare, choose between, etc.

Each idea has points in favor and points against.

We weight and sum the points for each idea.

We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.

I think offsetting views are wrong, in both epistemology and axiology, and there’s overlap in the reasons for why they’re wrong, so it’s problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, then the article should have explained why this matters in order to work well independently.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn’t particularly matter – the field’s premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article topics are related to political philosophy issues (like classical liberalism’s social harmony vs. Marxism’s class warfare) that have already been debated a bunch, and it’d make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or design or organization or guiding principles of society, is not very productive or meaningful, so it’s no wonder it’s gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas, but that’s different than errors in the article itself. To point out errors in an article itself, when it’s summarizing other ideas, I’d have to point out that it has inaccurately summarized the ideas. That requires reading the cites and comparing them to the summaries, which I don’t think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn’t provide many potential targets for criticism. So that’s why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I’d suggest that @Teo Ajantaival watch my screen recording (below) which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people’s ideas. (When looking for criticism, consider: if I’m right, does it mean you’re wrong? If so, then it’s a claim by me about an error, even if I’m actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don’t know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it’s a good thing to bring up. I think my disagreement with the niche that the article is working within is actually more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it’s not an error by him. It could possibly be an error of omission if he sees this as a good point that he would have wanted to think of but didn’t. To me it looks pretty important and relevant, and problematic to just ignore like there’s no issue here.

If offsetting actually works – if you’re a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I’ll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn’t have one even if you pay it off in full every month, but I am not one of those people. I don’t think debt is very repugnant when it’s offset by assets like cash.

And similarly, spreading out the assets doesn’t particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.
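To make the monetary analogy concrete, here’s a tiny sketch (illustrative numbers only, not part of the original argument) showing that under a pure offsetting view of money, only the totals matter, not how the assets are distributed:

```python
# Toy model of monetary offsetting: net position depends only on totals,
# not on how assets are split across accounts.

def net_position(balances, debt):
    """Net worth: summed account balances minus outstanding debt."""
    return sum(balances) - debt

debt = 100  # $100 of credit card debt

print(net_position([100], debt))      # one account holding $100 -> 0
print(net_position([1] * 100, debt))  # 100 accounts with $1 each -> 0
```

Both cases come out identical: the debt is fully offset either way, which is the sense in which spreading out the assets doesn’t matter.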

If you really think offsetting works, then you shouldn’t find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven’t read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that’s really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren’t familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it was minimizing squirrels or maximizing bison, most of the conclusions are the same.

I commented on this some in my screen recording after the upvoters criticism, maybe 20min in.

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I’ve been complaining about them for years. There’s a huge, widespread issue where people basically ignore criticism – don’t engage with it and don’t give counter-arguments or solutions to the problems it raises – because it’s easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn’t written down in this post.

• If I post a quote here, the quoted text won’t be CC BY licensed, right? Even if I’m the author and I’m quoting myself?

What if I quote an entire article written by myself? Could I then post the full article text here without it becoming CC BY licensed, and without enabling anyone else to quote the entire thing (it’s a fair use violation for them to do it, but not for me to)?

I just wrote some comments today without being informed they would use the CC BY license, and I don’t want to license them that way. I guess I should go delete them? But I don’t think the license is revocable, so does that even work? But I never consented to the license, so it shouldn’t be in force in the first place...

Then I went to put up a post, which I’d already finished writing, and a thing about the license popped up. I did not agree to it, the popup is gone now, and I can’t get it back by reloading the page or starting a new post again.

I see nothing by the posting or commenting forms that persistently notifies people about the license. Just the one-time popup that disappears forever(?) even if not agreed to.

This is not OK.

Also basically I don’t want to use the forum anymore because I don’t want to use the CC BY license :(

• Oh, yeah, the tick-box also disappeared for me without my ticking on it, but I can still submit a new post. Weird. Probably a bug?

I do think making sure the licensing requirement is clear to all posters is pretty important.

• Thanks for reporting this! I believe the issue is now fixed. I am looking into what to do for people who tried to post earlier today while the issue was active. Edit: We’ve reset the opt-in for anyone who used it this morning, so you will have to check the box again. I apologize for the inconvenience.

My understanding is that persistent higher inflation may actually be very good for the U.S. government, as it’ll essentially erode the debt by eating away at the value of the loans, so that should be taken into account. Of course, it’s bad for stability, especially if you get runaway inflation, but with high employment mitigating some of the downside, this seems like a major positive factor for the U.S. gov given the amount of debt it holds.

Edit to add: Thanks for taking the time to write this up, found it enjoyable and was a fun thought experiment!

Re: Betterment, as far as I can tell, the biggest downside is that the list of available charities is quite limited, or am I missing something from https://www.betterment.com/help/tag/charitable-donations ?

• [ ]
[deleted]
• Did this ever happen?

• 1 Dec 2022 15:55 UTC
3 points
2 ∶ 0

So are you shorting US stonks or the USD?

At this point I’d think higher interest rates have knocked many overinflated stonks down to a reasonable level (at least based on the bloodbath that is the tech stock market over the last few months). That’s not to say, of course, that all other risks have been adequately priced in… like the most valuable company in the world, for instance, being hugely dependent on the manufacturing of a geopolitical competitor to the U.S.

• As both a member of the EA community and a retired mediocre stand-up act, I appreciate that you took the time to write this. You rightly highlight that some light-heartedness has benefited some writers within the EA community, and outside of it. My intuition is that the level of humour we can see being used is, give or take, the right level given the goals the community has. A lot of effort and money has been spent on making the community, along with many job opportunities within it, seem professional in the hope that capable individuals will infer that we mean business and consider EA on those terms.

A concept I referred to a lot when planning comedic performances, and public speaking occasions in general, is that an audience (dependent on the context and their reasons for being there) will have a given threshold for the humour they expect to find in your communication. To be funny, you must go beyond this threshold. Some way above that threshold is another boundary, a humour ceiling, defined by the social norms of the setting beyond which you no longer seem funny. Instead, you signal that you don’t understand the social norms around communication in that context. In stand-up, the humour threshold is really high, so it’s hard to qualify as funny at all, but nigh on impossible to be too funny. In presenting a dry subject to your boss and colleagues, the humour threshold is low and anyone could exceed it with a bit of practice, but landing safely between this threshold and the marginally greater humour ceiling is genuinely hard. You will too easily be too funny and seem a liability. When reading an obituary at a funeral, the threshold is set at essentially zero and the ceiling is coincident with it, only allowing the exemption of jokes told to highlight the cherished memories you have of the deceased.

I explain this because it seems to me untenably hard to commit to using humour all-out, or anywhere close to that, as a communicative and persuasive aid for EA without signalling that we do not “mean business”. Stick man illustrations and starchy acronyms, used sparingly, fall within the threshold-ceiling window for the work MacAskill and Karnofsky are trying to publicise, so these gags play out well. I don’t think they’ve got that much overhead clearance before readers would infer a lack of appreciation for the aesthetics of academic writing, and thus that they shouldn’t be taken seriously.

Since the advent of democracy and ancient Greek plays using jokes to point out the mistakes made by politicians of the day, comedy has proven a very effective method for poking holes in bad ideas and forcing people to change them, lest they be further laughed at. This seems to be the running theme of the cases you mention from John Oliver’s career. Much harder, I think, to propose an idea of your own that you wish people to believe is good and use humour to enhance that perception.

Thanks so much for your insights. Can’t really argue with what you say here: I think you articulated the idea of subtlety and the importance of correct application with humor far better than I did. Admittedly, John Oliver is an extreme example of humor, perhaps so extreme as to be unhelpful as a model for EA. Overall, maybe my use of the word “humor” in this post was too strong. I really liked Tiger Lava Lamp’s comment below on “microhumor,” which Scott Alexander describes as “things that aren’t a joke in the laugh-out-loud told-by-a-comedian sense, but still put the tiniest ghost of a smile on your reader’s face while they’re skimming through them.” This seems to be a more accurate description of the MacAskill and Karnofsky examples I gave. It seems like we both have a sense that something like Alexander’s microhumor can fall within EA’s humor threshold and be an effective tool for EA to an extent.

• Agreed—Scott Alexander does this very well, as does Yudkowsky in Rationality: A-Z. Both of these also benefit from being blogs of their own creation, where they can dictate a lot of the norms, and so I expect to have a fair bit more slack in how high the ceiling is.

• Our impact market platform might help with this.

Obviously I’m biased here, and there are a number of other good approaches too (funds, Eigentrust, topping up Open Phil grants, donor lotteries, etc.).

Our platform allows anyone to publish a project proposal. Soon we’ll also have a Q & A system to replace the various Google forms that are currently used for grant applications.

If there’s no prize contest going on, it’s basically a centralized platform for grant applications, like a Facebook fundraiser but more geared toward using market mechanisms to highlight particularly promising projects.

If there’s a prize contest going on, it’s a proper impact market where even profit-oriented investors can seek to seed-invest into projects where they can make a sufficiently big profit in expectation.

This is a much too condensed summary, but this article that Amber Dawn has written for us should be more accessible.

• 1 Dec 2022 15:21 UTC
16 points
2 ∶ 1

I’m sure others can do a better job responding to this than I can, but a few thoughts:

• It’s true that the high US debt/​GDP ratio is not harmless, especially insofar as it leaves less fiscal headroom to deal with future recessions, wars etc.

• If investors thought the US were likely to need to default or inflate away debt, then 30-year treasury rates would be extraordinarily high (since otherwise they wouldn’t purchase them). That’s the opposite of what we see, where long-run interest rates are less than short-run interest rates.

The US economy is growing faster than inflation. So cost-of-living adjustments aren’t such a big deal, since tax revenue is growing as well.

I’d expect US debt to grow much less in a potential upcoming recession than in 2008 or 2020. It made sense to do a lot of fiscal stimulus in response to the 2008 financial crisis, but if the Fed intentionally triggers a recession as part of controlling inflation, fiscal stimulus doesn’t make as much sense.

• One more note. You say

The US gov tax revenue is $4.9T. According to the CBO (Congressional Budget Office), mandatory spending will cost $3.7 trillion in 2022. This includes all entitlements and expenditures that are signed into legislation and are considered absolute obligations, such as Social Security and Medicare. Then we add in the estimated $800B of defense spending (which is contractual), and we get a total of $4.5T.

$4.9T revenue - $3.7T entitlements - $800B defense = $400B leftover for interest expense.

The problem is interest expense itself is currently costing $482 billion. So we have at least $82 billion in deficit, which means we have to borrow even more money to cover the interest expense we can’t pay. Note that this doesn’t even take into account some of the other discretionary spending as well as unfunded liabilities.

This is true of the current budget, but the US could raise taxes to cover interest payments in the future. The US has relatively low taxes compared to other rich countries. Raising the US tax/GDP ratio to that of Germany would raise about $2T more per year in taxes than the US currently takes in.
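For reference, the arithmetic in the quoted passage can be sanity-checked with a quick script (all figures in $billions, taken directly from the quote; they are the quoted post’s numbers, not independently verified data):

```python
# Sanity-checking the quoted 2022 federal budget arithmetic (in $billions).
revenue = 4_900    # tax revenue, as quoted
mandatory = 3_700  # mandatory spending (entitlements etc.), as quoted
defense = 800      # defense spending, as quoted
interest = 482     # interest expense, as quoted

leftover = revenue - mandatory - defense
print(leftover)              # 400 -> what's left for interest expense
print(interest - leftover)   # 82  -> shortfall that must be borrowed
```

The quoted $400B leftover and $82B shortfall do follow from those inputs; the disagreement above is about the inputs (tax rates could rise), not the subtraction.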

• Could I get a list of every current/​former billionaire who has committed a lot of money to EA (this is for a post I will probably never finish writing)? The ones I know off the top of my head are:

• SBF (obviously)

• Dustin Moskovitz

• Jaan Tallinn

have I missed anyone?

Ben Delo, before his scandal.

I think Musk part-funded OpenAI.

Founders Pledge has some paper billionaires.

• Out of curiosity, what would the post be about?

• The angle is “maybe existing rich people becoming EAs is better than the other way round”. You can probably guess the argument...

• Wow. I hadn’t realised Jaan Tallinn was a billionaire.

I’d be interested to know, if any of the powers that be are reading, to what extent the Long Term Future Fund could step in to take up the slack left by FTX in regard to the most promising projects now lacking funding. This would seem a centralised way for smaller donors to play their part, without being blighted by ignorance as to what all the other small donors are funding.

• I’m confused about how you’re dividing up the three ethical paradigms. I know you said your categories were excessively simplistic. But I’m not sure they even roughly approximate my background knowledge of the three systems, and they don’t seem like places you’d want to draw the boundaries in any case.

For example, my reading of Kant, a major deontological thinker, is that one identifies a maxim by asking about the effect on society if that maxim were universalized. That seems to be looking at an action at time T1, and evaluating the effects at times after T1 should that action be considered morally permissible and therefore repeated. That doesn’t seem to be a process of looking “causally upstream” of the act.

When I’ve seen references to virtue ethics, they usually seem to involve arbitrating the morality of the act via some sort of organic discussion within one’s moral community. I don’t think most virtue ethicists would think that if we could hook somebody up to a brain scrambler that changed their psychological state to something more or less tasteful immediately before the act, that this could somehow make the act more or less moral. I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

And of course, we do have rule utilitarianism, which doesn’t judge individual actions by their downstream consequences, but rules for actions.

Honestly, I’ve never quite understood the idea that consequentialism, deontology, and virtue ethics are carving morality at the joints. That’s a strong assertion to make, and it seems like you have to bend these moral traditions to fit the categorization scheme. I haven’t seen a natural categorization scheme that fits them like a glove and yet neatly distinguishes one from the other.

• You’re absolutely right to criticize that section! It’s just not good. I will add more warning labels/​caveats to it ASAP. This is always the pitfall of doing YAABINE.

That said, I do think the three families can be divided up based on what they take to be explanatorily fundamental. That’s what I was trying to do (even though I probably failed). The slogan goes like this: VE is “all about” what kind of person we should be, DE is “all about” what duties we have, and Consequentialism is “all about” the consequences of our actions. Character, duty, consequences – three key moral terms. (And natural joints? Who knows). Theories from each family will have something to say about all three terms, but each family of theory takes a different term to be explanatorily fundamental.

So you’re absolutely right that, in their judgments of particular cases, they can all appeal to facts up and down the causal stream (e.g. there is no reason consequentialists can’t refer to promises made earlier when trying to determine the consequences of an action). Maybe another way to put this: the decision procedures proposed by the various theories take all sorts of facts as inputs. You give a number of examples of this. But ultimately, what sorts of facts unify those various judgments under a common explanation according to each family of theory? That’s what I was trying to point at. I thought one way to divvy up those explanatorily fundamental facts was by their position along the causal stream, but maybe I was wrong. I’m really not sure!

I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

I completely agree that actual virtue ethicists would not do so, but the theory many of them are implicitly attached to (“do as the virtuous agent would do, for all the reasons the virtuous agent would do it”) does seem to judge people based on how you were feeling/​what you were thinking right before you did it.

• Thanks for clarifying!

The big distinction I think needs to be made is between offering a guide to extant consensus on moral paradigms, and proposing your own view on how moral paradigms ought to be divided up. It might not really be possible to give an appropriate summary of moral paradigms in the space you’ve allotted to yourself, just as I wouldn’t want to try and sum up, say, “indigenous vs Western environmentalist paradigms” in the space of a couple paragraphs.

• Will the results of this research project be published? I’d really like to have a better sense of biosecurity risk in numbers.

• 1 Dec 2022 13:11 UTC
6 points
1 ∶ 0

For anyone interested, especially university students, here’s my (unsolicited) story of working at SoGive:

Two years out my three main takeaways were probably (1) getting feedback on my writing and practice writing for EA contexts, (2) experience with charity evaluation, and (3) support exploring topics of my own interest, plus (3.5) I really liked working with Sanjay.

I volunteered with SoGive during the last year of my bachelor’s and later went on to work as an RA for the Founders Pledge Climate Team. During undergrad years prior to SoGive, I was an RA for an academic research lab at my uni (sciences), had a campus job as a tour guide, held a leadership position with my student co-op, and did a data science internship.

Critically, the things I benefitted from most while volunteering with SoGive were things my other roles didn’t provide. I think specializing in EA research too early probably isn’t a great long-term career move, and diversifying your extracurriculars to get a healthy mixture of community, fun, and targeted skill/career capital building is really important for both well-being and intellectual growth. Because my university had strong research programs for undergraduates, academic labs were probably a more direct way of “testing my fit” for research, but I expect this won’t be the case for most students. This work was a good fit for me as an undergraduate, but especially so because it met criteria others didn’t and provided mentorship from someone I respect (Sanjay).

TL;DR: I’d encourage interested students to check out this program and listen to Alex Lawsen’s 80k episode on advice for students.

• 1 Dec 2022 13:10 UTC
3 points
0 ∶ 1

Big ask. Humour is incredibly difficult.

I’m quite skeptical of post-hoc articles with titles like ‘X was no surprise’; they’re usually full of hindsight bias. Like, if it was no surprise, did you predict it beforehand?

Although there’s almost nothing about SBF here, is this part 1 of a series?

You’re right that post-hoc articles are usually full of hindsight bias, making them a lot less valuable. That’s why I tried not to make the article too much about SBF (no, this is not part 1 of a series). I laid that out from the beginning:

Please don’t read too much into the armchair psychological diagnosis from a complete amateur – that isn’t the point.

If you want a prediction, I give one right after this:

The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if many EAs suffer (in varying degrees) from “moral schizophrenia”

I reiterate this when I say “I fear it is widespread in this community” where “it” is a certain coldness toward ethical choices (and other choices that would normally be full of affect).

SBF is topical and I thought this was a good opportunity to highlight this lesson about not engaging in excessive reasoning. But I agree my title isn’t great. Suggestions?

• Ok, I didn’t pick up that was where the prediction was in the article. I think of (good) predictions as having a clear, falsifiable hypothesis. Whereas this seems to be predicting … that virtue ethicists continue believing whatever they already believed about EAs?

a) Super unclear as a descriptive term. I understand in mainstream culture it’s seen as a kind of Jekyll/​Hyde split-personality thing, so maybe it’s meant to describe that. But I’m pretty sure that’s an inaccurate description of actual schizophrenia.

b) Harmful to those who have schizophrenia when used in this kind of negative fashion, especially as it seems to propagate the Jekyll/​Hyde false belief about the condition.

Lastly, the ‘moral schizophrenia’/​coldness described here seems much more like a straw man of EAs than an accurate description of any EAs I’ve met. The EAs I know IRL are warm and generous towards their families and friends, and don’t seem to regard being that way as at all incompatible with EA-style reasoning. Sure, online and even IRL discussions can seem dry, but it would be hard to have any discussions if we had to express, with our emotions, the magnitude of what was being discussed.

• Regarding the term “moral schizophrenia”:
As I said to AllAmericanBreakfast, I wholeheartedly agree the term is outdated and inaccurate! Hence the scare quotes and the caveat I put in the heading of the same name. But obviously I underestimated how bad the term was, since everyone is telling me to change it. I’m open to suggestions! EDIT: I replaced it with “internal moral disharmony.” Kind of a mouthful, but good enough for a blog post.

Regarding predictions:
You’re right, that wasn’t a very exact prediction (mostly because internal moral disharmony is going to be hard to measure). Here is a falsifiable claim that I stand by and that, if true, would be evidence of internal moral disharmony:

I claim that one’s level of engagement with the LW/​EA rationalist community weakly predicts the degree to which one adopts a maximizer’s mindset when confronted with moral/​normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one’s decision (or the object at the center of one’s decision) in such scenarios.

More specifically, I predict that, above a certain threshold of engagement, increased engagement with the LW/​EA community correlates with an increase in the maximizer’s mindset, an increase in cognitive dissonance, and a decrease in positive affective attachment in the aforementioned scenarios.

The hypothesis for why I think this correlation exists is mostly at the end of here and here.

But more generally, must a criticism of, or concern for, the EA community come in the form of a prediction? I’m really just trying to point out a hazard for those who go in for Rationalism/​Consequentialism. If everyone has avoided it, that’s great! But there seems to be evidence that some have failed to avoid it, and that we might want to take further precautions. SBF was very much one of EA’s own: his comments therefore merit some EA introspection. I’m just throwing in my two cents.

Regarding actual EAs:
I would be happy to learn that few EAs actually have “one thought too many”! But I do know it’s a thing, that some have suffered it (personally I’ve struggled with it at times, and it’s literally in Mill’s biography). More generally, the ills of adopting a maximizer’s mindset too often are well documented. I thought it was in the community’s interest to raise awareness about it. I’m certainly not trying to demonize anyone: if someone in this community does suffer it, my first suspect would be the culture surrounding Consequentialism, or the theory itself, not some particular weakness on the individual’s part.

Regarding dry discussion on topics of incredible magnitude:
That’s fair. I’m not saying being dry and calculating is always wrong. I’m just saying one should be careful about getting too comfortable with that mindset, lest one start slipping into it when one shouldn’t. That seems like something rationalists need to be especially mindful of.

• The Mayo Clinic says of schizophrenia:

“Schizophrenia is characterized by thoughts or experiences that seem out of touch with reality, disorganized speech or behavior, and decreased participation in daily activities. Difficulty with concentration and memory may also be present.”

I don’t see the analogy between schizophrenia and “a certain coldness toward ethical choices,” and if it were me, I’d avoid using mental health problems as analogies, unless the analogy is exact.

• The term is certainly outdated and an inaccurate analogy, hence the scare quotes and the caveat I put in the heading of the same name. It’s the term that Stocker uses, though, and I haven’t seen another one (but maybe I missed it). The description “tendency to suffer cognitive dissonance in moral thinking” is much more accurate, but not succinct enough to make for a good name. I’m open to suggestions!

• The term I’d probably use is hypocrisy. Usually, we say that hypocrisy is when one’s behaviors don’t match one’s moral standards. But it can also take on other meanings. The film The Big Short has a great scene in which one hypocrite, whose behavior doesn’t match her stated moral standards, accuses FrontPoint partners of being hypocrites, because their true motivations (making money by convincing her to rate the mortgage bonds they are shorting appropriately) don’t match their stated ethical rationales (combating fraud).

On Wikipedia, I also found definitions from David Runciman and Michael Gerson showing that hypocrisy can go beyond a behavior/​ethical standards mismatch:

According to British political philosopher David Runciman, “Other kinds of hypocritical deception include claims to knowledge that one lacks, claims to a consistency that one cannot sustain, claims to a loyalty that one does not possess, claims to an identity that one does not hold”.[2] American political journalist Michael Gerson says that political hypocrisy is “the conscious use of a mask to fool the public and gain political benefit”.[3]

I think “motivational hypocrisy” might be a clearer term than “moral schizophrenia” for indicating a motives/​ethical-rationale mismatch.

• Thanks for the suggestion. I ended up going with “internal moral disharmony” since it’s innocuous and accurate enough. I think “hypocrisy” is too strong and too narrow: it’s a species of internal moral disharmony (closely related to the “extreme case” in Stocker’s terms), one which seems to imply no feelings of remorse or frustration with oneself regarding the disharmony. I wanted to focus on the more “moderate case” in which the disharmony is not too strong, one feels a cognitive dissonance, and one attempts to resolve the disharmony so as not to be a hypocrite.

• I think that’s fine too.

• I think “hypocrisy” is too strong and too narrow

Fwiw I consider “hypocrisy” to be a much weaker accusation than “schizophrenia”

• I meant strong relative to “internal moral disharmony.” But also, am I to understand people are reading the label of “schizophrenia” as an accusation? It’s a disorder that one gets through no choice of one’s own: you can’t be blamed for having it. Hypocrisy, as I understand it, is something we have control over and are therefore responsible for avoiding or getting rid of in ourselves.

At most, Stocker is blaming Consequentialism and DE for inducing moral schizophrenia. But it’s the theory that’s at fault, not the person who suffers it!

• Yeah, I think this is fair. I probably didn’t read you very carefully or fairly. However, it is hard to control the connotations of words, and I have to admit I had a slightly negative visceral reaction to having what I believe to be my sincerely held moral views (views I tried pretty hard to live up to, and made large sacrifices for) medicalized and dismissed so casually.

• Yikes! Thank you for letting me know! Clearly a very poor choice of words: that was not at all my intent!

To be clear, I agree with EAs on many many issues. I just fear they suffer from “overthinking ethical stuff too often” if you will.

• Thanks for responding! (upvoted)

On my end, I’m sorry if my words sounded too strong or emotive.

Separately, I strongly disagree that we suffer from overthinking ethical stuff too much. I don’t think SBF’s problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations. I would guess that if he actually consulted senior EA leaders or researchers on the morality of his actions, this would predictably have resulted in less fraud.