LessWrong for EA

Last edit: 3 May 2022 2:45 UTC

Use the LessWrong for EA (or LW4EA) tag for any EA-relevant LessWrong post, including posts from the weekly LessWrong repost & low-commitment discussion group.

My slack budget: 3 surprise problems per week.

1 Jun 2022 19:46 UTC
55 points

LW4EA: Being the (Pareto) Best in the World

22 Mar 2022 2:58 UTC
14 points
(www.lesswrong.com)

LW4EA: How Much is Your Time Worth?

19 Apr 2022 2:03 UTC
9 points
(www.lesswrong.com)

Low-Commitment Less Wrong Book (EG Article) Club

10 Feb 2022 15:25 UTC
39 points

LW4EA: Humans are not automatically strategic

28 Feb 2022 18:39 UTC
14 points
(www.lesswrong.com)

LW4EA: Privileging the Question

7 Mar 2022 16:38 UTC
13 points
(www.lesswrong.com)

14 Mar 2022 17:26 UTC
18 points
(www.lesswrong.com)

29 Mar 2022 3:16 UTC
5 points
(www.lesswrong.com)

LW4EA: Yes Requires the Possibility of No

4 Apr 2022 16:00 UTC
13 points
(www.lesswrong.com)

LW4EA: Can the Chain Still Hold You?

12 Apr 2022 18:26 UTC
9 points
(www.lesswrong.com)

LW4EA: Philosophical Landmines

26 Apr 2022 2:53 UTC
9 points
(www.lesswrong.com)

LW4EA: How to Not Lose an Argument

3 May 2022 2:43 UTC
5 points
(www.lesswrong.com)

LW4EA: Beyond Astronomical Waste

10 May 2022 14:50 UTC
9 points
(www.lesswrong.com)

LW4EA: Some cruxes on impactful alternatives to AI policy work

17 May 2022 3:05 UTC
11 points
(www.lesswrong.com)

LW4EA: 16 types of useful predictions

24 May 2022 3:19 UTC
14 points
(www.lesswrong.com)

LW4EA: How to Be Happy

31 May 2022 22:42 UTC
8 points
(www.lesswrong.com)

LW4EA: Sabbath hard and go home

7 Jun 2022 2:46 UTC
10 points
(www.lesswrong.com)

LW4EA: Value of Information: Four Examples

14 Jun 2022 2:26 UTC
5 points
(www.lesswrong.com)

LW4EA: Is Success the Enemy of Freedom? (Full)

21 Jun 2022 15:07 UTC
5 points
(www.lesswrong.com)

LW4EA: To listen well, get curious

4 Jul 2022 15:27 UTC
10 points
(www.lesswrong.com)

LW4EA: When Money Is Abundant, Knowledge Is The Real Wealth

9 Aug 2022 13:52 UTC
25 points
(www.lesswrong.com)

LW4EA: Epistemic Legibility

16 Aug 2022 15:55 UTC
5 points
(www.lesswrong.com)

LW4EA: Six economics misconceptions of mine which I’ve resolved over the last few years

30 Aug 2022 15:20 UTC
8 points
(www.lesswrong.com)

LW4EA: Science in a High-Dimensional World

6 Sep 2022 3:06 UTC
11 points
(www.lesswrong.com)

Questioning the Foundations of EA

27 Aug 2022 20:44 UTC
97 points

LW4EA: Incorrect hypotheses point to correct observations

23 Sep 2022 12:52 UTC
5 points
(www.lesswrong.com)

LW4EA: A game of mattering

27 Sep 2022 2:32 UTC
16 points
(www.lesswrong.com)

LW4EA: Fact Posts: How and Why

10 Oct 2022 12:08 UTC
11 points
(www.lesswrong.com)
• If you haven’t extensively and successfully dealt with the media in a context where the media do not start out nicely inclined towards you (i.e., your past media experience at the Center for Rare Diseases in Cute Puppies does not count), you are not qualified to give this advice. It should be given by somebody who understands how bad journalism gets and what needs to be done to avoid the usual and average negative outcome, or not at all.

• 5 Dec 2022 2:41 UTC
2 points

Thanks for sharing this. While I’m less on board than some people here with the idea that EA folks should have been able to detect the fraud in advance, “charity badly burned by fraudster” happens often enough to be an ascertainable risk, and “crypto wealth explodes, sometimes with at least an odor of foul play” is a regular occurrence. So I don’t think this was as unforeseeable as the black swan label would suggest.

• 5 Dec 2022 2:33 UTC
2 points

Any definitive “you aren’t getting any money back” statement is a while off, and SBF/FTX will probably be oldish news by then. I think the more likely media focus points will be his likely indictment, plea, trial, and/or sentencing.

• I’ve had vaguely similar thoughts about youth and levels of professional experience, but you have articulated them much better than I have. Thanks for writing this.

I’d be very happy to see an “EA managers” Slack (or some other forum/conversation space/community), and I would be very happy to join.

• Funds are a relatively new way for donors to coordinate their giving to maximise their impact.

I don’t think the idea of a charitable fund is especially new? More: https://forum.effectivealtruism.org/posts/9Axdtzusq9wSKixEZ/historical-notes-on-charitable-funds

• This post hit on a good topic and gave it good nuance, minus the parts about “society lead by the elite” and “self obvious statements”.

• Working on a forum post about animals and longtermism. I have an outline in a Google Doc and would love to have collaborators, or just people to give feedback on the content.

• This is a great question. I think the answer depends on the type of storage you’re doing.

If you have a totally static lump of data that you want to encode on a hard drive and not touch for a billion years, I think the challenge is mostly in designing a type of storage unit that won’t age. Digital error correction won’t help if your whole magnetism-based hard drive loses its magnetism. I’m not sure how hard this is.

But I think more realistically, you want to use a type of hardware that you regularly use, regularly service, and where you can copy the information to a new hard drive when one is about to fail. So I’ll answer the question in that context.

As an error rate, let’s use the failure rate of 3.7e-9 per byte per month ~= 1.5e-11 per bit per day from this Stack Overflow reply. (It’s for RAM, which I think is more volatile than e.g. SSD storage, and certainly not optimised for stability, so you could probably get that down a lot.)

Let’s use the following as an error correction method: Each bit is represented by N bits; for any computation the computer does, it will use the majority vote of the N bits; and once per day,[1] each bit is reset to the majority vote of its group of bits.

If so...

• for N=1, the probability that a bit is stable for 1e9 years is ~exp(-1.5e-11*365*1e9)=0.4%. Yikes!

• for N=3, the probability that 2 bit flips happen in a single day is ~3*(1.5e-11)^2 and so the probability that a group of bits is stable for 1e9 years is ~exp(-3*(1.5e-11)^2*365*1e9)=1-2e-10. Much better, but there will probably still be a million errors in that petabyte of data.

• for N=5, the probability that 3 bit flips happen in a single day is ~(5 choose 3)*(1.5e-11)^3 and so the probability that the whole petabyte of data is safe for 1e9 years is ~99.99%. And so on this scheme, it seems that 5 petabytes of storage is enough to make 1 petabyte stable for a billion years.
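For anyone who wants to check, the bullet-point numbers can be reproduced with a short script (a sketch assuming the same 1.5e-11 flips per bit per day, the daily majority-vote reset, and 8e15 bits per petabyte):

```python
import math

RATE = 1.5e-11    # assumed bit-flip probability per bit per day (figure cited above)
DAYS = 365 * 1e9  # days in a billion years
BITS = 8e15       # bits in a petabyte

# N=1: no redundancy, so any single flip is unrecoverable
p_n1 = math.exp(-RATE * DAYS)  # roughly 0.4%

# N=3: a group is lost if 2+ of its 3 bits flip in the same day, before the reset
fail3 = 3 * RATE**2                   # (3 choose 2) ways to pick the two flipped bits
p_n3_group = math.exp(-fail3 * DAYS)  # per-group survival, ~1 - 2.5e-10
errors_n3 = BITS * fail3 * DAYS       # expected lost groups per petabyte, ~2 million

# N=5: a group is lost if 3+ of its 5 bits flip in the same day
fail5 = math.comb(5, 3) * RATE**3
p_n5_petabyte = math.exp(-fail5 * DAYS * BITS)  # whole-petabyte survival, ~99.99%
```

The approximations ignore the chance of more than the minimum fatal number of flips in a single day, which is negligible at these rates.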

Based on the discussion here, I think the errors in doing the majority-voting calculations are negligible compared to the cosmic ray calculations. At least if you do it cleverly, so that you don’t get too many correlations and ruin your redundancy (which there are ways to do according to results on error-correcting computations — though I’m not sure if they might require some fixed amount of extra storage space to do this, in which case you might need N somewhat greater than 5).

Now this scheme requires that you have a functioning civilization that can provide electricity for the computer, that can replace the hardware when it starts failing, and stuff — but that’s all things that we wanted to have anyway. And any essential component of that civilization can run on similarly error-corrected hardware.

And to account for larger-scale problems than cosmic rays (e.g. a local earthquake throws a hard drive to the ground and shatters it, or you accidentally erase a file when you were supposed to make a copy of it), you’d probably want backup copies of the petabyte in different places across the Earth, which you replace each time something happens to one of them. If there’s an 0.1% chance of that happening on any one day (corresponding to once per 3 years, which seems like an overestimate if you’re careful), and you immediately notice it and replace the copy within a day, and you have 5 copies in total, the probability that at least one of them keeps working at all times is ~exp(-(0.001)^5*365*1e9)~=99.96%. So combined with the previous 5, that’d be a multiple of 5*5=25.

This felt enlightening. I’ll add a link to this comment from the doc.

1. ^

Using a day here rather than an hour or a month isn’t super-motivated. If you reset things very frequently, you might interfere with normal use of the computer, and errors in the resetting-operation might start to dominate the errors from cosmic rays. But I think a day should be above the threshold where that’s much of an issue.

• Just want to say that I love this format:

1. Clear title

2. Summary

3. Clear introduction to the concept

4. Clear description of the relevance and limitations of the concept

5. Application of the concept accompanied by clearly marked recommendations

• 4 Dec 2022 23:33 UTC
−2 points

On the forum in particular and in EA discourse in general, there is a tendency to give less weight/be more critical of posts that are more emotion-heavy and less rational.

You say that like it is a bad thing. I would suggest reading [Against Empathy](https://en.wikipedia.org/wiki/Against_Empathy#Empathy_versus_Compassion) to understand why it is a very good thing.

We need fewer emotion-driven decisions and more rational decisions; that is the whole point of EA.

• GiveWell has identified several new giving opportunities! They’re just not listed under “top charities” at the moment. See info on their aims for next year here: https://blog.givewell.org/2022/11/23/giving-recommendations-2022/

• Thanks for writing all of this, Ben. I agree with everything you have said and like your ideas. I also think that we should:

• nudge more EAs to get research (and other) management experience in industry before returning to work in new EA organisations.

• have a management-consultancy-type org or network for providing fractional management oversight/advice to new organisations. For instance, 10 hours a week to advise on how to set up a research team, or to sit in on supervision meetings and pass on best practices.

• have a norm that managers at places like Rethink occasionally do placements at other research organisations to pass on their knowledge and best practices

• have some people who are involved across many established and new research orgs (maybe as funders) and incentivised to understand and speed up the dissemination of collective best practice (e.g., via talks with CEOs/founders or writing up what they have learned)

• I love the donation live-streams on your website!

• I of course agree that we should take into account the size of the future. I somewhat disagree with this:

the goals “minimize existential risk” and “minimize existential risk conditional on a possibly big future” are likely to conflict in practice

Do you have examples in mind? I can think of a couple related to anthropics, but their decision-relevance is unclear.

No matter what the universe is like, or whether we’re in a simulation, or whatever, averting x-risk seems roughly equivalent to increasing future option value, which seems roughly equivalent to being able to make the most of the universe, whatever it’s like.

• Mostly no, I think: an AI not capable of taking over without descriptions of AI takeover isn’t much more threatening after it reads a few descriptions of AI takeover. (In fact, most such descriptions are not optimized for realism, so I think it’s not even clear that AI reading such descriptions is net negative! The marginal description from this community is probably reasonably realistic, but lacks details on the hard steps, because we don’t know how to do them.) That said, if you think of an excellent tactic to help an AI take over the world, publishing it could cause a rogue AI to learn about it.

• I endorse all of this, but would add that the AI in the last sentence “learning about it” still wouldn’t actually matter, and hence it’s not “mostly no” but rather just “no”.

• 4 Dec 2022 20:47 UTC
3 points

Thanks for writing this!

Your pre-screening steps look a little different to mine, which could be a regional thing given we were both in the process at around the same time.

Your guide is excellent! (And I have added a link to it from my much more hurried post :))

• 4 Dec 2022 20:30 UTC
2 points

Markus Amalthea Magnuson at https://altruistic.agency/

• I would argue that the majority of software products aim to make a positive impact, and my guess would be that the products with the biggest impact are wholly unrelated to EA. Having said that, here are a few that seem in line with the spirit of the question:

• Spark Wave

• QURI

• Computation Democracy Project

• Various other prediction markets, like Manifold

• There are people who think that building any kind of ML-intensive product is not a good thing, even if the model isn’t SOTA, and so might disagree that what Ought are doing is good. I neither endorse nor disendorse this opinion, but if you disagree with it then you might also like co:here

• Toby—interesting essay. But I’m struggling to find any rational or emotive force in your argument that ‘strong longtermism tells us to look at the set of possible theories about the world, pick the one in which the future is largest, and, if it is large enough, act as if that theory were true’.

The problem is that this leads to a couple of weird edge cases.

First, if we live in a ‘quantum multiverse’, in which there are quadrillions of time-lines branching off every microsecond into new universes, then the future is very very large indeed, but any decisions we make to influence it seem irrelevant, insofar as we’d make any possible decision in some branching time-line.

Second, the largest possible futures seem associated more with infinite religious afterlives than with scientifically plausible theories. Should ‘strong longtermists’ simply adopt Christian metaphysics, on the assumption that an infinite afterlife in heaven would be really cool, compared to any atheist metaphysics?

I’d welcome any thoughts about these examples.

• 4 Dec 2022 19:17 UTC
4 points

I don’t come here often, and don’t plan to write something verbose whatsoever. I have one simple thing to say, speaking as a neuroscientist—why on earth would you think something like the number of neurons in a brain is a good proxy for moral weight?

What age group has the most neurons in humans? Babies. It’s babies. The ones you constantly have to tell what is right and what is wrong? To stop hitting their siblings, being selfish, etc.? This is not a nuanced conversation; it is one that, if you subscribe to scientific materialism, is quickly and efficiently done away with.

To utilize logic in parallel with the way it’s been done here so far: in humans, decreasing neural count is correlated with increased moral intelligence (incl. the development of empathy, moral reasoning, etc.)

• (tldr: We might not be psychologically capable of handling highly emotionally compelling portrayals of our largest-scope, most important cause areas.)

Eleni & Luis—this is a fascinating and thoughtful post, and raises some insightful pros and cons of EA adopting more mainstream emotionally appealing marketing & PR methods.

As someone who’s worked in psychology for almost 40 years, and done a fair amount of consulting with market research companies, ad agencies, and consumer product companies, I support the general idea of some EAs becoming a bit more familiar with the psychology of persuasion, marketing, and influence.

However, I want to point out another possible downside of turning our rational interest in tractable, neglected, scope-sensitive problems into more emotionally compelling images and narratives.

The key problem is that a lot of EA deals with such large-scope problems (e.g. animal suffering, nuclear war, pandemics, longtermist cosmic stakes) that any emotionally impactful, truly compelling, highly memorable presentation of these problems could be extremely depressing, psychologically paralyzing, and traumatically damaging. It would be like being trapped in the ‘Total Perspective Vortex’ in the Hitchhiker’s Guide to the Galaxy series.

I think the current EA messaging/influence strategies embody a tacit understanding that we’re willing to engage with large-scope problems that would seem utterly overwhelming if we really felt the true suffering impact of the problems we’re fighting against, all day, every day.

Other charities have the luxury of presenting their relatively small-scale cause areas using emotionally compelling narratives, precisely because ordinary people can handle the scale of those problems without losing their sanity. For example, the Make a Wish Foundation highlights the suffering of individual kids with terminal illnesses; that’s sad, tragic, and poignant, but emotionally engaging with their plight isn’t psychologically crippling.

By contrast, any truly emotionally compelling portrayals I’ve seen of truly large-scope, high-stakes, global-scale or cosmic-scale EA cause areas might be so traumatizing to most people that they’d become counter-productive. Just as our brains didn’t evolve to reason clearly about how to reduce suffering beyond the scale of prehistoric hominid clans, our brains might not be prepared to handle emotionally compelling presentations of suffering beyond that scale.

Consider some specific examples.

Wild animal suffering: In my ‘Psychology of Effective Altruism’ class, I’ve learned that I simply can’t assign students the Brian Tomasik essay on wild animal suffering, because it can induce serious depression, extreme anxiety, and even panic attacks. It’s just too emotionally compelling, and the scale of the problem is too overwhelming. Similar considerations apply for sharing with students any truly compelling depictions of animal suffering under factory farming. (Vegan activists have learned from bitter experience that there’s an optimal degree of factory farming gore that they can show to non-vegans to convince them to give up meat, and that degree is non-zero, but it’s not very high.)

S-risks: the Black Mirror episode ‘White Christmas’, which depicts a future in which uploaded criminals are tormented in virtual hells for many subjective millennia, is simply too distressing for most people to handle; it haunts their dreams for weeks. I still wish I’d never seen it. (The Iain M. Banks novel ‘Surface Detail’ (2010) raises similar issues.) The S-risks of virtual hells are very important to consider, but it might be most productive to consider those issues at a somewhat abstract, rational level, rather than a truly visceral level.

Nuclear war: Most people have seen science fiction movie depictions of global thermonuclear war, which induce varying levels of fear, horror, disgust, and dread. But very few movies, TV series, books, etc. really try to capture the full scope and scale of the impact of a nuclear holocaust on billions of people. Any such attempt would simply be too depressing and horrifying to engage with. (Consider also movies like The Road (2009) that try to do an unflinchingly realistic portrayal of post-apocalyptic life—they’re often praised by critics, but rarely re-watched by ordinary folks.)

In short, I think a distinctive strength of EA is that we can set aside the highly emotive portrayals of the largest-scope cause areas that we rationally know are most important. We can keep the suffering impacts of these cause areas off to the side, in our peripheral vision, so to speak, with some degree of awareness—but without the paralyzing level of emotional response that we’d feel if they were always front and center in our imaginations.

I would caveat this point in two ways.

First, there can still be crucial roles for positive, inspirational, optimistic portrayals of success in solving various large-scope problems—e.g. how awesome life would be if we solved longevity, achieved nuclear security, promoted happier animal lives, and secured a great long-term future for our descendants. I think Nick Bostrom’s Letter from Utopia is a good example of this genre. We need more such fictions and inspiring tales of success. Engaging with our positive emotions can be great; relying on negative emotions elicited by accounts of mass suffering can be much trickier.

Second, at the outreach level of recruiting talent and money, some slightly more emotionally compelling narratives and influence methods could be useful. We just have to pitch them very carefully—eliciting an optimal degree of concern that’s somewhere in the middle between apathetic indifference and paralyzing horror. And discovering those optimal degrees of concern, as the original post mentioned, is very much a matter for empirical psychological research, rather than armchair reasoning.

• This was really well written! I appreciate the concise and to-the-point writing style, as well as the summary at the top.

Regarding the arguments, I think they make sense to me, although this is where the whole discussion of longtermism does tend to stay pretty abstract, since we can’t actually put real numbers on it.

For example, in the spirit of your example—does working on AI safety at MIRI prevent extinction, while assuming a sufficiently great future, compared to, say, working on AI capabilities at OpenAI? (That is, maybe a misaligned AI can cause a greater future?)

I don’t think it’s actually possible to do a real calculation in this case, and so we make the (reasonable) base assumption that a future with aligned AI is better than a future with a misaligned AI, and go from there.

Maybe I am overly biased against longtermism either way, but in this example it seems to me like the problem you mention isn’t really a real-world worry, but only a theoretically possible Pascal’s mugging.

Having said that, I still think it is a good argument against strong longtermism.

• 4 Dec 2022 18:48 UTC
1 point

What are some negative and positive sentiments about OpenAI within the EA community?

I know OpenPhil made them a $30m grant in 2017, but Holden no longer seems to be on the board, and a bunch of people left to create Anthropic. What’s up with that?

• Similarly, if you look at Will’s Tweetstorm decrying SBF’s actions, the majority of responses are angrily negative responses to a movement that consorted with crypto billionaires, as though that’s all EA has ever been about.

Social media systems are notorious for finding galaxy-brained ways to detect unsanctioned research and researchers, and thwart their research without their knowledge, e.g. by feeding the researchers false information that strongly appears to be accurate (this has even been known to happen with sanctioned researchers as well). I’m not saying this is the wrong way to look at things in this particular situation, but in general it is a losing strategy for research.

It seems we’ll have to deal with this being the first impression many people have formed of the movement for quite some time.

We’ll still need to wait and see, because odds are high that the scope of this phenomenon is being mismeasured by orders of magnitude, in either direction. Both scenarios are worth investment, especially extreme-seeming versions of the scenarios.

• How does GiveWell’s All Grants Fund compare to EA Funds’ Global Health and Development Fund? My understanding is that both funds look primarily at global health interventions and have a greater tolerance for risk than GiveWell’s Top Charities Fund. Are there major differences between the funds?

• You’re correct about all of this. I developed a fellowship program to help orgs specifically with upskilling and having the support they need to do well. I believe that the 3 critical ingredients to running an org successfully are: a) having the right knowledge, b) having peer support, and c) having a mentor and accountability. My personal mission is to help orgs succeed. You can find more information on my website, or shoot me a PM or email.
I’m also working on developing an organization to consolidate all the org-support resources—I’ve done this in the small business sector, and am applying the principles to EA. Would love to connect with anyone who wants to be a part of it.

• Hi!

I’m Minh, current intern at Nonlinear. Since this thread is clearly picking up again, I’d like to provide my perspective as someone who’s worked at Nonlinear, and on the Emergency Fund. First, context:

1. All this is my own experience and opinion. The fact that I had overall positive experiences does not invalidate someone else’s negative experience. I admit it’s … not a good look to have such a high rate of strong negative sentiment.

2. I’ve been working with Nonlinear for ~3 months. About 80% of my interaction is with Drew Spartz[1] and Luca (the other intern). The other 20% is with Kat, with whom I’ve had one 2-hour meeting when planning the Nonlinear Emergency Fund, plus text correspondence/coordination as Nonlinear processes applications. I have never interacted with Emerson.

What I can do is provide additional context. So far, we’ve received ~40 applications for the Nonlinear Emergency Fund. Kat has been personally following up on requests, coordinating with external funders, and handling relevant documents for dozens of grantees. To my knowledge, ~15 of the most time-sensitive requests have been approved. The responsibility is kind of insane, since improper follow-up/delay has immediate consequences on the grantee’s end. About 8 requests indicated they needed to hear back in less than a week, and the shortest was less than three days.

Motivations behind the Emergency Fund
Keep in mind that none of this is something Nonlinear actually has to do. There’s practically no direct impact on us, and the emergency funding we give out is, to my knowledge, unconditional. I did see one grant that stipulated additional non-emergency funding contingent on producing policy-actionable research, but that doesn’t benefit Nonlinear either.

During the 2-hour call when Kat, Luca and I discussed the Emergency Fund, the 2 main focuses were:

1. Prioritising immediate financial/personal emergencies of grantees

2. Prioritising counterfactual impact on EA work. This means bridge-funding projects/work that would otherwise have to be stopped as grantees cease their work. I won’t give names, but let’s just say I was very surprised to see projects I’d used before/applied to.

At no point did I hear ulterior motives. And frankly, I feel like launching an emergency fund of your own money, when you don’t have to, isn’t the kind of thought process a selfish opportunist jumps at.

Implications
Also, keep in mind that, at the time:

1. Nonlinear also lost 250k in funding from the FTX Future Fund. I have been told some of the team also had larger personal funds on FTX. If anything, FTX is paying Nonlinear if clawbacks occur.

2. Kat was entirely focused on the community. Video calling someone on a treadmill while she earnestly details plans to distribute six figures in emergency funding, right after a third of EA funding vanished overnight, is a very surreal experience.

3. Kat had no reassurance that any other funders would also step up for any of the grantees. Hence the explicit assumption was that Nonlinear’s founders would self-fund the grantees out of their own pockets. I distinctly recall her saying “If we need to fund them, that’s a good thing to do.”

Conclusion

I’ve told this story because:

1. This thread about the Nonlinear Emergency Fund has gotten wildly off tangent, so I might as well update the community with positive news. Hi!

2. I wanted to provide a contrasting perspective. The team I’ve interacted with has been talking about helping the EA community this entire time, actively coordinating, and willingly paid out of pocket for people directly. While this doesn’t invalidate or excuse alleged mistreatment, I just figured my perspective would be significantly belief-updating to people who have only read accusations of morally unscrupulous behaviour for the past month.

1. ^

Drew is great btw, but the accusations so far aren’t directed at him.

• Good to know Drew treats you well, though you’re right that the allegations so far aren’t directed at him. Given your impressions of Kat are largely based off a 2-hour call, I don’t know if I’d consider this “significantly belief-updating” compared to other claims on this thread, though I’m glad you shared your experience. Out of curiosity, did anyone ask you or Luca to make comments on this post, or was this completely unprompted (aside from seeing the recent forum comments)?
• There’s a pretty major difference here between EA and most religions/​ideologies. In EA, the thing we want to do is to have an impact on the world. Thus, sequestering oneself is not a reasonable way to pursue EA, unless done for a temporary period. An extreme Christian may be perfectly happy spending their life in a monastery, spending twelve hours a day praying to God, deepening their relationship with Him, and talking to nobody. Serving God is the point. An extreme Buddhist may be perfectly happy spending their life in a monastery, spending twelve hours a day meditating in silence, seeking nirvana, and talking to nobody. Seeking enlightenment is the point. An extreme EA would not be pursuing their beliefs effectively by being in a monastery, spending twelve hours a day thinking deeply about EA philosophy, and talking to nobody. Seeking moral truths is not the point. Maybe they could do this for a few months, but if they do nothing with their understanding, it is useless—there is no direct impact from understanding EA philosophy really well. You have to apply it in reality. Unlike Christianity and Buddhism, which emphasise connection to spiritual things, EA is about actually going out and doing things to affect the world. Sequestering an EA off from the world is thus not likely to be something that EA finds satisfying, and doesn’t seem like something they would agree to. • Does anyone have a recommendation for a resource to read on dependency injection? I’m considering whether to add the pattern to the ForumMagnum codebase. My current state is that I have a tentative grasp of the concept and roughly the pros and cons, but not really a practical grasp of it, or of how it’s played out in a typescript codebase. • I notice that I’m confused. I read your previous post. 
Nobody seemed to disagree with your prediction that a misaligned AI could do this, but everybody seemed to disagree that this was a problem such that fixing it was worth the large costs in coordination. Thus, I don’t really understand how this is supposed to be an update for those who disagreed with you. Could you elaborate on why you think this information would change people’s minds? • For those who don’t want to follow links to a previous post and read the comments, the counterargument as I understand it (and derived, independently, before reading the comments) is: For this to be a threat, we would need an AGI that was - Misaligned - Capable enough to do significant damage if it had access to our safety plans - Not capable enough to do a similar amount of damage without access to our safety plans I see the line between 2 and 3 to be very narrow. I expect almost any misaligned AI capable of doing significant damage using our plans to also be capable of doing significant damage without needing them. By contrast, the cost of not posting our plans online is a likely drastic reduction of effectiveness of the AI alignment field, both coordinating among existing members and bringing new members in. While the threat that Peter talks about is real, it seems that we are in much more danger by slowing down our alignment progress than we are by giving AI’s access to our notes. • Thank you so much for the clarification, Jay! It is extremely fair and valuable. I don’t really understand how this is supposed to be an update for those who disagreed with you. Could you elaborate on why you think this information would change people’s minds? The underlying question is: does the increase in the amount of AI safety plans resulting from coordinating on the Internet outweigh the decrease in secrecy value of the plans in EV? If the former effect is larger, then we should continue the status-quo strategy. 
If the latter effect is larger, then we should consider keeping safety plans secret (especially those whose value lies primarily in secrecy, such as safety plans relevant to monitoring). The disagreeing commenters generally argued that the former effect is larger, and therefore we should continue the status-quo strategy. This is likely because their estimate of the latter effect was quite small and perhaps far into the future. I think ChatGPT provides evidence that the latter should be a larger concern than many people’s priors suggest. Even current-scale models are capable of nontrivial analysis about how specific safety plans can be exploited, and even how specific alignment researchers’ idiosyncrasies can be exploited for deceptive misalignment.

> For this to be a threat, we would need an AGI that was 1. Misaligned 2. Capable enough to do significant damage if it had access to our safety plans 3. Not capable enough to do a similar amount of damage without access to our safety plans. I see the line between 2 and 3 as very narrow. I expect almost any misaligned AI capable of doing significant damage using our plans to also be capable of doing significant damage without needing them.

I am uncertain about whether the line between 2 and 3 will be narrow. I think the argument that the line between 2 and 3 is narrow often assumes fast takeoff, but I think there is a strong empirical case that takeoff will be slow and constrained by scaling, which suggests the gap between 2 and 3 might be wider than one might think. But I think this is a scientific question that we should continue to probe and reduce our uncertainty about!

• Great list, thanks for sharing! Animal Advocacy Careers also has a Job Board. If you don’t mind me plugging my own org, Vegan Hacktivists specializes in building software for other animal advocacy/protection orgs, along with offering tech, design, and advisory services, all for free.
Will keep an eye out on this thread; would love to learn about other organizations using technology as a catalyst for good!

• Vegan Hacktivists sounds really cool, and I would love to help out! What kind of work/impact do the projects y’all do have? Is it mainly with other EA animal orgs?

• 4 Dec 2022 13:59 UTC 0 points 0 ∶ 0

This, and increasing EA interest in funding American politics, indicates a willingness to make compromises (like decreasing impact/dollar) to benefit Americans. I think we should not make compromises that decrease impact/dollar to benefit Americans. Americans are not more valuable than people from Poland or Pakistan or Kenya.

• I agree that Americans are not more valuable than any other people, but EA orgs like GiveDirectly have made the choice to support impoverished Americans because they are based in the US and can reach them more easily.

• I am skeptical that that’s the reason for it. I haven’t seen numbers on this, but I would be shocked if the cost of delivering any intervention to Americans is less than delivery in developing countries, because operating costs for any intervention are higher in the US—hiring people to identify recipients, for example, would be an order of magnitude more expensive than doing the same in Kenya. I suspect the real reason, which is maybe worth centering in this post, is that the pool of money that people are potentially willing to donate to Americans is a) very large, and b) not fungible with money they would otherwise donate to global poverty.

• Hi, field studies are outside of the scope of Reslab, right?

• Very good article! Many EA orgs are start-ups, so it’s natural for them to start with a relatively inexperienced team. However, it’s normal for start-ups to bring in senior people at the top as they grow rather than hiring from the bottom, which is what most EA orgs seem to do. The best CEO for a 5-person organisation is rarely the same as the best CEO for a 100-person organisation.
CEOs need to get better at recognising their own weaknesses, and Boards need to get better at transitioning founders out of the CEO role. In my opinion, EA orgs also need to increase the weight of experience and decrease the weight of moral philosophy / being in certain cliques.

• Would one possible solution to some of these problems be to hire much more outside EA? Move “familiarity with EA” to the “bonus” part of the job requirements, and instead look for experienced managers as a main criterion?

• Hi Nathan, thanks for writing this! I only saw your post now, but I had made a draft for a question related to the karma system about 1.5 months ago. Feel free to have a look. Interestingly, the estimates I got for the correlation coefficient between karma and impact, based on this analysis from Nuño, are pretty similar to the 0.47 you obtained for the relationship between the inflation-adjusted karma and ranking of the posts of the Decade Review.

• I’ve been studying religions a lot, and I have the impression that monasteries don’t exist because the less fanatic members want to shut off the more fanatic members from the rest of society so they don’t cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I’m just naive.

• 4 Dec 2022 2:55 UTC 10 points 6 ∶ 1

Thanks for sharing! The speakers on the podcast might not have had the time to make detailed arguments, but I find their arguments here pretty uncompelling. For example:

• They claim that “many belief systems they have a way of segregating and limiting the impact of the most hardcore believers.” But (at least from skimming) their evidence for this seems to be just the example of monastic traditions.
• A speaker claims that “the leaders who take ideas seriously don’t necessarily have a great track record.” But they just provide a few cherry-picked (and dubious) examples, which is a pretty unreliable way of assessing a track record.

• Counting Putin a “man of ideas” because he made a speech with lots of historical references—while ignoring the many better leaders who’ve also made history-laden speeches—looks like especially egregious cherry-picking.

So I think, although their conclusions are plausible, these arguments don’t pass enough of an initial sanity check to be worth lots of our attention.

• 4 Dec 2022 2:36 UTC 3 points 1 ∶ 2

Suppose the Speculation Grantors are considering an extremely risky grant. Suppose that grant has a 10% chance of being evaluated in the next round as far more beneficial than all the rest of the Speculation Grants of that round combined, and yet the grant is net-negative (due to its potential to cause extreme accidental harm, which is not unlikely for extremely impactful anthropogenic x-risk interventions). If in your implementation of “impact futures” the “fd_value” of an impact certificate cannot be negative, then to the extent that a Speculation Grantor acts in a way that is aligned with maximizing their expected speculation grant budget, they may bet on that 10% chance and fund the risky intervention. Especially if they think that all the other Speculation Grantors will not fund it, thereby allowing their own budget to receive most of the “settlement_budget” (which seemingly exacerbates the unilateralist’s curse problem). I co-authored a post about the general risk of impact markets incentivizing risky, net-negative projects. Please consider taking a look at it if you haven’t already.

• 4 Dec 2022 1:41 UTC 15 points 5 ∶ 0

I think this is perfectly good to try, but I’m personally skeptical that it will end up being especially useful. My sense is that right now, there isn’t a shortage of frontpage content on the forum.
Rather, there often seems to be a shortage of deep reading, engagement, and discussion when someone writes a long object-level post. I would be interested to see initiatives aimed at fostering that kind of deeper engagement with content, rather than at trying to get more frontpage posts.

• I think it’s important to kindly point out that white people, or indeed people of any racial, ethnic, or gender background, are not a monolith. Diversity is more than skin deep. It concerns values, possessed expertise/skills/knowledge, epistemic systems, philosophical and ontological beliefs, spiritual and educational backgrounds, health and disability statuses, age, sexual orientation, occupations, goals, dietary preferences, socioeconomic factors, and life experiences. Arguably many of these categories, particularly ones concerning ethical or epistemic matters alongside cause prioritization, are far more important than what a person looks like or is perceived to be from the outside. EA should aspire to be accommodating to people of all ethnic and racial backgrounds. Certainly. It should seek to understand why there is a disparity in membership for certain racial classifications relative to US/UK population percentages, and seek to rectify it. But it is difficult to defend a tendency to equate people who are white, or groups whose membership happens to be majority white, with a “lack of diversity”. That simply isn’t the case when all categories that constitute a diverse community are considered, not just skin tone or gender.
Truly, if all the white persons in EA thought and acted the same, had the same diet, had the exact same life experiences, age, sexual orientations, causes, held the same or similar occupations, shared the same values, health or disability statuses, epistemic systems, philosophical and ontological beliefs, religious and spiritual backgrounds, educational backgrounds, personal knowledge and skills, and socioeconomic backgrounds, then I would say EA has a diversity problem. But that simply isn’t the case. Every EA is different and unique in all these respects. You are too: an individual person who is fully unlike any other. That’s what makes us diverse, not a blanket reduction of people to skin tone or race that misses out on all else that makes us human :)

• Thank you for posting this! I’ve been frustrated with the EA movement’s cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for “longtermism” was a Torres hit piece. I understand and agree with the view that media outreach should be done by specialists—ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren’t the only people with those qualifications! There’s no reason they need to be the public face of all of EA—they represent one faction out of at least three. EA is a general concept that’s compatible with a range of moral and empirical worldviews—we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement’s behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.
Perhaps more EA orgs—like GiveWell, ACE, and FHI—should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

• > Perhaps more EA orgs—like GiveWell, ACE, and FHI—should have their own publicity arms that operate independently of CEA

I think GiveWell (at least, and maybe ACE too) is already very independent of CEA. The fact that they also haven’t done mass media outreach is probably the result of an independent assessment that it isn’t particularly in their interests to do so. They do have significant marketing and media outreach, e.g. I’ve seen YouTube ads, and I know they’re on podcasts sometimes, so I feel safe guessing that the media exposure they haven’t had is because they haven’t pursued it, rather than because they don’t have the expertise to do so.

• > Perhaps more EA orgs—like GiveWell, ACE, and FHI—should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

More spending and effort placed into publicity arms makes sense. Less cohesion and coordination is a hard sell, though; that’s more points of failure at best, and at worst risks exploitation/playing both sides by clever outsiders, or inter-org conflict/retaliation that is triggered by accident instead of deliberately. There needs to at least be an apparatus for negotiation.

• Great points. There’s an unfortunate dynamic which has occurred around discussions of longtermism outside EA. Within EA, we have a debate about whether it’s better to donate to nearterm vs longterm charities.
A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: “Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world’s poorest!” But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people outside EA don’t give 10% of their earnings to any effective charity. Most AI work outside EA is focused on making money or producing “cool” results, not mitigating disaster or planning for the long-term benefit of humanity.

Practically all EAs agree people should give 10% of their earnings to effective developing-world charities instead of 1% to ineffective developed-world ones. And practically all EAs agree that AI development should be done with significantly more thought and care. (I think even Émile Torres may agree on that! Could someone ask?) It’s unfortunate that the internal nearterm vs longterm debate gets so much coverage, given that what we agree on is way more action-relevant to outsiders.

In any case, I mention this because it could play into your “ideologically diverse group of public figures” point somehow. Your idea seems interesting, but I also don’t like the idea of amplifying internal debates further. I would love to see public statements like “Even though I have cause prioritization disagreements with Person X, y’all should really do as they suggest!” And acquiring a norm of using the media to gain leverage in internal debates seems pretty bad.

• So, proposing that we give everyone equal voting power gives those on the forum with more voting power an incentive to lessen mine (by downvoting this). So how about this: we make the agreement karma democratic. That way we can see what people actually agree or disagree on, and since it doesn’t affect karma, we can make it democratic without angering those with disproportionate voting power.
• 3 Dec 2022 23:40 UTC 1 point 0 ∶ 15

Remember the anti-work subreddit moderator’s disastrous unsanctioned interview? Let that be a cautionary tale of how interacting with the media can go. It should be self-evident that only CEA-sanctioned individuals should be allowed to speak to the media.

• That mod was as central to r/antiwork as it was possible to be: https://tracingwoodgrains.medium.com/r-antiwork-a-tragedy-of-sanewashing-and-social-gentrification-56298af1c1a7

The anti-work interview was kind of horrible, but not because Ford misrepresented what the sub was about, or wasn’t the right person to speak on its behalf, or wasn’t sanctioned to speak to the media. Doreen Ford built the sub. There was no one better placed to do that interview, or to approve anyone else to do it. What the interview actually did was reveal what the sub was actually about to the recent flood of newcomers. They thought they were there to demand better rights for workers or something, but r/antiwork actually existed to oppose the concept of work itself.

You see these things sometimes: a group gets a lot of new entrants who haven’t really internalised the values of the group, and then there’s some kind of drama between the new blood and the old guard over the group’s core beliefs. There was a much lower-key thing on r/wallstreetbets back when GameStop stock shot up in value—a bunch of people flooded in wanting to stick it to the greedy hedge funds. This was all good for a while, while they were all posting about how GME was going to the moon, etc. But as time went on, you saw a disconnect: newbies saw Martin Shkreli, a.k.a. the Pharma Bro, as emblematic of the kind of greedy capitalist they hated, while to the old guard Shkreli was a hero (they made him a mod). The antiwork drama isn’t a lesson about the perils of media engagement—it’s an example of a community partially losing its values due to rapid growth.

• [deleted]

• I also strongly agree!
I think this is an important topic that needs to be discussed more frequently within this community. I’m curious whether most EA participants are on the same page that the lack of demographic diversity is harmful to effectiveness. The EA events I have attended have been much whiter (and more predominantly male) than the general population of my area, and many conversations have had an exclusive, elitist vibe to them. (This is obviously subjective, but to me this manifested as people immediately asking others about their credentials, and folks initiating group conversations on narrow intellectual topics that are not inclusive.)

• > First, I’ve been somewhat surprised recently to see a number of very direct headhunting attempts from people in the EA community, directed at key staff members of our organization. This is not a one-off; these are attempts to recruit multiple staff from a number of hiring organizations.

This is useful information to know! Thanks for sharing it. I just want to send good vibes for “if you see something that seems weird, lean towards transparency.” My personal intuition is that it’s good to have a culture where people are politely, and perhaps lightly, headhunted. However, there definitely could be more issues here:

1. It seems bad if some groups/orgs get highly targeted. This can create an uncomfortable situation if not handled well.
2. It’s hard to be polite. I could easily imagine recruiters being pushy or rude.

I imagine that perhaps in this case at least, some orgs used similar logic to target IDinsight (lots of great talent and training, but work that seems less directly aligned with specific EA goals), but didn’t realize how many others were doing it. I’d recommend messaging the Community Health team at CEA to get a bit of coordination, or just directly sending an email to the various orgs flagging the issue. I imagine they might well be able to find a more reasonable solution.
• I’m nervous that readers might conflate “the specific situation of IDinsight, which we still know little about, is justified” with “it’s generally good to have more headhunting”. I mostly agree with the latter, but can’t say much on the former.

• 3 Dec 2022 23:19 UTC 92 points 23 ∶ 2

I really want to be in favor of having a less centralized media policy, and do think some level of reform is in order, but I also think “don’t talk to journalists” is just actually a good and healthy community norm, in a similar way that “don’t drink too much” and “don’t smoke” are good community norms, in the sense that I think most journalists are indeed traps, and I think it’s rarely in someone’s self-interest to talk to journalists. Like, the relationship I want to have to media is not “only the sanctioned leadership can talk to media”, but more “if you talk to media, expect that you might hurt yourself, and maybe some of the people around you”.

I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.

So, overall, I am in favor of some kind of change to our media policy, but also continue to think that the honest and true advice for talking to media is “don’t, unless you are willing to put a lot of effort into this”.

• > I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.
I upvoted this post; however, I think this section is exaggerated. When this was brought up on Twitter, someone shared a survey of how accurately people who were involved in events felt journalists had characterized them. IIRC it was something like 20% substantively/entirely accurate, 60% minor errors but the broad gist was true, and 20% majorly false. I couldn’t find the study again, and I don’t know how good it was. But at least your comment seems like it may be an overestimate.

• I appeared on a radio program on behalf of EA London in 2018 and don’t regret it. I thought the coverage was fair to positive.

• It sounds like there are two main issues:

1. When is talking to journalists in the self-interest of an individual or very small org?
2. When is talking to journalists going to contribute positively to the world?

My gut instinct is that the latter will hold more often than the former, since it develops a wider public discussion, implying that sometimes altruists might want to talk to the media even if they feel it will cast them in a poor light. Over time, as a community, we can share our experiences and collectively decide which publications and individuals we trust to have a conversation with, possibly on which topics or in which broader contexts. One of CEA’s roles could then be to seek to build new such relationships to share with the community, widening our set of options rather than restricting it.

• > I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice

I would be interested in hearing more, like what those interviews were about and whether the interviewed people were mostly from the Bay Area and/or part of the rationality community. I could imagine that I wouldn’t want to strongly extrapolate from those experiences to potential media interviews for learning about EA in Germany, for instance.
• In 2014 or 2015, several of us in Seattle talked to a journalist who we were told was doing an article on young philanthropists. 3 or 4 people had long interviews with her, and she also took over an EA meeting she’d been invited to observe. When the article came out, it was about how awful young people were for caring about third-world poverty instead of the opera. I also sounded like a goddamn idiot. The journalist asked an absolutely ridiculous question, I worked to answer in a way that wasn’t “I’m sorry, you think what?”, and the quote got used. It accurately reflected my opinion (“no, opera outreach programs aren’t more important than malaria nets”), but I sounded stupid because I couldn’t think fast enough to sound smart and not-hostile.

• FWIW, I remember reading that article and thinking that the net takeaway (for the people we want to attract) is neutral or positive towards us. Like, if someone doesn’t even believe in the idea of cause prioritization, we are not the right community for them.

• Yeah, I think this was a relatively gentle introduction to misleading journalists, in that the article’s slant was so obvious, and enough people were not on its side, that it wasn’t damaging.

• Yes, I don’t think you sound stupid at all, Elizabeth. I think EA comes across reasonably well in the piece, and the kind of person who’d be interested in effective giving might Google it because of you.

• This seems like the sort of thing where it would really help to have a public database of “journalists who we’ve discovered are sucky and journalists who we’ve discovered aren’t sucky”, both as a very mild deterrent and, more importantly, so future EAs can avoid talking to this particular person.

• In terms of understanding the causal effect of talking to journalists, it seems hard to say much in the absence of an RCT.
Someone ought to flip a coin for every interview request, in order to measure (a) the causal effect of accepting an interview on the probability of article publication, and (b) the direction of any effects on article accuracy, fairness, and useful critique. (That was meant as a bit of a joke, but I would honestly be delighted to see a bunch of articles about EA which include sentences like “Person X did not offer any comment because we weren’t assigned to the interview acceptance group in their RCT”. Seems like it sends the right signal to the sort of people we want to attract.)

In any case, until that RCT gets run, maybe it would be worthwhile to compare articles informed by interviews and articles uninformed by interviews side by side, and do what we can with the data we have. It’s easy to say “I talked to the journalist and the article was inaccurate”. But claiming that the article ended up worse than it would’ve been in the absence of an interview is harder. (There are also complicating factors: an article with quotes from relevant people may seem more legitimate to readers; no interview might mean no article.)

• > I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.

Do you have thoughts about the idea of creating a thread on a site like the EA Forum or Less Wrong where someone takes questions from the media and responds in writing publicly? Three birds with one stone: written responses can be more considered, public source material discourages misrepresentation, and there is less need to respond to the same question multiple times. (This was Wei Dai’s idea for handling journalist questions about Bitcoin.)
• I think something like that is a better idea. Or, separately, for people to just write up their takes in comments and posts themselves. I’ve been reasonably happy with the outcomes of doing that during this FTX thing. I think I’ve been quoted in one or two articles, and I think those quotes have been fine.

• I agree that public communication is risky, but I think that plenty more people are qualified to do it than just CEA and the movement’s “big three” public intellectuals (MacAskill, Ord, and Singer). My comment here was partly a response to this one.

• As Byrne points out, and as some notable examples testify, some people manage to:

1. “Go to the monastery” to explore ideas as a hardcore believer.
2. After a while, “return to the world”, and successfully thread the needle between innovation, moderation, and crazy town.

This is not an easy path. Many get stuck in the monastery, failing gracefully (i.e. harmlessly wasting their lives). Some return to the world and achieve little. Others return to the world, accumulate great power, and then cause serious harm. Concern about this sort of thing, presumably, is a major motivation for the esotericism of figures like Tyler Cowen, Peter Thiel, Plato, and most of the other Straussian thinkers.

• One thing this reminds me of is a segment of Holden Karnofsky’s interview with Ezra Klein.

HOLDEN KARNOFSKY: At Open Philanthropy, we like to consider very hard-core theoretical arguments, try to pull the insight from them, and then do our compromising after that. And so, there is a case to be made that if you’re trying to do something to help people, and you’re choosing between different things you might spend money on to help people, you need to be able to give a consistent conversion ratio between any two things. So let’s say you might spend money distributing bed nets to fight malaria. You might spend money [on deworming, i.e.] getting children treated for intestinal parasites.
And you might think that the bed nets are twice as valuable as the dewormings. Or you might think they’re five times as valuable, or half as valuable, or 1/5, or 100 times as valuable, or 1/100. But there has to be some consistent number for valuing the two. And there is an argument that if you’re not doing it that way, it’s kind of a tell that you’re being a feel-good donor, that you’re making yourself feel good by doing a little bit of everything, instead of focusing your giving on others, on being other-centered, focusing on the impact of your actions on others, [where in theory it seems] that you should have these consistent ratios.

So with that backdrop in mind, we’re sitting here trying to spend money to do as much good as possible. And someone will come to us with an argument that says, hey, there are so many animals being horribly mistreated on factory farms, and you can help them so cheaply, that even if you value animals at 1 percent as valuable as humans to help, that implies you should put all your money into helping animals. On the other hand, if you value [animals] less than that, let’s say you value them a millionth as much, you should put none of your money into helping animals and just completely ignore what’s going on in factory farms, even though a small amount of your budget could be transformative. So that’s a weird state to be in.

And then, there’s an argument that goes […] if you can do things that can help all of the future generations, for example, by reducing the odds that humanity goes extinct, then you’re helping even more people. And that could be some ridiculous cosmic number, like a trillion, trillion, trillion, trillion, trillion lives or something like that. And it leaves you in this really weird conundrum, where you’re kind of choosing between being all in on one thing and all in on another thing. And Open Philanthropy just doesn’t want to be the kind of organization that does that, that lands there.
And so we divide our giving into different buckets. And each bucket will kind of take a different worldview or will act on a different ethical framework. So there is a bucket of money that is kind of deliberately acting as though it takes the farm animal point really seriously, as though it believes what a lot of animal advocates believe, which is that we’ll look back someday and say, this was a huge moral error. We should have cared much more about animals than we do. Suffering is suffering. And this whole way we treat this enormous amount of animals on factory farms is an enormously bigger deal than anyone today is acting like it is.

And then there’ll be another bucket of money that says: “Animals? That’s not what we’re doing. We’re trying to help humans.” And so you have these two buckets of money that have different philosophies and are following them down different paths. And that just stops us from being the kind of organization that is stuck with one framework, stuck with one kind of activity.

[…] If you start to try to put numbers side by side, you do get to this point where you say, yeah, if you value a chicken 1 percent as much as a human, you really are doing a lot more good by funding these corporate campaigns than even by funding the [anti-malarial] bed nets. And [bed nets are] better than most things you can do to help humans. Well, then, the question is, OK, but do I value chickens 1 percent as much as humans? 0.1 percent? 0.01 percent? How do you know that? And one answer is we don’t. We have absolutely no idea. The entire question of what we’re going to think 100,000 years from now about how we should have been treating chickens in this time, that’s just a hard thing to know. I sometimes call this the problem of applied ethics, where I’m sitting here, trying to decide how to spend money or how to spend scarce resources.
And if I follow the moral norms of my time, based on history, it looks like a really good chance that future people will look back on me as a moral monster. But one way of thinking about it is just to say, well, if we have no idea, maybe there’s a decent chance that we’ll actually decide we had this all wrong, and we should care about chickens just as much as humans. Or maybe we should care about them more because humans have more psychological defense mechanisms for dealing with pain. We may have slower internal clocks. A minute to us might feel like several minutes to a chicken. So if you have no idea where things are going, then you may want to account for that uncertainty, and you may want to hedge your bets and say, if we have a chance to help absurd numbers of chickens, maybe we will look back and say, actually, that was an incredibly important thing to be doing. EZRA KLEIN: […] So I’m vegan. Except for some lab-grown chicken meat, I’ve not eaten chicken in 10, 15 years now — quite a long time. And yet, even I sit here, when you’re saying, should we value a chicken 1 percent as much as a human, I’m like: “ooh, I don’t like that”. To your point about what our ethical frameworks of the time do and that possibly an Open Philanthropy comparative advantage is being willing to consider things that we are taught even to feel a little bit repulsive considering—how do you think about those moments? How do you think about the backlash that can come? How do you think about when maybe the mores of a time have something to tell you within them, that maybe you shouldn’t be worrying about chicken when there are this many people starving across the world? How do you think about that set of questions? HOLDEN KARNOFSKY: I think it’s a tough balancing act because on one hand, I believe there are approaches to ethics that do have a decent chance of getting you a more principled answer that’s more likely to hold up a long time from now. 
But at the same time, I agree with you that even though following the norms of your time is certainly not a safe thing to do and has led to a lot of horrible things in the past, I’m definitely nervous to do things that are too out of line with what the rest of the world is doing and thinking. And so we compromise. And that comes back to the idea of worldview diversification. So I think if Open Philanthropy were to declare, here’s the value on chickens versus humans, and therefore, all the money is going to farm animal welfare, I would not like that. That would make me uncomfortable. And we haven’t done that. And on the other hand, let’s say you can spend 10 percent of your budget and be the largest funder of farm animal welfare in the world and be completely transformative. And in that world where we look back, that potential hypothetical future world where we look back and said, gosh, we had this all wrong — we should have really cared about chickens — you were the biggest funder, are you going to leave that opportunity on the table? And that’s where worldview diversification comes in, where it says, we should take opportunities to do enormous amounts of good, according to a plausible ethical framework. And that’s not the same thing as being a fanatic and saying, I figured it all out. I’ve done the math. I know what’s up. Because that’s not something I think. […] There can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like […] it’s trying to be as weird as possible. It’s being completely hard-core, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in. 
And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should. […] I also just want to endorse the meta principle of just saying, it’s OK to have a limit. It’s OK to stop. It’s a reflective equilibrium game. So what I try to do is I try to entertain these rigorous philosophical frameworks. And sometimes it leads to me really changing my mind about something by really reflecting on, hey, if I did have to have a number on caring about animals versus caring about humans, what would it be? And just thinking about that, I’ve just kind of come around to thinking, I don’t know what the number is, but I know that the way animals are treated on factory farms is just inexcusable. And it’s just brought my attention to that. So I land on a lot of things that I end up being glad I thought about. And I think it helps widen my thinking, open my mind, make me more able to have unconventional thoughts. But it’s also OK to just draw a line […] and say, that’s too much. I’m not convinced. I’m not going there. And that’s something I do every day. • Thank you so much for taking the trouble to put this together!!! I appreciate it a lot! • My attempt at a reasonable AI/semis portfolio: MSFT 10%, Intel 10%, Nvidia 15%, SMSN 15%, GOOG 15%, ASML 15%, TSMC 20%. Interested if anyone thinks I got this hugely wrong. • Thank you for writing this. I’m not sure whether I agree or disagree, but it seems like a case well made.
While I do not mean to patronise, as many others will have found this, the one contribution I feel I have to make is an emphasis on how very differently people in the wider public may react to ideas/arguments that seem entirely reasonable to the typical EA. Close friends of mine, bright and educated people, have passionately defended the following positions to me in the past: -They would rather millions die from preventable diseases than Jeff Bezos donate his entire wealth to curing those diseases if such donation were driven by obnoxious virtue-signalling. The difference made to real people didn’t register in their judgements at all, only motivations. Charitable donation can only be good if done privately without telling anyone. -It is more important that money be spent on the people it is most costly and difficult to help than those whose problems can be cured cheaply, because otherwise the people with expensive problems will never be helped. -Charity should be something that everyone can agree on, and thus any charity dedicated to farmed animal welfare is not a valid donation opportunity. -The Future of Humanity Institute shouldn’t exist and people there don’t have real jobs. I didn’t even get to explaining what FHI is trying to do or what their research covers; from the name alone they concluded that discussion of how humanity’s future might go should be considered an intellectual interest for some people, but not a career. They would not be swayed. Primarily, I think the “so what?” of this is that trying to communicate EA ideas, nuanced or not, to the wider public is almost certainly going to be met with backlash. The first two anecdotes I list imply that even “It is better to help more people than fewer people.” is contentious.
Sadly, I don’t think most of what this community supports fits into the “selfless person deserving praise” category many people have, and calling ourselves Effective Altruists sounds like we’ve ascribed ourselves virtues without justification that a person on the street would acknowledge. Accepting that some people will react negatively and that this is beyond our control, my humble recommendation would be that any more direct attempt to communicate ideas to the public get substantial feedback beforehand from people in walks of life very different to the EA norm. People are really surprising. • I’m not surprised people with those sorts of views exist, but to some degree I’d expect them to diminish with familiarity. It’s easier to sneer at things when you don’t have multiple friends openly doing or supporting them. There’s also a question of how much harm comes from such people hearing more about EA, even assuming they don’t change or just reinforce their views. It seems unlikely they’d have been won over by a slower, more guarded approach, so the question would probably be something like ‘are those people likely to become more proactively anti-EA such that they turn people away from making effective donations on net, despite the greater discussion around the idea of doing so?’ That certainly seems possible, but nonetheless I would bet at pretty good odds against it. Writing this, it occurs to me that one effect of a more open media policy might be to blur the lines further between EA as ‘a social movement’ and EA as ‘a certain way of donating money, that loads of people do without getting engaged’. That again is plausibly bad, but I would bet on it being net good. • The first point here seems very likely true. As for the second, I suspect you’re mostly right but there’s a little more to it.
The first of the people I quote in my comment was eventually persuaded to respect my views on altruism, after discussing the philosophy surrounding it almost every night for about three months. I don’t think shorter timespans could have been successful in this regard. He has not joined the EA community in any way, but kind of gets what it’s about and thinks it’s basically a good thing. If his first contact with the community had been hearing someone express that they donate 10% of their income or try to do as much good as possible, his response in NATO phonetics could be abbreviated to Foxtrot-Oscar. In the slow, personal, deliberate induction format, my friend ended up with a respectful stance. Through any less personal or nuanced medium, I’m confident he would have thought of the community only with resentment. Of course, there’s no counterfactual in which he donates or does EA-aligned work, so nothing has been lost. The harm I see from this is a general souring of how Joe and Jane Public respond to someone identifying as an EA. Thus far, most people’s experience will be that their friends and family hadn’t heard of it, don’t have a strong opinion and, if they’re not interested, live and let live. I caveat the next sentence with this being a system 1 intuition, but I fear that there are only so many of the general public who can hear about EA and react negatively before admitting to being in the community becomes an outright uncool thing that many would be reluctant to voice. Putting the number-crunching for how that would affect total impact aside, it would be a horrible experience for all of us. I don’t think you need a population that’s proactively anti-EA for this to happen; a mere passive dislike is likely sufficient. • 3 Dec 2022 21:01 UTC 5 points Thanks for this, Vasco, Hanzhang, Melissa! A couple of thoughts: 1.
It seems to me that on reasonable quantifications of the ways in which direct tree planting efforts in the UK do not optimize for climate impact (utter non-neglectedness, lack of advocacy, trajectory change, low policy additionality, the factors you mention in the appendix) one would have a prior where tree planting is several orders of magnitude less cost-effective than strategies that seek to optimize for impact. As such, I find a posterior of a 1-order-of-magnitude difference within reach quite surprising. 2. While I am generally quite pessimistic on forestry interventions because of the utter non-neglectedness and the difficulty of getting to credible additionality and permanence, it seems direct tree planting in the UK is kind of close to the worst thing one can do from a climate angle. So for donors that cannot be moved from tree planting it could be interesting to see what the best charities in this space might look like, e.g. advocacy to improve REDD+ or peatland protection etc. 3. Re Google Trends for neglectedness: A great data source for climate philanthropy are the ClimateWorks reports—those show clearly not only that forests are very well funded compared to other areas, but also that funding is growing strongly (it has been an early focus of the Bezos Earth Fund, the largest climate philanthropist). In addition, IIRC, conservation philanthropy more broadly, of which a significant share focuses on forests, is several X larger than climate philanthropy. • Hi Johannes, Thanks for sharing your thoughts! I find a posterior of a 1-order-of-magnitude difference within reach quite surprising. 1 t/£ is estimated to be about 1 (= log10(1/0.0722)) order of magnitude (OOM) higher than the cost-effectiveness of tree planting in the UK. However, the difference to the projects funded by CCF may be quite a bit larger.
Fitting a lognormal distribution with 2.5th and 97.5th percentiles equal to the lower and upper bound of the 95% confidence interval you guessed (with the disclaimer that it should not be taken as a resilient estimate) leads to a mean of 2.34 kt/£. This is about 5 (= log10(2.34 k / 0.0722)) orders of magnitude higher than the cost-effectiveness of tree planting in the UK.
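The lognormal fit described above can be sketched in a few lines of Python. The interval bounds used below (10 to 100,000 t/£) are hypothetical placeholders for illustration, not the values from the original comment; the fit sets the 2.5th and 97.5th percentiles of the lognormal equal to the interval's endpoints.

```python
import math

def lognormal_mean_from_ci(lower, upper, z=1.959964):
    """Mean of a lognormal whose 2.5th/97.5th percentiles equal lower/upper."""
    mu = (math.log(lower) + math.log(upper)) / 2           # log-space midpoint
    sigma = (math.log(upper) - math.log(lower)) / (2 * z)  # log-space spread
    return math.exp(mu + sigma ** 2 / 2)                   # lognormal mean formula

# Hypothetical 95% interval for cost-effectiveness, in tonnes per pound:
mean = lognormal_mean_from_ci(10, 100_000)
# The mean lands far above the geometric mean of the bounds (1,000),
# because the heavy right tail of the lognormal pulls it upward.
print(f"{mean:.0f} t per pound")
```

This illustrates how sensitive the mean is to the interval width: widening the upper bound by one order of magnitude multiplies the mean by considerably more than 10, which is one reason such a guessed interval should not be treated as a resilient estimate.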

• I’ve been wondering about cost-effectiveness in this space for a long time, so thanks for writing this and especially for releasing the quantitative model! At the top, it looks like you are saying that $100 million per year for 10 years could reduce x-risk by about one percentage point, meaning about 100 basis points per billion dollars (one basis point is 0.01%). Is that correct? In the model, column AF in tab X-risk, you say that the effort would be over a century, so does that mean spending $10 billion total? Elsewhere you say you consider spending $100 million per year just on one institution, so are you really talking about spending $100 billion this century on the top 10 institutions? Then this would be about 1 basis point per $billion. This is in the range of cost-effectiveness values collected here. So if I’m understanding you correctly, $10 billion spent over the century would reduce the existential risk from the Chinese Communist Party Politburo by 10%, and 15% of that would have happened anyway, so you are reducing overall existential risk by 0.085%, which would be 0.85 basis points per $billion? • Hi David, thanks for your interest in our work! I need to preface this by emphasizing that the primary purpose of the quantitative model was to help us assess the relative importance of, and promise of engaging with, different institutions implicated in various existential risk scenarios. There was less attention given to the challenge of nailing the right absolute numbers, and so those should be taken with a super-extra-giant grain of salt. With that said, the right way to understand the numbers in the model is that the estimates were about the impact over 100 years from a single one-time $100M commitment (perhaps distributed over multiple years) focusing on a single institution. The comment in the summary about $100 million/year was assuming that the funder(s) would focus on multiple institutions.
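David's back-of-the-envelope chain above can be written out explicitly. The 1% share of total existential risk attributed to the single institution is back-solved from his figures for illustration, not a number taken from the model:

```python
# Share of total existential risk attributed to the one institution
# (hypothetical: back-solved so the chain reproduces the 0.085% figure)
risk_share = 0.01
relative_reduction = 0.10   # the program trims that institution's risk by 10%
counterfactual = 0.15       # 15% of the reduction would have happened anyway
spend_billions = 10         # $10B spent over the century

net_reduction = risk_share * relative_reduction * (1 - counterfactual)
print(f"net x-risk reduction: {net_reduction:.5f}")   # 0.00085, i.e. 0.085%

# One basis point is 0.01%, so multiply by 10,000 to convert to basis points
bp_per_billion = net_reduction * 10_000 / spend_billions
print(f"{bp_per_billion:.2f} bp per $billion")        # 0.85
```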
Thus, the 100 basis points per billion figure is the “correct” one, provided our per-institution estimates are in the right order of magnitude. We’re about to get started on our second iteration of this work and will have more capacity to devote to the cost-effectiveness estimates this time around, so hopefully that will result in less speculative outputs. • Thanks for the clarification. I would say this is quite optimistic, but I look forward to your future cost-effectiveness work. • Thank you for writing this. I have thought about writing a critical post making broadly similar arguments, but with a greater focus on how the FTX disaster played out in November. I don’t plan to do this right now. At least some of the people who are working on this have a reasonable read on my views, and there are other things I want to focus on for now. Again—thanks for writing this. I will follow the discussion with interest—and so will many journalists! • 3 Dec 2022 18:28 UTC 7 points As someone who is not a hedonistic utilitarian, most of the arguments in this post strike me as incredibly weak. For example, it can certainly be argued, and I personally believe, that negative experiences are not bad in such a way that a world without them would be superior. Grief is unpleasant, but I would not prefer a world without grief. I realise that this is not itself an argument, but the possibility of dissent does undermine the idea that the elimination of suffering follows so obviously from its existence that it can cross the is-ought gap. The post is filled with the same sort of logical leaps, where the author’s beliefs “must” be true with no argument as to why. Most academic philosophers are not consequentialists. If you “find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism” then you probably don’t have a very strong understanding of normative ethics.
I may be missing the argument in the post, and would welcome a clear restatement of the premises, but as far as I can tell there is no serious attempt to address criticisms or alternatives to hedonistic utilitarianism other than “if you thought about it hard enough, you’d agree with me”. edit: I hadn’t read it before making this comment, but this other post from today seems to provide a much better answer to the central premise of this post than I would be able to provide. https://​​forum.effectivealtruism.org/​​posts/​​7dGZnj7bwpM2kSJqm/​​against-meta-ethical-hedonism • As someone who leans towards hedonistic utilitarianism, I would agree with this impression. It seemed like the post asserted that utilitarianism must be true and that alternative intuitions could be dismissed without any good corresponding argument. I would also add that there are many different flavors of utilitarianism, and it’s unclear which, if any, is the correct theory to hold. This podcast has a good breakdown of the possibilities. https://​​clearerthinkingpodcast.com/​​episode/​​042 • If you “find it hard to imagine an ultimate ethical theory that isn’t based on some form of utilitarianism” then you probably don’t have a very strong understanding of normative ethics. FWIW I’d like to take this opportunity to advertise my list of recommended readings about non-utilitarian normative ethics, which some utilitarians may find educational. Maybe someone can write a similar list for metaethics. • 3 Dec 2022 18:16 UTC 45 points 8 ∶ 1 No inside knowledge, but as the article states they received a rather large grant from FTXFF, plus other funding from FTX and/​or Alameda going back several years. Much, up to potentially all, of that could have to be repaid depending on the facts and outcome of litigation. I think it likely the clawback risk is significant enough that the risk constitutes a serious incident under the UK charity rules. None of that relies on new information. 
In other words, I don’t think the fact of the report itself provides any useful information. However, it is worth noting that RP and OP have released statements characterizing their FTX clawback exposure (or lack thereof for OP). Unless CEA can provide some clarity, I’d assume the worst-case scenario for CEA would be very, very bad. One would need to see updated financials and a history of all FTX/​Alameda contributions to assess whether insolvency could result from the worst-case scenario. Do they have enough unrestricted assets to cover the worst-case outcome? Personally: I generally would not donate to an organization with potentially catastrophic FTX risk unless the organization had convinced me the risk didn’t exist (or was sufficiently small to disregard), or that my donations were legally restricted in a way that protected them from creditors in the event of an insolvency. • Personally: I generally would not donate to an organization with potentially catastrophic FTX risk unless the organization had convinced me the risk didn’t exist (or was sufficiently small to disregard), or that my donations were legally restricted in a way that protected them from creditors in the event of an insolvency. Given that CEA writ large includes Giving What We Can, EA Funds and the Donor Lottery, this would be a pretty big deal in terms of people’s giving this month. If I need to avoid donating through CEA, my life as a donor gets a lot harder. Given this, and given how many members of the community will be making big charitable donations around this time of year, some clarification from CEA on this front seems pretty valuable. • Right—and I totally get why organizations are hesitant to share certain information right now. 
If CEA is in a precarious situation and/or cannot disclose enough information to reassure donors, it may at least be in a position to demonstrate that EA Funds and the Donor Lottery are either already safe from the claims of CEA’s general creditors or will be made safe, at least on a going-forward basis, by Dec 31. This is because, in many jurisdictions, certain donations can be restricted by the donor in ways neither the charity nor general creditors can breach. But it’s possible for the charity to intend to provide this protection yet screw it up . . . and this has happened in at least one non-EA case I can think of. (all this is summarized from my cell phone while out of town... I have some broader thoughts about risk containment in draft form on my home computer) • I agree—I personally really appreciate humor on the EA Forum. Also, EA is generally not cold and calculating, it is “warm and calculating.” • Thanks for sharing this post, I absolutely agree. Hopefully critics of EA can come to see the genuinely warm-hearted motivations that I think most EAs have. • As I write this, the commenting guidelines say “Aim to explain, not persuade” and “Approach disagreements with curiosity”. It doesn’t feel like the media policy embodies those values! Whenever I’ve seen media/outsiders criticize EA, EAs react defensively—which is a very normal human reaction, but hardly the kind of thing that should be coded into CEA policy. My two cents is that if anyone is contacted by the media to discuss EA, they have no obligation whatsoever to follow CEA’s media policy. This isn’t a political party. • if anyone is contacted by the media to discuss EA, they have no obligation whatsoever to follow CEA’s media policy. This isn’t a political party. This specific statement is an extreme mischaracterization.
Political parties have strict media policies because it is instrumentally convergent to avoid letting random strangers control your fate, and they have sufficient experience with the media for most members to know several reasons why this is the case. It’s effectively the same class of fallacy as saying “EA is not an autocracy, therefore there should be no leadership at all”. EA requires a less strict media policy than political parties, not a total absence of a media policy. Whenever I’ve seen media/outsiders criticize EA, EAs react defensively—which is a very normal human reaction. I want to clarify that this is true and helpful to introduce to the conversation. • The media is an extremely different discursive environment than the EA Forum and should have different guidelines. I don’t want to assume that the public sphere cannot become earnestly truth-seeking, but right now it isn’t at all, and bad things happen if you treat it like it is. • Left unspoken? EA needs more headhunting aimed at senior non-EAs. • I’d counter that the focus on race and gender is very US-centric rather than culturally universal. I volunteer at a local charity; gender proportions are heavily skewed, with women being the bigger group. I neither find it a problem nor think any diversity measures should be introduced. It also seems fairly intuitive to me that it is the people who are the most privileged who can focus on such problems as AGI safety and existential risk, rather than those who struggle financially to live on a week-to-week basis. • I think a lot of the disagreements in the comments come down to different conceptions of headhunting. Dan, you refer to targeted/specific/direct outreach to particular individuals, but that doesn’t seem the crucial difference; it’s in the intent, tone and incentives. “Hey X person, you’re doing a great job at your current job.
You might be totally happy at your current job, but I thought I’d flag this cool new opportunity that seems really impactful—happy to discuss why it might be a good fit” seems fine. Giving a hard sell, strongly denigrating the current employer or being strongly incentivised for switches (eg paid a commission) seems way less fine. • 3 Dec 2022 16:32 UTC 3 points 0 ∶ 0 Congrats on putting this up! • I think the problem of “EA leadership doesn’t consider it important to hire experienced people even for people who are leading a project (who in turn don’t consider it important to hire experienced people)” is a root cause problem for a lot of somewhat negative things going on in EA (which is nobody’s fault, but could be useful to improve, I think). • I regret that I have but one strong upvote to give this comment. I think this is a huge problem among EA orgs, to the extent that my impression is that moderate incompetence has basically been the norm at many of them. It obviously seems like a good idea to try and improve the resources we’ve got, but I desperately wish EA orgs would spend more effort trying to attract real experience. <Management talent + management experience> is always going to outperform <management talent>. • Why do you think people think it’s unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)? • Examples: • A funded, reputable, [important in my opinion] EA org that I helped a bit with hiring an engineer for a [in my opinion] key role had, on their first draft, something like “we’d be happy to hire a top graduate from a coding bootcamp” • I spoke to 2-3 senior product managers looking for their way into EA, while at the same time..: • (almost?) 
no EA org is hiring product people • In my opinion, many EA orgs could use serious help from senior product people (Please don’t write here if you can guess what orgs I’m talking about, I left them anonymous on purpose) From these examples I infer the orgs are not even trying. It’s not that they’re trying and failing due to, for example, an age skew in the community. I also have theories for why this would be the case, but most of my opinion comes from my observations. • I have somewhat of a problem writing such examples publicly since I’m afraid to imply that specific people are not good enough at their job, which I really don’t want to do. (And so this problem remains hidden from most of the community, which I think is a shame) Maybe you (Ben, the author) could figure out, for the people/positions where you think it would be better to have someone with a lot of experience, how the hiring process looked. Did they try [reasonably, in your opinion] reaching out to very senior people? • I feel like a lot of this is downstream from people being reluctant to hire experienced people who aren’t already associated with EA. Particularly for things like operations roles, experience doing similar roles is going to make far more of a difference to effectiveness than deep belief in EA values. When Coke needs to hire new people, they don’t look for people with a deep love of sugary drink brands; they find people in similar roles elsewhere and offer them money. I feel like the reason EA orgs are reluctant to do this is that there’s a degree of exceptionalism in EA.
• I agree that it’s downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don’t really need it, doing things that are short-term good but long-term bad (with the assumption that they’ll have moved on before the bad stuff kicks in), etc. (cf. the book Moral Mazes.) Hiring mission-aligned people is one of the best ways to provide a check on that type of behavior. *I think some orgs maybe should be more open to hiring people who are aligned with the org’s particular mission but not part of the EA community—e.g. that’s Wave’s main hiring demographic—but for orgs with more “hardcore EA” missions, it’s not clear how much that expands their applicant pool. • It’s pretty common in values-driven organisations to ask for an amount of value-alignment. The other day I helped out a friend with a resume for an organisation which asked for people applying to care about their feminist mission. In my opinion this is a reasonable thing to ask for and expect. Sharing (overarching) values improves decision-making, and requiring it can help prevent value drift in an org. • What qualifies as ‘a (sufficient) amount of value alignment’? I worked with many people who agreed with the premise of moving money to the worst off, and found the actual practices of many self-identifying EAs hard to fathom. Also, ‘it’s pretty common’ strikes me as an insufficient argument—many practices are common and bad. More data seems needed. • Wooo, super happy you managed to organise this! • Very interesting thread.
I’m a non-EA experienced manager with a successful team-building company and was looking into how to help EA orgs with team-building, but it turns out I might be more useful as a manager coach? I started managing teams 15 years ago and eventually left the corporate world to be a tour guide. Covid forced me back into the manager role, and I founded my current startup, woyago, which is almost on autopilot. My LinkedIn profile: https://www.linkedin.com/in/antomontani/details/experience/ I have free time and would be happy to offer advice to those of you looking for help on management. Example areas I might have useful input on (copying heavily from your post, Ben!): My Calendly can be found in my bio. Happy to (finally!) find a way to add impact to my life by helping you. • Here’s a related thought that I’m curious for people’s views on: if org X has a reputation for being good at interviewing and hiring candidates, and org Y is hiring for a similar role, and org Y says to candidates “if you have an offer from X then we’ll hire you with no further process”, or similarly “if you work or have worked at X, we know you’re good and don’t need to assess you ourselves”. This can feel like org Y is misappropriating the products of org X’s work and expertise finding and assessing good people. Is this unethical? My inclination is to say something similar to the replies to this headhunting post: it sucks to have this happen to you, but trying to prevent it in a heavy-handed way would be worse, so it seems better to just be aware of the phenomenon and be mindful of how you are benefiting from the work of others. (And again, the dynamic is different in for-profit organizations in competition with each other vs. non-profits with at least some amount of goal alignment.) • I think this is a good conversation to have.
I broadly agree with the majority voice of the comments: though it can be difficult and unfair to have your employees headhunted away from you after you invested in their development and planned around them being here, ultimately it seems better to allow it to happen because of the benefits to the employee and their new employer. At the same time, I do want to acknowledge that there is a version of this behaviour that is a problem. To the extent that any headhunter is: • disrespectful of people’s time by trying to involve them in processes that aren’t suitable for or interesting to them, • persistent in the face of polite refusal from the employee in question, • in any aspect intentionally misleading or dishonest, then I’m sure we’d all agree they were doing something wrong. It’s harder to prevent this kind of behaviour, because it’s often subjective when the line has been crossed, but I’d support a general understanding that if a headhunter does this kind of thing, then we hold both them and the organization they are hiring for responsible, perhaps privately at first and then publicly if the behaviour persists. Anyone using recruiters or headhunters should feel under an obligation to ensure their agents are acting in ways consistent with their own values. • Does anyone have thoughts on 1. How does the FTX situation affect the EV of running such a survey? My first intuition is that running one while the situation’s so fresh is worse than waiting 3-6 months, but I can’t properly articulate why. 2. What, if any, are some questions that should be added, changed, or removed given the FTX situation? • Thanks for the article, it definitely seems like an important problem. This should get even worse because of the upcoming energy crisis: 1. Many energy sources need a lot of water to keep working 1. Nuclear plants and coal plants need water to cool down the pipes 2.
Copper and lithium extraction are very water-intensive, and the Chilean government has already limited some of the use of water for mining 3. Shale oil uses millions of liters of water 4. Biofuels are extremely water-intensive as well 2. With less energy, the water depletion becomes worse. Water gets harder to pump, and desalination becomes even less of an option. Just an additional note: I think the post would be better with a bit of formatting: keeping the bolding of the document, putting everything in justified text, and keeping this quote: Our blue planet holds plenty of water, but only 2.5% of it is fresh. The amount of fresh water has fallen 35% since 1970, as ground aquifers have been drawn down and wetlands have deteriorated. Meanwhile, demand for water-intensive agriculture and energy is soaring. Overall water demand is on pace to overshoot supply by 40% by 2030. - Stuart Goldenberg for Barron’s, May 3, 2014 • Does anyone here know why the Center for Human-Compatible AI hasn’t published any research this year even though they have been one of the most prolific AGI safety organizations in previous years? https://humancompatible.ai/research • Are there any reasons why groundwater depletion isn’t already higher on the list of EA priorities? It seems really big in scale, but I have no idea how tractable/neglected it is. • Groundwater depletion is an important topic, and I’m glad you’re bringing attention to it. • Thank you so much for sharing, Ben! I’m glad to hear the calls have been fun. What you described fits my observations so far. I also think that management coaching is probably one of the key “interventions” here. As with any skill, a great deal of learning how to be a (good) manager is learning a set of new behaviors. Having someone to reflect with on the development of those behaviors and the respective decision-making can be extremely helpful. • tl;dr: In the context of interpersonal harm: 1.
I think we should be more willing than we currently are to ban or softban people. 2. I think we should not assume that CEA’s Community Health team “has everything covered” 3. I think more people should feel empowered to tell CEA CH about their concerns, even (especially?) if other people appear to not pay attention or do not think it’s a major concern. 4. I think the community is responsible for helping the CEA CH team have a stronger mandate to deal with interpersonal harm, including some degree of acceptance of mistakes of overzealous moderation. (all views my own) I want to publicly register what I’ve said privately for a while: For people (usually but not always men) who we have considerable suspicion have been responsible for significant direct harm within the community, we should be significantly more willing than we currently are to take on more actions and the associated tradeoffs of limiting their ability to cause more harm in the community. Some of these actions may look pretty informal/unofficial (gossip, explicitly warning newcomers against specific people, keeping an unofficial eye out for some people during parties, etc.). However, in the context of a highly distributed community with many people (including newcomers) that’s also embedded within a professional network, we should be willing to take more explicit and formal actions as well. This means I broadly think we should increase our willingness to a) ban potentially harmful people from events, b) reduce grants we make to people in ways that increase harmful people’s power, c) warn organizational leaders about hiring people into positions of power/contact with potentially vulnerable people. I expect taking this seriously to involve taking on nontrivial costs. However, I think this is probably worth it. I’m not sure why my opinion here is different from others’[1], however I will try to share some generators of my opinion, in case it’s helpful: A.
We should aim to be a community that’s empowered to do the most good. This likely entails appropriately navigating the tradeoff of attempting to reduce both the harms of a) contributors feeling or being unwelcome due to sexual harassment or other harms and b) contributors feeling or being unwelcome due to false accusations or an overly zealous response. B. I think some of this is fundamentally a sensitivity vs. specificity tradeoff. If we have a detection system that’s too tuned to reduce the risk of false positives (wrong accusations being acted on), we will overlook too many false negatives (people being too slow to be banned/censured, or not at all), and vice versa. Consider the first section of “Difficult Tradeoffs”. In the world we live in, I’ve yet to hear of a single incident where, in full context, I strongly suspect CEA CH (or for that matter, other prominent EA organizations) was overzealous in recommending bans due to interpersonal harm. If our institutions are designed only to reduce first-order harm (both from direct interpersonal harm and from accusations), I’d expect to see people err in both directions. Given the (apparent) lack of false positives, I broadly expect we accept too high a rate of false negatives. More precisely, I do not think CEA CH’s current work on interpersonal harm will lead to a conclusion like “We’ve evaluated all the evidence available for the accusations against X. We currently think there’s only a ~45% chance that X has actually committed such harms, but given the magnitude of the potential harm, and our inability to get further clarity with more investigation, we’ve pre-emptively decided to ban X from all EA Globals pending further evidence.” Instead, I get the impression that substantially more certainty is deemed necessary to take action. This differentially advantages conservatism, and increases the probability and allowance of predatory behavior. C.
I expect an environment with more enforcement is more pleasant than an environment with less enforcement. I expect an environment where there’s a default expectation of enforcement for interpersonal harm to be more pleasant for both men and women. Most directly, this reduces the first-order harm itself; secondarily, an environment where people are less “on edge” about potential violence is generally more pleasant. As a man, I at least will find it more pleasant to interact with women in a professional context if I’m not worried that they’re worried I’ll harm them. I expect this to be true for most men, and the loud worries online about men being worried about false accusations to be heavily exaggerated and selection-skewed[2]. Additionally, I note that I expect someone who exhibits traits like reduced empathy, willingness to push past boundaries, sociopathy, etc., to also exhibit similar traits in other domains. So someone who is harmful in (e.g.) sexual matters is likely to also be harmful in friendly and professional matters. For example, in the more prominent cases I’m aware of where people accused of sexual assault were eventually banned, they also appeared to have done other harmful things, like maintaining a systematic history of deliberate deception, being very nasty to men, cheating on rent, and harassing people online. So I expect more bans to broadly be better for our community. D. I expect people who have been involved in EA for longer to be systematically biased in both which harms we see and which things are the relevant warning signals. The negative framing here is “normalization of deviance”. The more neutral framing is that people (including women) who have been around EA for longer a) may be systematically less likely to be targeted (as they have more institutional power and cachet) and b) are selection-biased to be less likely to be harmed within our community (since the people who have received the most harm are more likely to have bounced off). E.
I broadly trust the judgement of CEA CH in general, and Julia Wise in particular. I think their judgement is broadly reasonable, and they act well within the constraints that they’ve been given. If I did not trust them (e.g. if I were worried that they’d pursue political vendettas in the guise of harm-reduction), I’d be significantly more worried about giving them more leeway to make mistakes with banning people.[3] F. Nonetheless, the CEA CH team is just one group of individuals, and does a lot of work that’s not just on interpersonal harm. We should expect them to a) have only a limited amount of information to act on, and b) need the rest of EA to pick up some of the slack where they’ve left off. For a), I think an appropriate action is for people to be significantly more willing to report issues to them, as well as to make sure new members know about the existence of the CEA CH team and Julia Wise’s work within it. For b), my understanding is that CEA CH sees itself as having what I’d call a “limited purview”: e.g. they only have the authority to ban people from official CEA and maybe CEA-sponsored events, and not e.g. events hosted by local groups. So I think EA community-builders in a group-organizing capacity should probably make it one of their priorities to be aware of the potential broken stairs in their community, and be willing to take decisive actions to reduce interpersonal harms. Remember: EA is not a legal system. Our objective is to do the most good, not to wait to be absolutely certain of harm before taking steps to further limit it. One thing my post does not cover is opportunity cost. I mostly framed things as changing the decision boundary. However, in practice I can see how having more bans is more costly in time and maybe money than the status quo. I don’t have good calculations here, however my intuition is strongly in the direction that having a safer and more cohesive community is worth the relevant opportunity costs. 1.
^ fwiw my guess is that the average person in EA leadership wishes the CEA CH team did more (i.e., finds it currently insufficiently punitive), rather than wishing that they did less (i.e., finds it currently overzealous). I expect there’s significant variance in this opinion, however. 2. ^ This is a potential crux. 3. ^ I can imagine this being a crux for people who oppose greater action. If so, I’d like to a) see this argument explicitly presented and debated, and b) see people propose alternatives for reducing interpersonal harm that route around CEA CH. • Thanks for this shortform! Lots of great points raised + broadly agree. I (and others I have spoken to about this) also feel similarly RE: 1, especially where the uncertainty is around the tradeoff between the person’s harm and potential contribution/impact, instead of uncertainty around the veracity of the allegations. I especially think it would be easy to underestimate the less-tangible negative impacts of people choosing not to get involved or opting to leave the EA community because they feel unsafe. RE: 2) is there anything stopping the CH team from expanding, if you think capacity might be an issue here? It just seems like an important enough thing to get right. I think informal actions can be helpful, but if you’re at the stage where you’re explicitly warning every newcomer about someone, that basically seems bad enough to warrant some kind of more formal action IMO—I hope this isn’t very common, and to the extent that it is, this would be another indication that a lower bar for taking action would be appropriate. I think 3) would be ideal, but I think the onus is less on the individuals and more on external/more upstream factors—it may be pretty disempowering and retraumatising to go to the effort of doing this if you think nothing’s going to happen anyway. RE: 4) what does this look like concretely?
RE: limited purview: While it might be hard to formally extend CH’s reach to, say, local groups, I could see a scenario where they could have input into whether or not local groups receive funding, if there’s a history of complaints etc. I wonder whether this is something that’s within their scope/capacity at the moment? RE: the crux in footnote 2, I personally think this is pretty weak—I think base rates push strongly in favour of women needing to be much more worried about sexual violence than men being worried about false accusations, and my guess is we’re pretty far off the scenario where the harm from false accusations is comparable to the harm from sexual violence etc. (commenting in personal capacity etc.) • Nice article, but not a divine topic for me. I do think that friendship is something to care for in the next generations, and we really should have a circle of trust with friends, family and a romantic partner. Also, I would promote a new app for finding friends only, not like Tinder or sex/romance stuff. • Thanks so much for doing this! Nitpick: the “advice I’ve given people so far” link is broken. • Hi everyone. I’m Evan Harper, a community manager at Metaculus. It occurred to me that I spent way too much time writing a question about Top Gun: Maverick today that will probably just drop off the front page quickly, and I wanted to say hi to the EA Forum at some point anyway, so I’m taking the excuse to shamelessly promote. I can also more proudly promote our re-opened forecast on the Raphael Warnock/Herschel Walker Senate election in Georgia. Both of these are intended to help bring in new forecasters as part of the ongoing Beginner Tournament. And just in general, we’ve opened something like 40 new public questions in the last 20 days, so there’s no shortage of interesting topics in addition to the front page, which many of you will already have seen.
For example, check out this amazing one on zero-carbon aluminium smelting that some mysterious stranger going by “@not_an_oracle” just dropped on me almost exactly as-is, perfectly written. My hero. And hey, once you’ve forecasted on something, you’ll be a Metaculus forecaster, and I have to listen to questions, concerns, ideas, and suggestions from Metaculus forecasters as part of my job. Say hi! • I couldn’t have said this better myself! Coaching provides huge value towards career and impact growth, and I would love to see more EAs investing in themselves. • For what it’s worth, I also share the intuitive aversion. Reading Habryka’s comment, I’m not sure that the aversion would stand up to reflection. But I could imagine it doing so after I thought more about it, e.g., if poaching employees would lead to unequal or lopsided mentoring or hiring costs, or if headhunters were paid per person rather than paid less for people from organizations which are already doing valuable work. • Yeah, I think the strongest argument against headhunting is training/cultural-onboarding costs. I do think there is a thing where hiring someone right out of college is often net-negative, but if you train them, they become net-positive after a year or two. I think it would suck to invest so much in training someone, just for them to walk away to an organization that offered a better experience because they had to spend fewer resources training others. I do think it makes sense to have norms here. At Lightcone we have a norm that if you accept an offer after a 3-month trial period, you really try to make things work out for 2 years, though if you find something that seems genuinely more impactful you should do it (and the organization would encourage you to go and do it).
• Just to add, the fact that it sucks to invest in people and have them leave could lead, long-term, to organizations being less keen to invest in people in the first place, which would ultimately be bad for both employers and employees. That said, within the EA community, training someone and then watching them leave is less of a dead loss than it would be at a for-profit firm, because there’s a pretty good chance that they’re going to go do something that you’re also in favour of, even if it’s not the thing you chose to work on yourself. I’ve actually heard the funding pitch before of “you should fund us because we hire people previously unknown to the EA community and many of them go on to be hired by OpenPhil etc. and cite their experience with us as helpful for that”. I agree with you that the right way to deal with this is via flexible informal norms. I don’t so much recommend more rigid/coercive/formal tools, but probably among the least bad of them I’ve seen is “here is a starting bonus, but if you leave before your first year or so, you have to pay it back”, or guaranteed pay rises after certain periods of time, etc. • Joeri Kooimans is/was teaching philosophy in prison, so maybe you could talk to him? • I can’t believe I didn’t read this until just now. You are attacking unstated assumptions of the philanthropy community writ large, which includes EA. One is that better psychology is an area for philanthropy- and altruism-minded people to care about. Most people in our society put the needs of the body far higher than psychological/”spiritual” needs (and neglect taking care of the psychological distress of others as a work of charity). I think this argument would actually have to be won in order for the psychedelics argument to work as a promising new subset of that line, which I can buy that it may be.
The other is the metaphysical assumption that the mind matters as an issue separate from the needs of the body, and that there are big gains to be had from better psychological tools for making the mind better, or at least for suffering less. Once again, however, I suspect that most people have a hard time believing that better psychological states for people like them would have visible real-world effects. They don’t believe in a tight coupling of greater psychological health and real-world improvement for people who are already doing fairly well on both fronts. • Open Phil is accepting applications from impacted FTX grantees (see our post here), and we’ve been giving donors the option to either contribute to that effort (effectively funging us), or to request that we forward particular kinds of applications to them (with the applicants’ permission). If you’re a 6-figure-or-more donor and are interested in either of the two, you can reach out to us at inquiries@openphilanthropy.org. (Note that we may have to prioritise earlier/larger donors if we get a large volume of requests.) • 2 Dec 2022 23:27 UTC 170 points 24 ∶ 2 Maya, I’m so sorry that things have made you feel this way. I know you’re not alone in this. As Catherine said earlier, either of us (and the rest of the community health team) is here to talk and try to support. I agree it’s very important that no one should get away with mistreating others because of their status, money, etc. One of the concerns you raise related to this is an accusation that Kathy Forth made. When Kathy raised concerns related to EA, I investigated all the cases where she gave me enough information to do so. In one case, her information allowed me to confirm that a person had acted badly, and to keep them out of EA Global. At one point we arranged for an independent third-party attorney who specialized in workplace sexual harassment claims to investigate a different accusation that Kathy made.
After interviewing Kathy, the accused person, and some other people who had been nearby at the time, the investigator concluded that the evidence did not support Kathy’s claim about what had happened. I don’t think Kathy intended to misrepresent anything, but I think her interpretation of what happened was different from what most people’s would have been. I do want people to know that a lack of visible action doesn’t mean that no one looked into a situation or took it seriously. More here about why there may not be much visible action. I think these problems are really hard to deal with fairly and well. I’m sure my team doesn’t always get the balance right, but you can read more about our approach here. Cultural change also can’t all be handled by CEA or any one centralized source. We want to support organizers, employers, online spaces, and other EA spaces in building a healthy culture. To anyone who’s an organizer or otherwise shaping the culture of an EA space: we’re here to talk if you’d like to. • This is a very good post and a great framework for discussing existential risks. • Thanks! • Interesting, my takeaway from FTX was exactly the opposite: that we should focus on getting away from venture capitalists/acquiring as much money as possible/other mindsets that got us into this mess, and instead cultivate talent that are so dedicated to EA that they’re willing to do altruistic work for very little money. • Posted this early, so excuse any notifications. instead cultivate talent that are so dedicated to EA that they’re willing to do altruistic work for very little money. As someone who is working at an EA org for free, I don’t agree with this. I come from a background of non-EA youth advocacy for multiple cause areas, including education, climate change and animal rights. I have had so many good co-founders go into non-impact-focused, high-paying roles like consulting because they don’t get paid anywhere near the value they provide.
If you want good talent that knows how to plan, takes initiative, and knows how to execute, that kind of talent knows enough to apply to dozens of other better-paying roles, and probably enough to secure very high-paying ones. I work for free now because I’m in uni and it’s socially acceptable not to make full-time pay. If you underpay a competent person, they will not only face financial pressure, but also see it as a reflection of how they are valued. I don’t think this leads to healthy movement growth in the long run. • What percent of expenses in various cause areas are for professional staff in high-income countries? • My update from a case of fraud isn’t that money can’t be made ethically. This isn’t to dismiss the possibility of value drift etc., which we should take even more seriously than we have been. Having said that, a few things: 1. I am generally in favor of moving away from a vibes/patronage-based community to a more meritocratic, professional-ish group. The approach you suggested (i.e. not paying people well) doesn’t make it easy to hire people from the “outside world” whom we have a lot to learn from (like, hmm, corporate governance maybe? or accounting?). I think it’ll also make the diversity problem significantly worse, and continue selecting for privileged folks who can afford to do the work “purely altruistically”. 2. Also, there are a bunch of ways in which labor can’t substitute for capital. I work in biosecurity, and it seems like we can do significantly fewer things now, especially megaprojects that involve significant brick-and-mortar infrastructure. I wouldn’t be surprised if at some point down the road, AI safety also requires significant spend on compute/data, to say nothing of the myriad neartermist work that’s almost infinitely scalable.
In general, my update from the situation is more: we need money, but we also need better ops, more interfacing with the real world, better corporate governance, and generally fewer incestuous-looking orgs. • 2 Dec 2022 22:15 UTC 39 points 19 ∶ 0 Yes, we all benefit (on average, in expectation, etc.) from a more efficient labour market, and an important part of that is ensuring that workers hear about relevant opportunities for them. Not everyone is constantly refreshing the 80k job board, and many jobs are never listed, so it makes sense to do direct headhunter outreach to potential hires. Organisations should focus on trying to retain talent by being as positively impactful as possible, and by offering an attractive working environment/compensation package, not by keeping their employees in the dark about their alternatives. Obviously recruiting people based on misleading information is bad, but that’s true regardless of where you’re recruiting from, and similarly it’s bad to try to retain your current employees in misleading ways. • Minor readability suggestion: for very small probabilities, e.g. smaller than 0.01%, just state them as N out of 10^k, where N is between 1 and 10, or as N*10^k, where k is negative. I think numbers smaller than 0.01% are more intuitive when presented these other ways than as percentages; I’d normally have to do the conversion out of percentage first to get an intuitive grasp of their magnitude. • 2 Dec 2022 20:18 UTC 41 points 13 ∶ 0 I’m not a lawyer, but my understanding is that even informal agreements against headhunting other EA organizations’ employees would likely violate US antitrust law. • Thanks for writing this! I agree that this is a useful exercise. Some other considerations that may count in favour of neartermist interventions: 1. Nonhuman animals. If we go extinct, factory farming ends, which is good for these farmed animals if their lives are bad on average, which seems to be the case.
Impacts on wild animals could go either way depending on ethical and empirical assumptions. EA animal work is also plausibly much more cost-effective than EA global health and development work; my guess is hundreds or thousands of times more cost-effective, based on estimates for corporate chicken welfare campaigns and GiveWell recommendations. 2. More speculatively, sentient beings in simulated worlds may be disproportionately in short-lived simulations. Altruistic agents in these simulations will have more impact if they focus on the near term (since their influence will be cut short with the end of the simulation), and if their actions are acausally correlated with our own, we can choose for them to focus on the near term if we ourselves focus on the near term. This can multiply neartermist impact. (Of course, there are also other acausal considerations, like acausal trade. Those might not favour neartermist work.) • Thank you for the comment, I agree wholeheartedly with point number 1. It didn’t come up in this particular conversation because the person I was talking to wasn’t considering the welfare of nonhuman animals (or the EV of pandemic prevention), though personally those are considerations I’m making, and I hope that others make them as well. Do you think I should just do the math out in this post? (It’d be pretty simple, I think, though assuming a moral weight for nonhuman animals seems tricky.) Point number 2 is very interesting; I haven’t seen a write-up on this. Could you link any? Seems like maybe this makes it worth somebody’s time to get a good probability on us being in a simulation or not (though I don’t know how they’d do it). • Also, pandemic prevention in particular may prevent far more human deaths in expectation than just through averting extinction, because of non-extinction-level pandemics prevented, so considering just extinction risk reduction might significantly understate it.
(But again, this is assuming nonhuman animals don’t flip the sign of the EV.) • I don’t think it’s necessary to do the math with nonhuman animals in the post. You could just mention the considerations I raise, and that you would use different numbers and get different results for animal work. I suppose there could also be higher-leverage human-targeting neartermist work than ETG for GiveWell-recommended charities, too, and that could be worth mentioning. The fact that extinction risk reduction could be bad in the near term because of its impacts on nonhuman animals is a separate consideration from other neartermist work just being better. On 2, I don’t think I’ve seen any formal writeup anywhere. I think Carl Shulman made this or a similar point in a comment somewhere, but it wasn’t fleshed out in the comment, and I’m not sure that what I wrote is what he actually had in mind. • SBF is watching this thread closely • Hi! I have absolutely no expertise in this, but it seems good in the long term to maximize the quality of matches between employers and employees. So, formally, I suppose I disagree with the statement: Clearly, if a headhunter eases a bottleneck at a high-impact organization while creating a bottleneck at another equally high-impact organization, they are not having a positive effect. If an employee takes a job at another org, presumably they expect it to be a better match for them going forward. I’d count that as a positive effect, assuming (on average) it increases their effectiveness, decreases their chances of burnout, etc. Even if it’s just for money or location, it’s hard to know what intra-household bargains have been made to do EA work, etc. There might also be positive general-equilibrium effects: an expectation of a robust EA job market (with job-to-job transitions) increased my willingness to leave a non-EA job (academia) and enter this ecosystem. I would have been more hesitant had I felt there was a norm against hiring from other orgs.
Though I’ll flag that I’m not confident I accurately understand the term ‘headhunting’ here, as opposed to recruiting, as opposed to hiring. In any case, a strong ‘no headhunting/recruiting’ norm seems like it would weakly pressure orgs not to hire from other orgs (since they wouldn’t want to be seen as recruiting from other orgs). I get that there are costs associated with re-hiring, re-training, and re-integrating that would be avoided if the original org just directly hired from the non-EA-employed camp. Maybe I’m underestimating these! My uninformed guess is that they are small relative to the benefits of increasing match quality. Curious about others’ thoughts on this, though! Thanks for writing it. • Good points; I take back my earlier “Clearly...” statement, and agree it needs to also include utility gains for the worker in the calculation. Just to clarify, I wouldn’t be advocating that orgs don’t hire from peer orgs. Of course, post jobs, make them widely known, take and consider applications from all places. But I think it’s different to spend money on dedicated staff to directly target and aggressively recruit staff from friendly orgs within your ecosystem. • I share the negative emotional reaction to headhunting candidates from ostensibly allied organisations—it does inevitably feel like an adversarial move. Ultimately, though, I find it quite hard to justify this opposition intellectually. The main effect of headhunting is to provide employees with information—e.g. that they seem like a good fit for this exciting role they might not have known about (or considered applying to) otherwise. I support people making their own employment decisions on the basis of the best possible information, and (in most cases) oppose hiding information from people because it might cause them to make decisions we don’t like.
If you phrase an opposition to headhunting as “don’t make our staff aware of opportunities they might freely decide to pursue over their current job”, I think it sounds a lot more dubious as an organisational philosophy for an ostensibly altruistic organisation—it strongly suggests that management don’t have their employees’ best interests at heart. • Thanks for the comment; I see where you are coming from. As noted in a previous reply, I think a lot depends on how much the headhunter informs vs. convinces. There are a lot of parallels with advertising. Do we think that advertising performs a positive social function? Well, it could, if it simply provides information about a new product and allows consumers to make more informed choices. But the advertiser also has incentives to increase sales, so why would we trust them to be truthful and have everyone’s best interests at heart? Headhunters/recruiters have incentives to fill roles, so I don’t think we should assume that they are playing a neutral, information-providing role. • I don’t know nearly enough about headhunting to say anything definitive. But if we think they’re misleading—rather than informing—maybe the argument should be ‘EA orgs shouldn’t use headhunters’ for the reasons you laid out in these comments. It feels counterproductive from the org’s side to trick someone into a job they wouldn’t have taken with full information (*especially* for a community trying to operate with integrity). That seems like a distinct point from ‘EA orgs shouldn’t poach from one another’ (which is what it seemed like the post was about). In general, my prior is that norms should be the same for hiring the EA-employed and the non-EA-employed, whether that’s using headhunting services or not. • Yeah, this also seems right to me.
My experiences with headhunters in the broader world have been pretty bad, and many of them seemed pretty shady, so I would definitely dock an EA org a lot of points if I saw them reach out to people with deceptive marketing. • 2 Dec 2022 19:48 UTC 132 points 65 ∶ 0 Yes, I at least strongly support people reaching out to my staff about opportunities that they might be more excited about than working at Lightcone, and similarly I have openly approached other people working in the EA community at other organizations about working at Lightcone. I think the cooperative atmosphere between different organizations, and the trust that individuals are capable of making the best decisions for themselves on where they can have the best impact, is a thing I really like about the EA community. • Thanks for the comment; I understand where you are coming from, and see how this could go either way. But I think I’d tend to disagree. I’m always happy for people to be aware of other opportunities and consider them, but I think there’s a difference when there are paid professionals targeting specific people to switch jobs. These professionals tend not just to inform, but also to convince. So in the case of a job switch, you end up with a situation where the recruiting organization gains, the recruited-from organization loses, and the actual job-seeker perhaps gains, though this isn’t totally clear; it depends on how much their decision was motivated by information vs. convincing. And there’s a deadweight loss from the salary of the headhunter. Therefore, I think that the net effect of a headhunter could be positive or negative. Certainly it seems like they would have a higher impact if they recruited people from low-impact orgs to move to high-impact orgs. • I don’t know, this sounds to me like treating employees at EA organizations as children that have to be protected from “convincing misinformation”. 
My employees are totally capable of handling headhunters trying to convince them, and I think most other people in EA are too. These people are not children, and it’s not my right or job as an employer to protect them from harmful-to-me-seeming information, especially when I am obviously in a massive conflict of interest in regard to that information. • Perhaps obvious, but while I agree that your employer should not make it their business to protect you from misinformation of this kind, I still think that anyone who spread genuinely “convincing misinformation” would be doing something wrong and should stop. (I’m not necessarily expecting people to agree on whether a given headhunting pitch is misinformation or not, but in cases where it is, that’s obviously a problem.) • Wasn’t part of the general objection early on to Leverage over them appearing to ~headhunt (I don’t know details) from other orgs like MIRI? (That very well may not be part of your issues with them though?) • Indeed, I think that criticism (as well as the criticism that they recruited donors away from other organizations) was quite unjustified (and I contributed somewhat to it a few years ago). • How much would I personally have to reduce X-risk to make this the optimal decision? Shouldn’t this exercise start with the current P(extinction), and then calculate how much you need to reduce that probability? I think your approach is comparing two outcomes: save 25B lives with probability p, or save 20,000 lives with probability 1. Then the first option has higher expected value if p > 20,000/25B. But this isn’t answering your question of personally reducing x-risk. Also, I think you should calculate marginal expected value, i.e., the value of additional resources conditional on the resources already allocated, to account for diminishing marginal returns. • Hey, thank you for this comment. 
We actually started by thinking about P(extinction) but came to believe that it wasn’t relevant, because in terms of expected value, reducing P(extinction) from 95% to 94% is equivalent to reducing it from 3% to 2%, or from any other amount to any other amount (keeping the difference the same). All that matters is the change in P(extinction). Also, in terms of marginal expected value, that would be the next step in this process. I’m not saying with this post “Go work on X-Risk because its marginal EV is likely to be X”; I’m rather saying, “You should go work on X-Risk if its marginal EV is above X.” But to be honest, I have no idea how to figure the first question out. I’d really like to, but I don’t know of anyone who has even attempted to give an estimate on how much a particular intervention might reduce x-risk (please, forum, tell me where I can find this.) • How much would I personally have to reduce X-risk to make this the optimal decision? Well, that’s simple. We just calculate: • 25 billion * X = 20,000 lives saved • X = 20,000 / 25 billion • X = 0.0000008 • That’s 0.00008% in x-risk reduction for a single individual. I’m not sure I follow this exercise. Here’s how I’m thinking about it: Option A: spend your career on malaria. • Cost: one career • Payoff: save 20k lives with probability 1. Option B: spend your career on x-risk. • Cost: one career • Payoff: save 25B lives with probability p (= P(prevent extinction)), save 0 lives with probability 1-p. • Expected payoff: 25B*p. Since the costs are the same, we can ignore them. Then you’re indifferent between A and B if p = 8×10^-7, and B is better if p > 8×10^-7. But I’m not sure how this maps to a reduction in P(extinction). • 2 Dec 2022 19:34 UTC 11 points 1 ∶ 0 One thing to consider: it is possible that the ultimate real-world effect of a donation to certain organizations would be increasing the funds flowing back to the FTX bankruptcy estate rather than furthering the donor’s charitable intent. 
For one, donating any significant funds could increase the organization’s profile as a clawback target. Two, some organizations may need to close down whether you provide some additional funding or not. Three, some organizations could benefit from the cleansing waters of bankruptcy themselves (or settling any FTX claims first) before being infused with new money. Finally, if you want to donate to an organization with serious clawback risk, you may want to think about ways to structure that donation to minimize creditor risk. All of this is based on my view that prospective donors have no ethical obligations to FTX victims. Your mileage may vary if you hold a different view. • I have a friend in my program (not exactly EA, but EA-curious and a great guy) who has done a good deal of work with an organization that teaches and discusses philosophy with prisoners. If you would like, I can ask him if he would mind being put in touch, as he might have some useful insights/connections. • Yes, that would be great. Thank you! I am new to the forum so I don’t know what the usual level of contact info sharing is here. Let me know what works best for you. • No problem, welcome to the forum! You can feel free to share whatever you’re comfortable with, but personally I would recommend you don’t post your email address in the comments, as there was recently someone webcrawling the forum for email addresses to send a scam email to. I would reserve information like that for DMs; my own plan is to DM you his email address if and when he gives me his approval. If you’d prefer, feel free to give me your email address and I can send it to him instead; again, whatever works best for you. • Firstly, I want to flag that this prediction is in strong disagreement with market predictions: the rate on a 20-year treasury is 3.85% as I write this, suggesting that investors do not expect a dramatic increase in inflation. 
This is in one of the largest, most liquid, and most attended-to markets on the planet, the only competition I am aware of being other US Government bonds. Secondly, the weighted average maturity of US government debt is around five years, to give a concrete value for thinking about how long the US government can have much higher inflation before markets are able to fully react. That’s a moderate amount of time, but if you say that the US government is willing to accept multiple years of 15% inflation (an extremely bold claim), you could still only get a temporary 50% reduction in the debt without fixing the underlying entitlement issues. Which is why it is very strange that this post assumes as a hard constraint that the US government will fulfill its entitlement obligations. I’m not sure why that is assumed. Faced with the option set “inflation” and “cut Medicare and Social Security”, the government might easily choose to cut Medicare and Social Security. Yes, there have been promises, but they are not very credible. Maybe the inflation target gets set to 3% or 4%, numbers that are still very small, but cuts to the commitments seem at least as plausible as spending expands. Once you drop that assumed constraint, the option set of the government expands to a wide variety of more acceptable solutions. Finally, “Inflation is going to be terrifyingly high any day now: buy gold/crypto/my special security” has been a recurrent promise of financial snake-oil salesmen for decades. Always be careful when you see people claiming it, particularly if they’re also selling something. Debt fears have a similar pedigree: we might be told to be terrified of 130% now, but I remember back when it was 90%, which turned out to be an Excel error. They might be right this time, but you should look for a lot more than a single analysis without theoretical justification, which relies heavily on datapoints following legendarily expensive wars. 
In the period since the 1950s, attitudes towards government defaults have shifted. Monarchies act differently from independent central banks. • For the first and second example you listed, I think they fail the gender-reversal test. If it had been a woman who said she’d arranged a one-on-one with a man cause he was handsome, nobody would feel upset. Similarly if a group of girlfriends were privately ranking which of the men in the area they’d most like to sleep with. Interestingly, this actually happened at an EA workplace of mine once. I talked to the people involved and told them how it made me feel. They seemed surprised, then felt guilty, and after some discussion and debate (we are EAs after all), they decided to not do it anymore. I think this was more just a matter of low EQ and not thinking things through, rather than an objectification of women. • If it had been a woman who said she’d arranged a one-on-one with a man cause he was handsome, nobody would feel upset. Personally, I would be extremely upset, and report it to the community health team • I notice I’m confused. If a woman said “Ooh, he’s attractive. I should set up a one-on-one with him”, you would report that to the community health team? Why? This seems like ordinary and harmless behavior. Maybe not the most strategic way to have a good conversation or get a good long term partner, but hardly a threat to the community. (Although this is only one sentence. Maybe she only made this judgment after seeing that he had oodles of EA Forum karma, which is obviously the only correct way to evaluate mate quality 😛 ) I’m struggling to understand the thought process that would lead to this being reported. • As Kirsten mentioned, the context of it being an EA conference is key. If a woman said “Ooh, he’s attractive. I should set up a one-on-one with him”, you would report that to the community health team? 
I would assume it was a joke; if she was serious I would tell her not to; if she did it, I would report it. Why? Because EAG(x) conferences exist to enable people to do the most good, conference time is very scarce, and misusing a 1-1 slot means someone is missing out on a potentially useful 1-1. Also, these kinds of interactions make it much harder for me to ask extremely talented and motivated people I know to participate in these events, and for me to participate personally. For people that really just want to do the most good, and are not looking for dates, this kind of interaction is very aversive. This seems like ordinary and harmless behavior. Thankfully, in my experience, it’s not ordinary: the vast majority of people schedule 1-1s at EAGs to discuss ways to do more good. Also, as we can see from these posts and my personal reaction, it’s not always harmless. I really value EAG time! I really don’t want to ask my most altruistic and talented friends to come to EAGs and then have them get hit on, especially young ones that are choosing careers! There are other conferences and meetups for people that are looking for that. • I’m sure other people have answers about why they’d prefer not to have people book meetings based on attraction, but I’d like to say I support this kind of thing being reported to the Community Health team. The EAG team have repeatedly asked people not to use EAG or the Swapcard app for flirting. 1-1s at EAG are for networking, and if you’re just asking to meet someone because you think they’re attractive, there’s a good chance you’re wasting their time. It’s also sexualizing someone who presumably doesn’t want to be because they’re at a work event. Reporting this kind of breach of EAG rules seems entirely appropriate! https://twitter.com/amylabenz/status/1558435599668895745?s=46&t=unZ0UrHR9pJNN03keeNXcw • Thanks for sharing this. 
I think people have a tendency to overgeneralise about what “men” or “women” care about when having these conversations. • For what it’s worth, I’m a 30-year-old woman who’s been involved with EA for eight years, and my experience so far has been overwhelmingly welcoming and respectful. This has been true for all of my female EA friends as well. The only difference in treatment I have ever noticed is being slightly more likely to get speaking engagements. Just posting about this anonymously because I’ve found these sorts of topics can lead to particularly vicious arguments, and I’d rather spend my emotional energy on other things. • Supporting the community with this new competition is quite valuable. Thanks! Here is an idea for how your impact might be amplified: For every researcher who somehow has full-time funding to do AI safety research, I suspect there are 10 qualified researchers with interest and novel ideas to contribute, but who will likely never be full-time funded for AI safety work. Prizes like these can enable this much larger community to participate in a very capital-efficient way. But such “part-time” contributions are likely to unfold over longer periods, and ideally would involve significant feedback from the full-time community in order to maximize the value of those contributions. The previous prize required that all submissions be of never-before-published work. I understand the reasoning here. They wanted to foster NEW work. Still, this rule drops a wet blanket on any part-timer who might want to gain feedback on ideas over time. Here is an alternate rule that might have fewer unintended side effects: Only the portions of one’s work that have never been awarded prize money in the past are eligible for consideration. Such a rule would allow a part-timer to refine an important contribution with extensive feedback from the community over an extended period of time. 
Biasing towards fewer, higher-quality contributions in a field with so much uncertainty seems a worthy goal. Biasing towards greater numbers of contributors in such a small field also seems valuable from a diversity-of-thinking perspective. • 2 Dec 2022 18:20 UTC 2 points 1 ∶ 0 The headlines on this one would be a nice distraction :) • As a few people have mentioned, I have a very different financial background than SBF in that everyone knows how I became wealthy. In cinematic detail, no less! Moreover, as an American citizen and California resident, audits aren’t just a fun intellectual exercise for me—they really happen 😲. Recently (culminating in 2021), the California Franchise Tax Board audited my 2016-2018 filings. Here’s what my accountant wrote me about that: they have issued a “No Change” report. This audit was so extensive they literally ticked and tied every number on your tax return back to original source documents. We responded to a number of Information Document Requests (IDRs) and had over 20 video calls with the FTB team to talk them through various reporting positions on investments you hold. In talking with the lead agent on our audit conclusion call he said “we didn’t even find a rounding error… great work on the tax reporting.” For tax filings of this size this is the Super Bowl victory for us geeky accountants. We run self-audits every year to prepare for the real ones, and I generally see no benefit to trying to get away with cheating at wealth generation when there are many legitimate options in front of me. (e.g. when I’m not fishing for karma on twitter, I run the enterprise software company Asana. Check it out!) Would an independent auditor go deeper somehow than the govt who’s trying to find fraud and generate revenue? I guess so, maybe? Maybe you’ll trust that I didn’t fabricate the above? I’m open to it, but generally skeptical it’s actually adding that much. 
Perhaps PR self-oppo (like politicians do) would be a more novel exercise. • (Not urgent, feel free to ignore) I don’t mean to derail the thread, but one question I’d have, at some point, is to get some idea of what you might feel comfortable with regarding public forecasting of finances. I generally stay away from forecasts around individuals. However, in our (unusual) situation, a huge amount of the EA landscape is now highly dependent on your net worth over time. By chance, would you be okay with, or have reservations about, any of the following being publicly forecasted? Is there anything you might actively be interested in being forecasted? 1. Total funding spent by Open Philanthropy over time. 2. Major shifts in spending between cause areas by Open Philanthropy / Good Ventures, over time. 3. Your public net worth over time. I could imagine some people who would want this to happen anyway (i.e., would just be interested in the results), and others who really prefer privacy (also totally fine). On the extreme end there are scandal markets. My guess is that they would show very low probabilities of any scandal around you, but these markets are also very new+experimental, and I realize they make lots of people feel kind of icky. (I’d also flag that if it would help, I’d be happy to publicly host any of these sorts of forecasts for myself/QURI, for a while, first.) • Thanks so much for the information+links here! One tiny point: I have a very different financial background than SBF in that everyone knows how I became wealthy. In cinematic detail, no less! It’s looking like in a few years we’ll have cinematic representations of SBF too, for better or worse. Looking forward to comparing them. :) https://news.bitcoin.com/hollywood-streaming-giants-scramble-for-movie-rights-to-ftx-saga/ • Thanks for commenting, this is very reassuring! • And as a public company, Asana financials are also heavily audited. 
Here’s a new, audited, public filing from just yesterday: https://investors.asana.com/financials/sec-filings/sec-filings-details/default.aspx?FilingId=16238301 • 2 Dec 2022 17:51 UTC 23 points 8 ∶ 0 Thank you for having the courage to say this out loud. Just remember that the “EA community” does not have a monopoly on doing good. If the atmosphere is toxic, people are just going to leave, and rightfully so. You will be better off, and do more good overall, in environments and spaces where you feel respected and safe. If EA is not committed to such a space, it will slowly die off and be replaced by an organisation that is. • I had sort of the same reaction. To me, “doing the most good” is something I live by. I don’t identify as EA, but as altruistic. I find it sort of interesting when people refer to the “EA community” and make efforts to change it. I mean, they’re not wrong, but from another perspective, the “EA community” is almost an overgeneralization. For instance, there are animal rights activists, longtermists, and climate change activists all getting to know each other through EA. There are going to be toxic people or cliques, and it’s sort of weird to say “EA is ____” when plenty of people “within EA” have never met each other and have nothing to do with each other. Just some thoughts. I don’t disagree with the original post. • In a world where the EA culture is bad, but where no-one else is really doing any better, we may not be able to be replaced in this way, and it becomes even more important to ensure that we get the culture right here. • I think an attitude of irreplaceability can be dangerous: someone could easily make the mistake of thinking that bad apples need to be protected and covered up for in order to preserve the movement as a whole. (I certainly hope nobody thinks this way here, but this has happened before in other movements). In truth, the ideas aren’t going away. 
Individual people can be replaced, and new groups can form in the event of a blowup. Try and fix the culture, sure, but if it’s too far gone, don’t be afraid to blow the whistle and blow it up; in the long term it’s healthier. • That’s a true point—but I don’t think it’s a good objective. EA should strive to be made up of the best people, those highly aligned to doing good, and I think we need a culture that prioritises people’s lived experiences, feelings, and interactions for that to happen. • Of course. I’m not involved in any irl EA communities so I can’t really judge how bad/good they are. It would be better if the current movement survived with good community norms in place, but if it doesn’t, it’s not the literal end of the world; a new, better community will replace it. • Man, this is one of the best posts I’ve ever read on the forum. Extremely educational while remaining very engaging (rare to find both). Thank you for writing this, I hope you’ll do similar write-ups for other research you do! • Can’t wait for EAGuantanamo! Kidding of course, but I’m not sure how valuable it’d be given how difficult it is for former convicts to get jobs (e.g. low expected earnings to contribute to high-impact charities down the line). But for groups doing work on recidivism and the like, I do hope they are recruiting out of pools of ex-cons to really understand what the problems are that folks face. • Maya—thanks for a thoughtful, considered, balanced, and constructive post. Regarding the issue that ‘Effective Altruism Has an Emotions Problem’: this is very tricky, insofar as it raises the issue of neurodiversity. I’ve got Aspergers, and I’m ‘out’ about it (e.g. in this and many other interviews and writings). That means I’m highly systematizing, overly rational (by neurotypical standards), more interested in ideas than in most people, and not always able to understand other people’s emotions, values, or social norms. 
I’m much stronger on ‘affective empathy’ (feeling distressed by the suffering of others) than on ‘cognitive empathy’ (understanding their beliefs & desires using Theory of Mind). Let’s be honest. A lot of us in EA have Aspergers, or are ‘on the autism spectrum’. EA is, to a substantial degree, an attempt by neurodivergent people to combine our rational systematizing with our affective empathy—to integrate our heads and our hearts, as they actually work, not as neurotypical people think they should work. This has led to an EA culture that is incredibly welcoming, supportive, and appreciative of neurodivergent people, and that capitalizes on our distinctive strengths. For those of us who are ‘Aspy’, nerdy, or otherwise eccentric by ‘normie’ standards, EA has been an oasis of rationality in a desert of emotionality, virtue-signaling, hypocrisy, and scope-insensitivity. Granted, it is often helpful to remind neurodivergent people that we can try to improve our emotional skills, sensitivity, and cognitive empathy. However, I worry that if we try to address this ‘emotions problem’ in ways that might feel awkward, alienating, and unnatural to many neurodivergent people in EA, we’ll lose a lot of what makes EA special and valuable. I have no idea how to solve this problem, or how to strike the right balance between welcoming and valuing neurodiversity, versus welcoming and valuing more neurotypical norms around emotions and cognitive empathy. I just wanted to introduce this concern, and see what everybody else thinks about it. • Thank you very much for your perspective! I recently wrote about something closely related to this “emotions problem” but hadn’t considered how the EA community offered a home for neurodivergent folks. I have now added a disclaimer making sure we ‘normies’ remember to keep you in mind! • Throwing out one possible approach: 1. 
People think about where they have blindspots around reading certain styles of writing, and acknowledge that in those areas, they may not get the point being made, even if there is an important point. 2. When someone makes a post that communicates in a way that you identify as your blindspot, you think about whether you can respond in the same style that they communicated. 3. If you can—do so. If you can’t—you don’t have to respond to the post at all. This is the crux of my suggestion. If you just see the world differently from someone else, so much so that responding to it would involve a clash of your worldviews, it’s okay to just leave it alone. I think “let it go” is an undervalued approach on every internet forum, and especially so here. That’s my best guess at a strategy that works both for someone who systematizes a lot reading an “overly” emotional post, and for someone who systematizes very little reading an “overly” analytical post. But I agree this is something of a wicked problem and we need some way to tackle it. In the absence of an explicit approach, I think the OP is right to point out that people will just respond in an analytical way to emotional posts, and that may not help anyone at all. • I really appreciated your comment and think it’s important to acknowledge and ensure neurodiverse people feel welcome, and I’m coming from a place where I agree with Maya’s reflections on emotions within EA and am neurotypical. Not sure I have time to post my thoughts in depth, but I think the rational vs. intuitive emotional intelligence tension within EA is something worth a lot more thought. It’s a tension/trade-off I’ve picked up on in the EA professional realm: where people aren’t getting on in EA organisations, where people aren’t feeling heard, and where the working culture becomes one that’s more afraid of losing status (a threat mindset) than supportive, to the detriment of employees. 
Maybe as a counter to what you’re saying, some of the people who helped me best own and articulate my emotions (in the context of another EA repeatedly undermining me) are bay-area rationalist EAs who you might describe as neurodivergent. Why? I think a lot of people from that community have just done the work on themselves to recognise emotions in themselves, and consequently in others. And this is driven by valuing emotions and internal worlds intrinsically—in that integrating head and heart way you write about—and then getting better in that domain. So to link this back to Maya’s post: 1. I agree with making sure EA is truly inclusive and, in being better at responding to emotions and traumatic experiences, doesn’t swing to excluding neurodivergent people; 2. I think this tension/trade-off goes beyond the social realm, and into the professional; and 3. I would like to play up how many neurodivergent people—especially those who might instinctively behave in a way that creates the culture Maya has highlighted as problematic—can actually be really good at creating an emotionally responsive and caring environment. Happy to discuss further time permitting (which is sadly not on my side!) • howdoyousay—thanks for this supportive post. I agree that many neurodivergent people can develop quite a good set of emotional skills (like some of your Bay Area rationalists did), and can promote emotionally responsive and caring environments. 
(When I teach my undergrad course on ‘Human Emotions’—syllabus here—one of my goals is to help neurodivergent students improve their understanding of the evolutionary origins and adaptive functions of specific emotions, so they take them more seriously as human phenomena worth understanding) My main concern is that EA should not become just another activist movement where emotions over-ride reason, where ‘lived experience’ gets prioritized over quantitative data, and where neurodivergent people get cancelled, shunned, and stigmatized for the slightest violations of social norms, or for ‘offending’ neurotypical people. You’re right that striking the right balance is worth a lot more discussion—although my sense is that, so far, EA as a community has actually done remarkably well on this issue! • 2 Dec 2022 16:57 UTC 26 points 6 ∶ 1 I’ve worked to pitch (and in some cases, been the target of) investigative pieces for the last 15 or so years of my life, and honestly, nothing here strikes me as particularly troubling. These are routine errors in communication, or cognitive biases (e.g., salience bias), and probably not indications of any sort of wrongful conduct. • Misstatements about Sam’s frugality. It’s possible there was an effort to mislead, but seems more plausible that this was just salience bias. A billionaire driving a Corolla is salient; owning a luxury condo in the Bahamas is not. Unless there is evidence that people actively misled the reporter, this is not particularly notable to me, other than reminding us all not to fall victim to that bias. • Warnings about Sam’s misbehavior. Say someone causes others to find them completely ethical in 99.9% of all interactions. That’s a very high rate! But once one becomes prominent, even a very high rate of perceived ethical behavior will lead to a high number of warnings because the number of interactions increases exponentially. 
Every prominent person has at least some people saying, “They’re unethical.” This is often because the prominent person merely turned a request down, when many requests are being made. (I am much less prominent than Sam but have had this happen to me many times.) I find the failure to respond to the exceedingly vague allegations against Sam unremarkable. • Sam’s contradictions on malaria nets. You can read this as a lie. You can also read this as changing sentiments. We all contradict ourselves, and this is especially true when we are talking about highly speculative questions, such as cause prioritization, where our evidentiary basis is often quite slim, and where much depends on relatively small differences in the assumptions we make (e.g., a small change in the probability of hostile AGI). It is possible Sam is lying about his commitment to malaria nets to cover up his crimes. It’s also possible that he just changed his mind, or at different moments, has a different emotional and rhetorical commitment to various causes. To give another example, Sam once stated that he was very committed to animal protection. Over time, he shifted his commitments and seemed more focused on concerns such as AI. I don’t see that as a lie, even though I disagreed with it. It’s just change. The main thing that I would find concerning in this piece is the excessive focus on PR by EA leaders. Don’t focus on PR. Focus on trying to get a true and accurate account out there in the media. It’s very hard to manipulate or even strategize about how to portray yourself. It’s much easier to be real, because you don’t have to constantly perform. That should be a norm within EA, especially among leaders. • I think your point about the various “warning flags” is well-taken. Of course, in retrospect, we’ve been combing the forums for comments that could have given pause. 
But the volume of comments is way too large to imagine we would have actually updated enough on a single comment to make a difference. That said, I think the mass exodus of Alameda employees in 2018 should have been a bigger warning flag, cause for more scrutiny of the business, to the point where those with a concern for the risks should have tried to dig deeper with those employees, even with the complications that NDAs can pose. We can’t say we weren’t aware of it—that episode even made it into SBF’s fawning 80k interview, albeit mostly framed as “how do you pick yourself up after hardships?”. The best-case-scenario conclusion of such an investigation very likely wouldn’t have been “SBF is committing massive fraud”, especially as that might not have happened until years later. But I think it still would have been useful for the community to know that SBF had a reckless appetite for risk, so we could anticipate at least the potential for FTX to just outright collapse, especially as the crypto industry turned sour earlier this year. • 2 Dec 2022 16:46 UTC −16 points 4 ∶ 17 I am unpleasantly surprised by what I hear in this and other posts that make it sound like there is a sort of EA “party scene” or something. It sounds like EA men (and maybe others) need to focus a lot more on working their butts off every single day to help others, and a lot less (or ideally not at all, but that can be tough) on trying to get laid/dating. • Yeah, how dare they not dedicate every waking hour to helping others, the audacity! In all seriousness, pretty sure the problems here are things like people who don’t respect others’ boundaries or don’t recognize power dynamics, the culture that normalizes this, and the institutions that don’t adequately mitigate this risk, not the fact that people trying to do good in their life can also spend parts of their life socializing.
• In all seriousness, pretty sure the problems here are things like people who don’t respect others’ boundaries or don’t recognize power dynamics, the culture that normalizes this, and the institutions that don’t adequately mitigate this risk. That goes without saying. • 2 Dec 2022 16:31 UTC 17 points 10 ∶ 0 I feel the way you do. I feel your pain. Hugs and solidarity. • An extreme extension of utilitarian, rationalist, and effective altruist logic can blind us to the negative experiences of individuals and major flaws in the EA community. I fear that people within the EA community are not always taking allegations of harm seriously out of concern that (1) there is “more impactful” work that they could be doing than investigating such allegations, (2) investigating allegations of harm against prominent individuals may damage the reputation of Effective Altruism, and (3) some individuals are having such a “high impact” that they don’t want to find them guilty of an act that may impede such effective work. I overall agree with the ideas presented in this post and I think they deserve more attention. I think the above part is especially true. It’s true that discriminatory tendencies in a community doing good don’t erase its overall positive impact. HOWEVER. It does, as you state, exclude some people from helping. And if that “some people” is 50% (in some countries more) of college graduates, that seems like a really big problem. Thank you for writing this! • 2 Dec 2022 15:30 UTC 10 points 0 ∶ 0 Thank you for the important post!
“we might question how well neuron counts predict overall information-processing capacity” My naive prediction would be that many other factors predicting information-processing capacity (e.g., number of connections, conduction velocity, and refractory period) are positively correlated with neuron count, such that neuron count is pretty strongly correlated with information processing even if it only plays a minor part in causing more information processing to happen. You cite one paper (Chittka 2009) that provides some evidence against my prediction (based on skimming the abstract, this seemed to be roughly by arguing that insect brains are not necessarily worse at information processing than vertebrate brains). Curious if you think this is the general trend of the literature on this topic? • I think this post is super valuable. The following is an illustration of my endorsement. Humor has much more to do with error culture than is generally assumed. People who have a sense of humor put distance between themselves and the things they work on, and they don’t immediately collapse if a mistake is made. “It is a damn serious thing to be funny,” and by this is meant not only that it takes a lot of brain power to invent a good joke, but that good humor is based on deep and balanced seriousness. We need more of the latter, and a little less of the cramped ambition to point out every blunder to others. Humor is an enormous advantage in politics, because it can be used to say many things that would be insulting if said seriously. You probably know the story of Winston Churchill, who considered his French to be quite passable, while French people who listened to him spoke without hesitation of a “massacre of the French language”. De Gaulle later wrote that he learned English listening to Churchill speak French. In any case, Churchill had a sense of humor, and he used it as often as he used his bumpy French.
To make his point particularly forceful, he did not simply ask de Gaulle to give way to British troops in Africa, but stated curtly, “Si vous m’obstaclierez, je vous liquiderai.” And by humor I don’t mean a dull “permanent grin” or “making fun” at the expense of others, but the ability to laugh about the small and big shortcomings of life. Those who have a sense of humor can laugh at themselves. You wouldn’t believe how many people take themselves insanely seriously, and how much more pleasant it is when someone doesn’t take themselves so seriously for once. Don’t you feel the same way: in the long run, there is hardly anything more annoying and boring than colleagues who slink along the walls all year in a gloomy state of listlessness and, for no reason, put on an expression as if they were carrying a sign in front of them that says: “Anyone who wants to get along with me must first reveal the dark secret of my thoughtfulness. For I am insanely clever and not a simpleton like the good-humored majority in the room I am in at the moment.” Of course, you can get smarter without humor. Newton was anything but funny, and still brilliant. Schopenhauer was profound, but certainly not known for his laughing fits. Henrik Ibsen was an extremely creative mind and yet not a paragon of hilarity, quite the opposite. But the reverse conclusion is also not correct: putting on a wrinkled face and turning up your nose on principle does not make you insanely clever. I prefer Benjamin Franklin, who was so amused by the stilted titles of the scientific papers of his time that in a letter to the Royal Academy of Brussels he philosophized just as turgidly about the disadvantages of farting and proposed a prize for the discovery of a pill “that shall render the natural discharges, of wind from our bodies, not only inoffensive, but agreeable as perfumes”. • Thank you for this perspective! Once again, I think you’ve expressed better than I did the connection between humor and humility.
I love all of your historical examples as well; it has me thinking that, rather than following current examples of comedy, looking further back into the past might be an even more fruitful approach to getting inspiration for EA-applicable humor that has stood the test of time. • Great write-up! Sometimes establishing policy on a local level can help build momentum to scale it up. In DC, one similar policy we are advocating for is to have the Mayor establish an Animal Welfare Liaison. We have used candidate questionnaires to gauge interest in such an office. You can see an example of some results here: https://dcvfa.com/who-are-the-best-at-large-candidates-on-animal-issues/ • Excited to give this a listen. Anyone else listen to this podcast? I love this podcast, but this episode was a tough listen. Really made me think. • Hey Maya, I’m Catherine - one of the contact people on CEA’s community health team (along with Julia Wise). I’m so so sorry to hear about your experiences, and the experiences of your friends. I share your sadness and much of your anger too. I’ll PM you, as I think it could be helpful for me to chat with you about the specific problems (if you are able to share more detail) and possible steps. If anyone else reading this comment has encountered similar problems in the EA community, I would be very grateful to hear from you too. Here is more info on what we do. Ways to get in touch with Julia and me: • 2 Dec 2022 14:41 UTC 13 points 8 ∶ 0 Thank you for writing this. I’m sure it was very difficult to do, and so I really appreciate it. Effective altruism has an emotions problem. I strongly agree with this. Have you seen Michel’s post on emotional altruism? It doesn’t get to your points specifically, but it similarly argues for the need for more open emotion in the movement. I also want to add something that, in my experience, cannot really be ignored when speaking about the expression of emotion in EA in particular.
EA has a lot of people who are on the autism spectrum who may relate to emotion differently, particularly in the way they speak about it. There are others who aren’t on the spectrum but similarly have a natural inclination to be less, or differently, publicly emotional. EA/rationality can feel rather welcoming (like “my people”) to people like this (which is good—welcomingness of people whose brains work differently is good), and this may produce a feedback loop. This is not at all to deny your recommendations on this particular area. Rather, it is to acknowledge that some proportion (far from all) of what you call the “emotions problem” is probably just people being themselves in a way we should find acceptable, which means that I am a bit more confused about how to best address it. • thanks for pointing this out—I think this is a key point AND I think it is inflected by gender. My guess (not being an expert on autism, but being somewhat of an expert on gender) is that women who are autistic are more likely to learn, over time, how to display and react to emotion “like normal people”, because women build social capital through relational and emotional actions. Personal experience (I am a woman, to a first approximation): as a child I did not really understand emotion / generally felt aversion when other people expressed it. Over time I learned how to feel / respond to others’ emotions in a socially normative way, through observation and self-reflection and learning. This is not to say that those of us in EA who are naturally different w.r.t. our emotional processing should feel bad/abnormal, but to say that EA would be a more welcoming community, especially to women, if people in EA learned how to process and respond to “normative” emotional expressions. Someone above said that EAs see debate as an expression of caring, and I (a) am the same way and (b) understand that most people are not!
I’ve learned to ask “are you looking for discussion and finding solutions together, or are you not ready for that yet?” (Similarly, people with more normative emotional expression entering EA should learn to ask/adapt to the person they’re talking to.) I’ve been in spaces that I think are very good at this and have a cultural norm of it. • Hi Nick—thanks for the thoughtful post! I think cash arms make a lot of intuitive sense; my main pushback would be a practical one: cash and intervention X will likely have different impact timelines (e.g. psychotherapy takes a few months to work but delivers sustained benefits, while cash perhaps has massive welfare benefits immediately but they diminish quickly over time). This makes the timing of your endline study super important, to the point that when you run the endline is really what determines which intervention comes out on top, rather than the actual differences in the interventions. I have a post on this here with a bit more detail. Your point on the ethics here is an interesting one. I agree that medical ethics might suggest “control” groups should still receive some kind of intervention. Part of the distinction could be that medical trials give sick patients placebos, which control patients accurately believe might be medicine, which feels perhaps deceptive, whereas control groups in development RCTs are well aware that they aren’t receiving any intervention (i.e. they know they haven’t received psychotherapy or cash), which feels more honest? The downside is this changes the research question from “What is the impact of X?” to “How much better is X than cash?”, and there are lots of cases where the counterfactual really would be inaction. A way around this might be to give control groups an intervention that we know to be “good” but that doesn’t affect the specific outcome of interest. e.g.
I’ve worked on an agriculture RCT that gave control groups water/sanitation products that had no plausible way to affect their maize yield but at least meant they weren’t losing out. This might not apply to broad measures like WELLBYs. I’m honestly not sure about the ethical side here, though; I’m interested to explore further. • Thanks so much Rory, and for the links to your earlier post and the USAID stuff! I think your criticism is a good criticism of RCTs in general, but it seems to me more a criticism of RCT design than a clear argument against comparing with cash transfers. RCTs on development NEED longer-term outcome measurement, and surely need at a minimum 2 data points at 2 different times after the study. And of course the most important data point is after many months or even many years, as you talked about in your article. I’m not at all sure about the ethical side either. Medical RCTs compare a new trial treatment against the most up-to-date treatment—not so much because we worry about “tricking” a patient like you say (there are still plenty of RCTs with sugar placebo pills, which is deemed ethically OK—we are still OK with a kind of ‘deception’). What we AREN’T OK with is doing a trial where we give the control arm nothing at all, when we know there is a better option than nothing for the medical condition. And I’d argue that cash is usually a better option than nothing for many development conditions. That’s a great and sobering point about the counterfactual potentially being inaction if cash transfers won the day. Why should the counterfactual be inaction though? I would hope as development people we are good enough that if cash was equivalent to or better than intervention X, this wouldn’t lead us to inaction but instead to giving more cash.
Maybe I’m naive and idealistic though, and maybe you’re right that there is actually a practical advantage in seeing a positive impact of intervention X, even if it is worse than a cash transfer. I don’t think that should be the case though. That’s the whole question really—should we spend our millions on RCTs asking “What is the impact of X?”, or “Is X better than cash?”. What we really want to know, the practical question which underlies the research question, is “Should we be implementing this intervention at scale?”. I’d argue that to answer that, the comparison with cash is the one that matters more. Thanks so much for your reply, I can see you’ve thought about this far more than me and I loved your original post—weird that searches on the forum didn’t bring it up, maybe they should employ google search on the site haha. • I think a lot of these tips will suit everyone who plans to speak publicly. Except for audio equipment of course :) • I am open to the idea that private slack messages can be shared to make a point, but it seems that someone just shared a tonne of them (they range across Rob’s comments, FTX early warnings etc.) and I dislike that—it damages people’s ability to communicate freely if they think a chunk of messages are gonna get shared. • Population growth is an existential risk. The HANDY model shows many regimes where self-organizing systems grow to the point of catastrophic failure. These models attempt to explain the fall of prior isolated civilizations, including the complete loss of population, as on Easter Island, and the loss of civilization, as in the collapse of Teotihuacan. https://www.sciencedirect.com/science/article/pii/S0921800914000615 It would be worthwhile to factor in such risks. • Truly a sensitive subject this is. Just read this post twice in a row. Thank you! It would be great to see a few real-life experiences mentioned in the text too.
• EA: We should never trust ourselves to do act utilitarianism; we must strictly abide by a set of virtuous principles so we don’t go astray. Also EA: It’s ok to eat animals as long as you do other world-saving work. The effort and sacrifice it would take to relearn my eating patterns just isn’t worth it on consequentialist grounds. Sorry for the strawmanish meme format. I realise people have complex reasons for needing to navigate their lives the way they do, and I don’t advocate aggressively trying to make other people stop eating animals. The point is just that I feel like the seemingly universal disavowal of utilitarian reasoning has been insufficiently vetted for consistency. If we claim that utilitarian reasoning can be blamed for the FTX catastrophe, then we should ask ourselves what else we should apply that lesson to; or we should recognise that FTX isn’t a strong counterexample to utilitarianism, and we can still use it to make important decisions. • Thanks for the digest James! • Wow, looks like an empowering experience for a novice writer! Going to check those group writings in my spare time. • Thanks for info, friends! • Now THIS is truly an original post for this forum. Thanks! Enjoyed reading it, and it expanded my thinking! • Hey Maya! I just wanted to thank you for sharing your experience! I’m sure it wasn’t easy to write it up and it took a lot of courage, and I’m really glad you did it! • I want to agree with this, but I think that if SBF had “gotten away with it” we’d have taken his money, which makes me doubt our sincerity here. It sounds a lot more like “don’t get caught doing fraud”. • [ ] [deleted] • As a note, while I agree people thought that, via Alameda, FTX was “Using exchange data to trade against their own customers”, the fact that Alameda lost so much money makes me unsure whether this was actually true. • As another crypto insider: • the scam coin thing is true, the entire Solana ecosystem is full of such coins.
Many VCs in the space knowingly fund projects they know to be likely scams; Solana took this to an extreme, all backed by SBF funding. People in the space for a while knew they should basically stay away from the whole ecosystem. New users of course would have gotten ***ed. • frontrunning was a common accusation that could be true, and no one cared. I had money on FTX and my reasoning for that was not “these people won’t frontrun me” but rather “so what if these people eat 0.5% fees on each trade, my trades are still positive EV”. No one had an incentive to care about frontrunning because the profits (and general risk tolerance) in crypto are ludicrously high across the board. (Unless you’re a naive retail investor who is unaware how high your risk exposure is when you buy random coins on FTX.) • paying for tweet influencing is common across the board; many Twitter users were caught red-handed by other Twitter users. (Good luck getting successful lawsuits though.) I don’t know if SBF’s Solana projects engaged in this, but it would be surprising to me if not even one of them did. And in general, VCs often aren’t aware of and don’t care about all the scammy behaviour that projects they fund are involved in. More generally, there are strong incentives in the crypto space not to call out other investors and projects doing shady stuff, because this reduces the probability you will be invited to funding rounds, given access to deal flow, etc. If you’re pursuing earning to give in crypto, you should be aware that staying quiet about all the scams you see is almost a requirement if you wanna make big money. (This is assuming, of course, that you’re not involved in the scams yourself.) There is not a single rich person in crypto (starting with Vitalik himself, and every exchange CEO) who is unaware of this; most people keep quiet about it and do not name-and-shame individual projects that scam.
I myself chose to stay quiet about this for a long time after I left the space because “what if I later realise I want access to the ludicrous amounts of money here”. I made a post about it at the time: https://forum.effectivealtruism.org/posts/KPy4yuSsGk4qMwK3g/ea-opportunity-cryptocurrency-and-defi • 2 Dec 2022 12:05 UTC 9 points 1 ∶ 0 I really appreciate the summaries, thank you! • [ ] [deleted] • I completely agree with this. As a (Americans read: neo) Liberal who thinks the Green movement does far more harm than good, some of the political campaigning I’ve seen EAs do really puts me off and makes me question the entire movement. SBF’s lobbying of politicians in the US is another example of egregious misuse of funds. Until those checks and balances are in place, we should be focusing on directing funds to the most impactful causes. That should be the beginning and end of EA in my opinion. Politics is almost never the best-ROI approach to anything, using EA’s own methodology to calculate impact. There will of course be exceptions, but I find it hard to believe any amount of money will be better spent trying to influence a government as opposed to buying malaria nets. We also need to avoid thinking of and framing our actions as a group identity. It’s to be expected that people come to different and opposing conclusions even within a movement with clear stated principles. As such, political action shouldn’t be done in the name of the group as a whole. • We also need to avoid thinking of and framing our actions as a group identity. It’s to be expected that people come to different and opposing conclusions even within a movement with clear stated principles. As such, political action shouldn’t be done in the name of the group as a whole. YES • Duarte—I agree with your additional points here. FWIW, I was always uneasy with SBF’s massive donations to (mostly) Democratic politicians, and with his determination to defeat Trump at any cost, by any means necessary.
It just didn’t make sense in terms of EA reasoning, values, and priorities. It should have been a big red flag. But I think the lack of political diversity in EA, and many EAs’ tacit agreement with SBF’s partisan political views, led too many EAs to think it was no big deal that SBF was mixing EA and politics in unprincipled and somewhat bizarre ways. In the future, I think we should have stronger skepticism about anybody who tries to link EA to partisan political activism. • FWIW, I was always uneasy with SBF’s massive donations to (mostly) Democratic politicians, and with his determination to defeat Trump at any cost, by any means necessary. It just didn’t make sense in terms of EA reasoning, values, and priorities. It should have been a big red flag. I thought it was not super consistent with EA but easily explained by Sam’s parents’ careers and values. I often expressed worry about how it would affect our epistemics for EAs to become politicians bankrolled by Sam, or for the community as a whole to feel pressure not to undermine political moves that they would have to make to succeed, but I gave him personally a pass for wanting to spend some of his seemingly unlimited funds on political stuff because I assumed he had strong beliefs about politics as a lever for good from his upbringing. • Thanks for the submission, much appreciated! • Cheers for the entry, much appreciated! • I particularly appreciated: • That this looks to an organization outside the EA community • The brief pointers on limitations: they would make it easier for someone to build on this The code formatting could use a bit of work (you can format code in the forum editor), but that’s the least important factor. • Thanks for the entry, much appreciated! • Some comments about the approach: Heh, reminds me of some past work. In particular, see here.
When you say: So if capital allocated to EA is growing at a faster rate than labour (β>γ), our discount rate should be negative with respect to time; if labour is growing faster, it should be positive… Intuitively, this occurs because capital and labour are varying at some rates exogenously and we wish our level of capital per worker to be as close to constant over time as possible due to diminishing marginal returns to all inputs. I’m not sure whether this is the case. In particular, what does this assume about the return to capital and the return to labor? See equation 3 here: How low does r have to be for that conclusion to hold? It’s very possible that I am missing something. Labour growth is considerably more stable than capital growth, but still volatile, so will be assumed to be a constant rate of 10% with a standard deviation of 5%, with the lower bound taken as the mean due to difficulties in higher growth rates (30% would imply that 4% of the world would be engaged in EA-relevant work by 2060, which seems highly implausible) Arguably, labor growth is endogenous, not exogenous, and a function of both labor and capital? α will be assumed to be the same level as the economy as a whole, at 0.4. Why? It’s possible that it might be very different, and that this depends on the type of existential risk. E.g., some types of AI safety seem like they can be done while capital-constrained, and some types of biorisk might be particularly capital-heavy (e.g., funding better protective equipment). These results counterintuitively imply that the current marginal individual would be substantially higher marginal impact working to expand effective altruism than working on maximising the reduction in existential risk today, with 99.7% confidence One interesting thing to look at might be under what modelling assumptions this holds. Overall I like the approach. I think that most of the uncertainty is going to come from model error, though.
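As a toy illustration of the exogenous-growth setup being discussed (my own sketch with made-up rates, not the post’s actual model): if capital and labour each compound at noisy rates β and γ, capital per worker K/L drifts by roughly (1+β)/(1+γ) per year, so even a modest gap between the two rates compounds into a large imbalance over a few decades.

```python
import random

# Toy sketch (illustrative numbers, not from the original model):
# capital grows at mean rate beta, labour at mean rate gamma, each with
# Gaussian noise. With Cobb-Douglas output K^alpha * L^(1-alpha),
# diminishing returns make the drift in K/L the quantity of interest.
def simulate_capital_per_worker(beta=0.30, gamma=0.10, sigma=0.05,
                                years=30, seed=0):
    rng = random.Random(seed)
    k = l = 1.0
    for _ in range(years):
        k *= 1 + rng.gauss(beta, sigma)
        l *= 1 + rng.gauss(gamma, sigma)
    return k / l

# With beta > gamma, K/L ends far above 1 after a few decades, the
# regime in which the post argues the discount rate should be negative.
print(simulate_capital_per_worker())
```

This also shows why the endogeneity question above matters: if labour growth responded to the capital stock (recruiting funded by capital), the two rates would not drift apart this mechanically.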
• I’ve heard a number of stories of women feeling uncomfortable in EA spaces and they sadden me every time. • 2 Dec 2022 11:38 UTC 1 point 0 ∶ 0 FWIW, Richard Pettigrew has written a condensed version of their paper on the EA Forum. • 2 Dec 2022 11:36 UTC 102 points 23 ∶ 2 Hey Maya, I like your post. It has a very EA conversational style to it which will hopefully help it be well received, and I’m guessing took some effort. A problem I can’t figure out, which you or someone else might be able to help suggest solutions to:
- If I (or someone else) post about something emotional without suggestions for action, everyone’s compassionate but nothing happens, or people suggest actions that I don’t think would help
- If I (or someone else) post about something emotional and suggest some actions that could help fix it, people start debating those actions, and that doesn’t feel like the emotions are being listened to
- But just accepting actions because they’re linked to a bad experience isn’t the right answer either, because someone could have really useful experience to share but their suggestions might be totally wrong
If anyone has any suggestions, I’d welcome them! • Maybe one way to address this would be separate posts? The first raises the problems, shares emotions. The second suggests particular actions that could help. • I think you may be on the right track with how you wrote this comment actually—taking a moment to let the person know they were heard before switching to problem-solving mode. IMO social media websites should sometimes give users a reminder to do this after they hit the “submit” button, but before their comment is posted. Perhaps the submit button could check whether a particular tag is present on the original post? • This is well put. I think people can say that debating is their way of trying to care. Not a full solution, but I think people sometimes don’t realise this. • What do you think of ACE’s recent recommendations? • Some suggested remedies.
I know some of these are weird, but I honestly think they are good. Many solutions don’t attempt to manage the problem appropriately to scale, in a distributed way, or with correct incentives; I think these do: • Poll to understand the scale of the problem • Let’s find out how many people feel this way. Is there a link between this and the number of women in the community? We don’t have to guess this stuff, we can just know • People at EAGs can report people who used meetings to try and flirt with them in a way they didn’t like. Slowly increase punishments (I suggest probabilistic bans from EAGs, e.g. a 5% chance you are banned for 6 months) until the harms to women are less than the cost of the bans. I like flirting at EAG parties, so I think there is a different tone there, but it seems fine for there to be a high risk, during the day, to flirting without someone appreciating it. • I like probabilistic bans because most of the time they are just a warning, but they still sometimes have bite. • (This image is from the last time we had this discourse. I guess it would replicate in a representative poll. Most women don’t want to be flirted with at a conference during the day, though some do. As I say, it seems we should increase the cost of doing so) • People sometimes argue that I’m too harsh on this. But currently I think the harms from people being flirted with who don’t want to be are greater than the harms from those who would have their freedom curtailed, so I suggest we try it. • I unironically support people who have been harmed gossiping about people who have done so. If you hear a bad rumour about someone, by all means check it, but I think it’s okay to share what someone has said to you. • There are costs to this in terms of community trust, so consider carefully whether rumours are true, but I still think we undergossip tbh.
• There should be a clear process for what happens around bad behaviour in relation to EAGs in particular, and a way for people to be forgiven for bad behaviour (given credible change, and timescales based on badness). EA should not operate on reasonable doubt, but on balance of harms (and I say this as someone who sometimes falls afoul of this stuff—but the harms to all involved matter equally, whereas “beyond reasonable doubt” generally ignores harms to the accuser imo) • Scandal markets. I unironically think there should be a Manifold market on whether any EA above a certain reputation level is found guilty of harassment by an independent investigator. Then people can share their information by betting privately. Investigations happen at random • This sounds mechanistic and weird, but imagine if it were normal: would we remove it? I doubt it • Prediction markets are distributed whistleblowing • “What if the accused manipulates the market?” This increases liquidity and draws attention to the market • “Wouldn’t it feel awful/be tasteless for powerful people to have markets on whether they would harass someone?” I think the status quo is worse. I don’t mind putting additional burdens on people in positions of power. And I am confident that this would decrease the likelihood of the kind of big scandal that destroys other communities. • Because I believe you can’t advocate for this without having a market yourself, mine is here. • I’m worried a lot of this is missing the point, and potentially missing important solutions. I’m going to use EAG for my examples here, as I think it is the strongest case of what I’m describing, but I think my argument generalises to a lot of scenarios and spaces in the EA community. In my mind, there are two competing things going on here: 1. At an EAG, you are likely to meet people who are at a similar stage in their life to you, who have similar interests, and who are likely to be both intelligent and altruistic, both attractive qualities.
If you meet one of these people, and they feel similarly about you, you could enjoy some flavour of romance together, and it would be mutually fulfilling. Things being mutually fulfilling between parties is self-evidently a good thing.
2. At an EAG, some people, primarily women, have bad experiences as a result of others’ romantic attention. These experiences can range from uncomfortable to traumatic.
I think these negative experiences can then be grouped into two further categories:
a. Those that are the result of malicious intent
b. Those that are the result of power dynamics, and can arise despite positive intentions.
I think your solutions are primarily concerned with the 2a category, and when reading it I was reminded of this comment, which I think puts it better than I could. There are people with malicious intent in every community, and I don’t think EA requires any particularly novel solution to deal with them. I agree with Isabel in that I’m also worried that when these threads come up, people will spend their efforts trying either to gauge the size of the problem or to theorise the optimal solution, rather than take any meaningful action.
I think 2b can be equally damaging, and more should be done about it. Because EA is such a small, well-resourced community, there are especially strong power dynamics at play between individuals. As discussed in the blog post linked above, the EA community does not have strong boundaries between professional and romantic lives; in fact, it seems especially tolerant of this intermingling. I claim this is a strongly negative thing. If someone, say a prospective future employer/grantmaker/“senior leader”, starts flirting with me at EAG, even if they are being incredibly respectful and only have good intentions, I am under a lot of pressure to cooperate, even if that’s not what I want at all.
If at the start I do genuinely reciprocate that attraction, and we engage in some kind of romantic interaction, and I later change my mind, there is again a huge pressure on me not to leave the arrangement, even though that’s what I want to do. I’m not suggesting that EAs shouldn’t date one another, but I am suggesting a much stronger acknowledgement of the power dynamics at play, on both an individual and a community level. Due to the lack of community emphasis, I suspect many beneficiaries of power dynamics in these situations do not think of themselves that way, and so may inadvertently do harm (this isn’t aimed at you personally; I don’t know whether you are or aren’t aware of this). It seems plausible to me that this would also help with 2a, as well as make the community feel more inclusive.
• How about an opt-in speed dating event in the evening? That way the 40+% of women who desire flirts can obtain them, and there is no need or excuse to flirt with people during professional 1-on-1s. If the conference organizers aren’t comfortable organizing a speed dating event, maybe one of the women who wants to be flirted with could step up and organize it unofficially. Could do lottery admission to keep the gender ratio even.
Edit: An EA matchmaking service is another idea.
2nd edit: Amanda Askell says she likes ambiguity. Maybe you women should put your heads together on this.
• I apologize for making so many edits instead of submitting separate comments the way Nathan did. Based on checking the vote tallies on this comment repeatedly, I think it got most agreevotes after the first edit and before the second one (I believe the agreevote was at around +8 at one point), suggesting that matchmaking is the idea people like most. Also, by “maybe you women should put your heads together on this” I was essentially suggesting a panel or focus group. I find myself increasingly unenthusiastic about participating in this thread.
I think it could use a little more assumption of good faith and sense of humor, instead of what feels like eagerness to take offense.
• “Maybe you women should put your heads together on this”
I’m sure they’ll discuss it at the next big meeting.
• Scandal markets are a good idea.
• A transparent process with room for forgiveness, but one that considers harms to all parties (rather than underrating the accused).
• Gossip is good.
• Punishments for people who make people uncomfortable at EAGs seem like a good idea.
• idk about “punishments” exactly; I would like EAG organizers to prioritise preventing harm, rather than acting as a justice system. Preventing harm is sometimes going to mean making clear to people that they should stop doing what they’re doing, and sometimes going to mean temporarily or permanently excluding people. These things look like punishments, but I don’t know if I’d describe them as such.
• Polling seems like a good idea.
• When talking about Sam Bankman-Fried, I have repeatedly read the claim that EA failed because it didn’t put sufficient effort into checking his background. It might be worthwhile to fund a new organization, ideally as independent as possible from other orgs, whose sole reason for existence is to look into the powerful people in EA and criticize them when warranted. While it might be great if CEA were able to fill that role, they happen to be an org that in the past didn’t honor a confidentiality promise when people came to them with criticism of powerful people in EA, and they don’t think this was enough of a problem to list it on their mistakes page.
• I very much doubt the reason it won’t be made privately available is that Pfizer thinks it wouldn’t be worth it. More likely it’s down to sufficient stock being available in the NHS for the cohort that will be receiving it, and the government not wanting to add more demand, which would increase the cost per dose for the NHS.
It’s perverse, but a likely consequence of the Beveridge-style universal healthcare system used in the UK.
• The suggestion is to treat the COVID vaccine like the flu vaccine, and make it free for those who need it most and available to buy for those who don’t. Making it available for sale doesn’t increase costs to the NHS.
• What’s stunning to me is the following:
“There may not have been extended discussions, but there was at least one more recent warning. ‘E.A. leadership’ is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.”
Leaking private Slack conversations to journalists is a 101 on how to destroy trust. The response to the SBF and FTX betrayal shouldn’t be to further erode trust within the community. EA should not have to learn every single group dynamic from first principles; the community might not survive such a thorough testing and re-learning of all the social rules around discretion, trust, and why it’s important to have private channels of communication that you can assume will not be leaked to journalists. If the community ignores trust, networks, and support for one another, then the community will not form, ideas will not be exchanged in earnest, and everyone will be looking over their shoulder for who may leak or betray their confidence. Destroying trust decimates communities; we’ve all seen that with SBF. The response to that shouldn’t be further, even more personal and deep betrayals. I will now have to update towards being less open in discussions with other EAs, which is a shame, as the intellectual freedom, generosity, honesty and subtlety are what I love about this community, but it seems I will have to treat “what might a journalist think of this if this person leaked it?” as a serious concern.
• Even if you’re not concerned about leaks, the possibility of compelled disclosure in a lawsuit has to be considered. So if it would be seriously damaging for information to show up in the New Yorker, then phone, in-person, and Inspector Gadget telegram should be the preferred methods of communication anyway. I definitely appreciate the point about trust; I just wanted to add that people should consider the risks of involuntary disclosure through legal process (or hacking) before putting stuff in writing.
• “There may not have been extended discussions, but there was at least one more recent warning. ‘E.A. leadership’ is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.”
I don’t know about others, but I find it deeply uncomfortable that there’s an invite-only conference and a private Slack channel where, amongst other things, reputational issues are discussed. For one, there’s something weird about saying, on the one hand, “we should act with honesty and integrity” and, on the other, “oh, we have secret meetings where we discuss whether other people are going to make us look bad”.
• The thing that most keenly worries me about this is the lack of openness and accountability involved. We are a social movement, so of course we will have power dynamics and leadership. But with no transparency or accountability, how can anyone know how to make change?
• I think it’s wrong to say there’s no transparency or accountability (this isn’t to say we should just assume the checks we have now are enough, but I don’t think we should conclude that none so far exist). Obviously, for anything actually criminal, proper whistleblowing paths exist and should be used!
At the moment, I think even checks like this discussion are far more effective than in most other communities, because EA is still quite small, so it hasn’t got the issues of scale that other institutions or communities may experience.
On transparency: transparency is a part of honesty, but it has costs, and I don’t think it’s at all clear in this instance that that cost was remotely required to be paid. Again, this will only cause future discussions to be slower, more guarded and less honest; the community response to this will similarly decide how much we should guard ourselves when talking with other EAs. As a side point, it’s also the case that this instance isn’t actual “transparency” but lines fed to a journalist, then selectively quoted and given back to us.
The cost of transparency in every discussion at a high level of leadership (for example) is that the cost of new ideas becomes prohibitively high, as everyone can pick you apart, weigh in, misrepresent, or redirect discussion entirely. Compare e.g. local council meetings with the public and those without, and decisions made in committee vs those made by individual founders. Again, transparency is a part of honesty, but I can put my trust in you, for example, without needing you to be transparent about every conversation you have about me. If, however, the norm is that we expect total transparency of information and constant leaks, then we should expect a community of paranoia, dishonest conversation and continuous misrepresentation of one another.
• I think you may be assuming that what I am calling for here is much more wide-ranging than it is. There still doesn’t seem to be a good justification for not knowing who is in the coordination forum or on these leadership Slack channels.
Making the structures that actually exist apparent to community members would probably not come at such a prohibitively high cost as you suggest.
• I think it’s completely fine for invite-only Slacks to exist and for them to discuss matters that they might not want leaked elsewhere. If they were plotting murders, or were implicated in serious financial crime, or criminal enterprise, or other such awful, unforgivable acts, then yes, I can see why we would want to send a clear signal that anything like that is beyond the pale and discretion no longer protects you. In that instance I think no one would object to a breach of trust. However, we aren’t discussing that scenario. This is a breach of trust, which erodes honest discussion in private channels. The more this is acceptable in EA circles, the less honesty of opinion you will get, and the more paranoia will set in. Acting with honesty and integrity does not mean opening up every discussion to the world, or having an expectation that chats will leak in the event that you discuss “whether other people are going to make us look bad”. Never mind the difficulty that then arises in attempting to predict what else warrants leaks, if that’s the bar you’ve set.
• This strikes me as weirdly one-sided. You’re against leaking, but presumably you’re in favour of whistleblowing: people being able to raise concerns about wrongdoing. Would you have objected to someone leaking/whistleblowing that e.g. SBF was misusing customer money? If someone had done so months ago, that could have saved billions, but it would have been a breach of (SBF’s) trust. The difference between leaking and whistleblowing is … I’m actually not sure. One is official, or something?
• This fundamentally misunderstands norms around whistleblowing. For instance, UK legislation on whistleblowing does not allow you to just go to journalists for any and all issues; they have to be sufficiently serious.
This isn’t just for “official” reasons but because it’s understood that trust within institutions is necessary for a functioning society/group/community/company, and norms that encourage paranoia over leaks lead to dishonest conversations filtered through fears of leaks. Even in the event that a crime is being committed, you are expected to go first to the authorities rather than to journalists, and to journalists only if you believe the authorities won’t assist. In the SBF example, I’d hope someone would have done precisely that. Moreover, to protect trust, whistleblowing is protected only for issues that warrant that level of trust breach; i.e., my point is that this is a disproportionate breach of trust, with long-term effects on community norms. Furthermore, whistleblowing on actual crimes is entirely different from leaking private messages about managing PR, and, again, is something one should do first through the authorities, not necessarily through journalists! Essentially, you are conflating very serious whistleblowing of crimes to police or public bodies with leaked screenshots to journalists of private chats about community responses.
• I’d guess the distinction would be more “public interest disclosure” than “officialness” (after all, a lot of whistleblowing ends up in the media because of inadequacy in “formal” channels). Or, with apologies to Yes Minister: “I give confidential briefings, you leak, he has been charged under section 2a of the Official Secrets Act”. The question seems to be one of proportionality: investigative or undercover journalists often completely betray the trust and (reasonable) expectations of privacy of their subjects/targets, and this can ethically vary from reprehensible to laudable depending on the value of what it uncovers (compare paparazzi to Panorama). Where this nets out for disclosing these Slack group messages is unclear to me.
• You don’t have to agree with or understand someone to extend compassion!
Speaking for myself, sometimes I fear compassion will be used as an attack to push for concessions, so before I outwardly express it, I check whether I agree with the criticisms. In that sense, discussion can be me taking something more seriously, not less. Now, I’m not saying that’s helpful, but I do think there are different communication styles at play here. I’m sad to hear that you’ve felt the way you describe.
• Discussion can be compassionate. Disagreement can be compassionate. In fact, I’d argue that failing to have compassion and empathy for someone making a point is going to pretty seriously impair your ability to engage with the point, and even if you do, you’re going to have a hard time communicating about it in a way that will be heard. I think seeing these things as in tension is a mistake.
• I genuinely find this fascinating. I don’t think I’ve ever felt worried that expressing empathy would be used as a push for concessions, and I haven’t offered it with that intent. I think your experience might be common though, perhaps among men in particular, and I think we should talk about it more. Thanks for putting this out there.
• Yeah, I think this article is a bit of a case in point. What is the author wanting, if not significant changes, and what are many comments rejecting, if not discussion of whether that is reasonable?
• I would like a polling question on this. I think that, say, 10-30% of women have had 2 or more belittling experiences at an EAG, and that’s bad. But I read this paragraph and it seems alien to me. What % of women and non-binary folks have this experience in EA?
I could tell you how tears streamed down my face as I read through accounts of women who have been harmed by people within the Effective Altruism community. I could describe how my fists curled and my jaw clenched as I scrolled through forum comments and Reddit threads full of disbelief and belittlement.
I could try to convey the rising temperature of my blood as it boiled; I could explain to you that I could not focus in class for a full two days. But I don’t think that I will. I’m unsure that the Effective Altruism community has room for my anger.
• Agreed that it would be very helpful to have a widely distributed survey about this, ideally with in-depth conversations. Quantitative and qualitative data seem to be lacking, while there seems to be a lot of anecdotal evidence. Wondering if CEA or RP could lead such work, or whether an independent organization should do it.
• I would mainly like it to be easy to fill out, so that the results are representative. I think it’s pretty easy for surveys like this to end up filled in only by the people with the strongest opinions.
• If it’s worth anything, I would expect that figure to be between 28% and 43%. (I’m anchoring on your estimate; I would probably have guessed somewhere around 40-55% if I hadn’t read your comment.)
• I thought I would surface some of the points from the post and allow people to express opinions on them. I know this can seem off, but I think cheaply allowing us to see what the community thinks about stuff is useful.
• “I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some of) her claims and her suicide. What is clear is the pages and pages of Tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.”
Agreevote if you think the actions here are, on balance, bad; disagreevote if you disagree.
• “I read this blog post and the comments and controversy it generated. The amount of invalidation and general nastiness in the comments (which have since been deleted, so I won’t link to them) shocked and saddened me.”
Agreevote if you think the nasty comments were from people in the EA community, disagreevote if you think they weren’t.
• “At an EAG afterparty, an attendee talked about how he scheduled a one-on-one with someone because he found her attractive.”
Agreevote if you think this is good/fine, disagreevote if you think it’s bad.
• [deleted]
• I have heard 2+ accounts of this (heck, as I’ve apologised for before, I’ve done it), so I think it’s pretty common. My stance is that EAGs should have a high penalty for making people feel they aren’t valued for their work. People can take the risk if they want to, but there should be a high penalty if people are bad at it. Social gatherings and afterparties are, in my opinion, the place to flirt without the risk of some kind of community sanction.
• “He casually mentioned that some of them made a list ranking women in EA in the Bay that they wanted to hook up with.”
Agreevote if you think this is good/fine, disagreevote if you think it’s bad.
• Imagine the opposite situation: a group of women talking in detail about which men they would want to hook up with. Agreevote if you think that would be good/fine, disagreevote if you think that’s bad.
• 2 Dec 2022 10:12 UTC
What systems/solutions currently exist for “dealing with” misconduct, harassment, or assault after it happens? What systems should exist?
• I feel some hesitation about solutions that involve handing the power to blacklist or “punish” people to one agency.
• But it’s really hard for individuals to publicly post about other people’s problematic behavior.
• A friend of mine in the EA community told me they had been sexually harassed and stalked by another EA member and were considering posting on social media about it.
I encouraged them to do so. They were scared of potential backlash, so they didn’t.
• But I didn’t post about it at all. It feels inappropriate for me to do so on someone else’s behalf, especially since I’m not particularly wrapped up in Y’s life.
• I wonder if scandal markets could potentially be useful (as Scott Alexander discussed in a recent thread), or something else inspired by scandal markets.
• I think it’s plausible that we could use scandal markets on high-profile people in the EA community.
• Scott writes, and I pretty strongly agree: “I’m tired of bad things happening, and then learning there was a ‘whisper network’ of people who knew about it all along but didn’t tell potential victims. It’s unreasonable to expect the suspicious to come out and make controversial accusations about powerful people on limited evidence. But a prediction market seems like a good fit for this use case.”
• But the majority of the people who do/will do/have done problematic things are unlikely to be high-profile enough to have a scandal market made about them. It seems plausible to me that a well-designed system would be able to deal effectively with the kinds of issues the OP talks about, and with things like financial misconduct. But I feel like there are a few challenges:
• I can’t think of any community that effectively deals with misconduct that isn’t also authoritarian-esque (the CCP comes to mind). (That being said, please comment with other communities or systems that do effectively deal with misconduct.)
• And a well-designed system should reflect that different problematic actions require different responses.
• I think this points to the weakness of a centralized system: most people agree that things like rape should lead to removal from the community.
But a lot of things are debatable (like making a ranked list of women someone wants to hook up with), and if CEA or whoever implemented some response as “the authority”, it would almost certainly be opposed by some for being too lenient and by some for being too harsh.
• It almost feels like making public knowledge of these kinds of things is the right thing to do, because then people will react accordingly.
• But simply saying “we’re going to publicize every distasteful thing others do, so that people can decide for themselves how they should respond” feels bad for a lot of reasons. For one thing, it would erode trust between members if people felt like they might be publicly outed for small infractions.
• I actually think people should be complaining to, or even complaining about, the community health team significantly more than they are. People on that team are paid to address problems like misconduct/harassment/assault. Complaints like Maya’s should be a key performance metric for them. In my view, there should be a stronger default of people like Maya contacting the community health team to say “hey, I heard about women getting ranked in a way that made me uncomfortable”, and of the community health team privately contacting the rankers to say “hey, you aren’t helping our goal of a warm professional community that welcomes a wide variety of people and incentivizes them to care about doing good over being hot”. Some might find this draconian; to clarify, I don’t think disciplinary action is justified here. I just think these conversations would be positive expected utility if done well.
• Thanks for sharing. I think EAs are only ethically better than other people under consequentialist ethics, but are just as bad as anyone else when it comes to virtues and obeying good social rules, which is sad, because we can and should do better.
• I don’t think this is true.
Not sure how you’d measure or verify/refute this, but I suspect that the average EA man objectifies women much less than the average non-EA man. It’s just that we have an imbalanced gender ratio, so these incidents are disproportionately concentrated onto a few women, which is really unfair to them.
• Fantastic post! This is more informative AND more interesting than most philosophy papers on the topic. You accurately summarise meta-ethical hedonism and provide fair criticisms. This is up there with your post on infinite ethics. If I can find the time, I’ll write down why I disagree and post it here or send you an email.
• “I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some of) her claims and her suicide. What is clear is the pages and pages of Tumblr posts and Reddit threads, some from prominent members of the EA and Rationalist communities, disparaging Kathy and denying her accusations.”
I’m one of the people (maybe the first person?) who made a post saying that (some of) Kathy’s accusations were false. I did this because those accusations were genuinely false, could have seriously damaged the lives of innocent people, and I had strong evidence of this from multiple very credible sources. I’m extremely prepared to defend my actions here, but prefer not to do it in public in order not to further harm anyone else’s reputation (including Kathy’s). If you want more details, feel free to email me at scott@slatestarcodex.com and I will figure out how much information I can give you without violating anyone’s trust.
• I came to the comments here to also comment quickly on Kathy Forth’s unfortunate death and her allegations. I knew her personally (she sublet in my apartment in Australia for 7 months in 2014, but, more meaningfully in terms of knowing her, we also overlapped at Melbourne meetups many times and knew many mutual people). Like Scott, I believe she was not making true accusations (though I think she genuinely thought they were true). I would have said more, but will follow Scott’s lead in not sharing more details. Feel free to DM me.
• “(some of) Kathy’s accusations were false”
Just to draw some attention to the “(some of)”: Kathy claimed in her suicide note that her actions had led to more than one person being banned from EA events. My understanding is that she made a mixture of accusations that were corroborated and ones that weren’t, including the ones you refer to. I think this is interesting because it means both:
• Kathy was not just a liar who made everything up to cause trouble. I would guess she really was hurt, and directed responsibility for that hurt to a mixture of the right and wrong places. (Maybe no one thought this, but I just want to make clear that we don’t have to choose between “she was right about everything” and “she was wrong about everything”.)
• Kathy was not ignored by the community. Her accusations were taken seriously enough to be investigated, and some of those investigations led to people being banned from events or groups. Reddit may talk shit about her, but the people in a position to do something listened.
(I should say that what I’m saying is mostly based on what Kathy said in her public writings combined with second- or third-hand accounts, and despite talking a little to Kathy at the time, I’m missing almost all the details of what actually happened. Feel free to contradict me if something I said seems untrue.)
• Responding to the attention on Kathy’s specific case (I’m aware I’m adding more to it): I think we’re detracting from the key argument that the EA community as a whole is neglecting to validate and support community members who experience bad things in the community. In this post, it’s women and sexual assault primarily. But there are other posts (1, 2) exemplifying ways the EA community itself can and should prioritise internal community health. To argue the truth of one specific example might be detracting from recognising that this might be a systemic problem.
• Can you link to your post? I’m asking in order to avoid the (probably already existing) situation where people see that “some of Forth’s accusations” are allegedly not true, but they don’t know which, so they just doubt all of them.
• If someone has a record of repeatedly making accusations that have been proven false, I think it is reasonable and prudent to “just doubt all” their accusations. This person was clearly terribly ill and did not get the help she needed and deserved. It’s painfully clear from reading her heartbreaking note that she was wildly out of touch with reality.
• 2 Dec 2022 16:46 UTC
Parent edit: after discussion below and other comments on this post, I feel less strongly about the claim “the EA community is bad at addressing harm”, but stand by (and am clarifying) my general point, which is that the veracity of Kathy’s claims doesn’t detract from any of the other valid points that Maya makes, and I don’t think people should discount the rest of these points.
A suggestion to people who are approaching this from a “was Kathy lying?” lens: I think it’s also important to understand this post in the context of the broader movement around sexual assault and violence.
The reason this kind of thing stings to a woman in the community is because it says “this is how this community will react if you speak up about harm; this is not a welcoming place for you if you are a survivor.” It’s not about whether Kathy, in particular, was falsely accusing others. The way I read Maya’s critique here is “there were major accusations of major harm done, and we collectively brushed it off instead of engaging with how this person felt harmed”, which is distinct from “she was right and the perpetrator should be punished”. This is a call for the EA community to be more transparent and fair in how it deals with accusations of wrongdoing, not a callout post of anybody. Perhaps I would feel differently if I knew of examples of the EA community publicly holding men accountable for harm to women, but as it stands, AFAIK we have a lot of examples like those Maya pointed out and not much transparent accountability for them. :/ Would be very happy to be corrected about that.
(Maya, I know it’s probably really hard to see that the first reply on your post is an example of exactly the problem you’re describing, so I just want to add, in case you see this, that I relate to a lot of what you’ve shared, and you have an open offer to DM me if you need someone to hold space for your anger!)
• “The EA community is bad at addressing harm”
As another data point: I’m a woman, I think I’m the main reason a particular man has been banned from a lot of EA events under certain conditions, and I think CEA’s Community Health team have handled this situation extremely well. But on balance, I’ve found that men in EA treat me with a lot more respect than men do outside of EA. And if anything, I think any complaints I do make are taken too seriously. This doesn’t excuse bad behaviour, of course, even if my experience were typical.
But I have always wondered why so much of our energy goes into how women feel in this community vs people with other marginalised characteristics, some of whom no doubt also feel “sad, disappointed, and scared” in EA (e.g. discussions nominally of “diversity and inclusion” often end up just being discussions of how to treat women better). • Predictably, I disagree with this in the strongest possible terms. If someone says false and horrible things to destroy other people’s reputation, the story is “someone said false and horrible things to destroy other people’s reputation”. Not “in some other situation this could have been true”. It might be true! But discussion around the false rumors isn’t the time to talk about that. Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in her organization by sleeping her way to the top. If this was false, the story isn’t “we need to engage with the ways Bob felt harmed and make him feel valid.” It’s not “the Bob lied lens is harsh and unproductive”. It’s “we condemn these false and damaging rumors”. If the headline story is anything else, I don’t trust the community involved one bit, and I would be terrified to be associated with it. I understand that sexual assault is especially scary, and that it may seem jarring to compare it to less serious accusations like Bob’s. But the original post says we need to express emotions more, and I wanted to try to convey an emotional sense of how scary this position feels to me. Sexual assault is really bad and we need strong norms about it. But we’ve been talking a lot about consequentialism vs. deontology lately, and where each of these is vs. isn’t appropriate. 
And I think saying “sexual assault is so bad that for the greater good we need to focus on supporting accusations around it, even when they’re false and will destroy people’s lives” is exactly the bad kind of consequentialism that never works in real life. The specific reason it never works in real life is that once you’re known for throwing the occasional victim under the bus for the greater good, everyone is terrified of associating with you. Perhaps I would feel differently if I knew of examples of the EA community publicly holding men accountable for harm to women. This is surprising to me; I know of several cases of people being banned from EA events for harm to women. When I’ve tried to give grants to people, I have gotten unexpected emails from EA higher-ups involved in a monitoring system, who told me that one of those people secretly had a history of harming women and that I should reconsider the grant on that basis. I have personally, at some physical risk to myself, forced a somewhat-resistant person to leave one of my events because they had a history of harm to women (this was Giego Caleiro; I think it’s valuable to name names in some of the most extreme clear-cut cases; I know most orgs have already banned him, and if your org hasn’t then I recommend they do too—email me and I can explain why). I know of some other cases where men caused less severe harm or discomfort to women; there were very long discussions by (mostly female members of) EA leadership about whether they should be allowed to continue in their roles, and after some kind of semi-formal proceeding, with the agreement of the victim, after an apology, it was decided that they should be allowed to continue in their roles, sometimes with extra supervision. There’s an entire EA Community Health Team with several employees and a mid-six-figure budget, and a substantial fraction of their job is holding men accountable for harm to women.
If none of this existed, maybe I’d feel differently. But right now my experience of EA is that they try really hard to prevent harm to women, so hard that the current disagreement isn’t whether to ban some man accused of harming women, but whether it was okay for me to mention that a false accusation was false. Again in honor of the original post saying we should be more open about our emotions: I’m sorry for bringing this up. I know everyone hates having to argue about these topics. Realistically I’m writing this because I’m triggered and doing it as a compulsion, and maybe you also wrote your post because you’re triggered and doing it as a compulsion, and maybe Maya wrote her post because she’s triggered and doing it as a compulsion. This is a terrible topic where a lot of people have been hurt and have strong feelings, and I don’t know how to avoid this kind of cycle where we all argue about horrible things in circles. But I am genuinely scared of living in a community where nobody can save good people from false accusations because some kind of mis-aimed concern about the greater good has created a culture of fear around ever speaking out. I have seen something like this happen to other communities I once loved and really don’t want it to happen here. I’m open to talking further by email if you want to continue this conversation in a way that would be awkward on a public forum. • Thank you, this is clarifying for me and I hope for others. Responses to me, including yours, have helped me update my thinking on how the EA community handles gendered violence. I wasn’t aware of these cases and am glad, and hope that other women seeing this might also feel more supported within EA knowing this. I realize there are obvious reasons why these things aren’t very public, but I hope that somehow we can make it clearer to women that Kathy’s case, and the community’s response, was an outlier.
I would still push back against the gender-reversal false equivalency that you and others have mentioned. EA doesn’t exist in a bubble. We live in a world where survivors, and in particular women, are not supported, not believed, and victim-blamed. Therefore I think it is pretty reasonable to have a prior that we should take accusations seriously and respond to them delicately. The Forum, if anywhere on earth, should be a place where we can have the nuanced understanding that (1) the accusations were false AND (2) because we live in a world where true accusations against powerful men are often disbelieved, causing avoidable harm to victims, we need to keep that context in mind while condemning said false accusations. So to clarify my stance: I don’t think it was wrong to mention that the false accusation is false. I think it seems dismissive and insensitive to do so without any acknowledgement of the rest of the post. I don’t think it would have hurt your point to say “yes, EA is a male-dominated culture and we need to take seriously the harms done to women in our community. In this specific instance, the accusations were false, and I don’t believe the community’s response to these accusations is representative of how we handle harm.” I think the disconnect here is that you are responding to / care about this specific claim, which you have close knowledge of. I know nothing about it, and am responding to / care about the larger claim about EA’s culture. I believe that Maya’s post is not trying to make truth claims about Kathy’s case and is more meant to point out a broad trend in EA culture, and I’m trying to encourage people to read it as such, and not let the wrongness of Kathy’s claims undermine Maya’s overall point.
(edit: basically I agree with your comment above: if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that.) • Thanks for your thoughtful response. I’m trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance of making people feel unwelcome, or inflicting an unpleasant politicized debate on people who don’t want to read it. This comment is a bad compromise between all these things and I apologize for it, but: I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think “everyone knows” (in Zvi’s sense of the term, where it’s such strong conventional wisdom that nobody ever checks if it’s true) that the typical response to rape accusations is to challenge and victim-blame survivors. And that although this may be true in some times and places, the typical response in this community is the one which, in fact, actually happened—immediate belief by anyone who didn’t know the situation, and a culture of fear preventing those who did know the situation from speaking out. I think it’s useful to acknowledge and push back against that culture of fear.
(this is also why I stressed the existence of the amazing Community Safety team—I think “everyone knows” that EA doesn’t do anything to hold men accountable for harm, whereas in fact it tries incredibly hard to do this and I’m super impressed by everyone involved) I acknowledge that makes it sound like we have opposing cultural goals—you want to increase the degree to which people feel comfortable pointing out that EA’s culture might be harmful to women, I want to increase the degree to which people feel comfortable pushing back against claims to that effect which aren’t true. I think there is some subtle complicated sense in which we might not actually have opposing cultural goals, but I agree that to a first-order approximation they sure do seem different. And I realize this is an annoyingly stereotypical situation - I, as a cis man, coming into a thread like this and saying I’m worried about false accusations and chilling effects. My only two defenses are, first, that I only got this way because of specific real and harmful false accusations, which I tried to do an extreme amount of homework on before calling them false, and which I only ever bring up in the context of defending my decision there. And second, that I hope I’m possible to work with and feel safe around, despite my cultural goals, because I want to have a firm deontological commitment to promoting true things and opposing false things, in a way that doesn’t refer to my broader cultural goals at any point. • Thanks, I realize this is a tricky thing to talk about publicly (certainly trickier for you, as someone whose name people actually know, than for me, who can say whatever I want!). I’m coming in with a stronger prior from “the outside world”, where I’ve seen multiple friends ignored/disbelieved/attacked for telling their stories of sexual violence, so maybe I need to better calibrate for the intra-EA-community response.
I agree/hope that our goals shouldn’t be at odds, and that’s what I was trying to say, though maybe it didn’t come across: I didn’t want people to come away from your comment thinking “ah, Maya’s wrong and people shouldn’t criticize EA culture.” I wanted them to come away both knowing the truth about this specific situation AND thinking more broadly about EA culture, because I think this post makes a lot of other very good points that don’t rely on the Kathy claims. (And thinking more broadly could include updating positively like I did, although I didn’t expect that would be the case when I made that comment!) You’re probably right that it’s not worth giving much more of a response, but I appreciate you engaging with this! • I’m not too confident about this, but one reason you may not have heard about men being held accountable in EA is that it’s not the sort of thing you necessarily publicize. For example, I helped a friend who was raped by a member of the AI safety research community. He blocked her on LessWrong, then posted a deceptive self-vindicating article mischaracterizing her and patting himself on the back. I told her what was going on and helped her post her response via my account once she’d crafted it. Downvotes ensued for the guy. Eventually he deleted the post. That’s one example of what (very partial) accountability looks like, but the end result in this case was a decrease in visibility for an anti-accountability post. And except for this thread, I’m not going around talking about my involvement in the situation. I don’t know how much of the imbalance this accounts for, nor am I claiming that everything is fine. It’s just something to keep in mind as one aspect of parsing the situation. • Thank you, yeah I think I may be overindexing on a few public examples (not being privy to the private examples that you and others in this thread have brought up).
Glad to hear that there are plenty of examples of the community responding well to protect victims/survivors. I still also don’t think everything’s fine, but I’m unsure to what extent EA is worse than the rest of the world, where things are also not fine on this front. • In cases like this that I’ve been most closely involved in, the women who have reported have not wanted to publicise the event, so sometimes action has been taken but you wouldn’t have heard about it. (I also don’t think it’s a good habit to try to maximise transparency about interpersonal relationships tbh.) • Yeah, this is very fair and I agree that transparency is not always the right call. To clarify, I’ll say that my stance here, medium confidence, is: (1) in instances in which the victim/survivor has already made their accusations public, or in which the harm is necessarily something that isn’t interpersonal [e.g. hotness ranking], the process of accountability or repair, or at least the fact that one exists, should be public; (2) it should be transparent what kind of process a victim can expect when harm happens. There’s some literature around procedural justice and trust that indicates that people feel better about and trust the outcomes of a process more when it is transparent and invites engagement, regardless of whether the actual outcome favors them or not. I am glad to hear that there have been cases where women have felt safe reporting and action has been taken! (edited to delete a para about CEA community health team’s work that I realized was wrong, after seeing this page linked below) • I agree; I’d favour systems that help people feel confident in the outcome even when it doesn’t favour them, and would like to see EA do better in these areas!
• Regardless of the accuracy of this comment, it makes me sad that the top comment on this post is adversarial/​argumentative and showing little emotional understanding/​empathy (particularly the line “getting called out in posts like this one”). I think it unfortunately demonstrates well the point the author made about EA having an emotions problem: On the forum in particular and in EA discourse in general, there is a tendency to give less weight/​be more critical of posts that are more emotion-heavy and less rational. This tendency makes sense based on EA principles… to a certain extent. To stay true to the aforementioned values of scientific mindset and openness, it makes sense that we challenge people’s ideas and are truth-seeking in our comments. However, there is an important distinction between interrogating someone’s research and interrogating someone’s lived experience. I fear that the attitude of truth-seeking and challenging one another to be better has led to an inclination to suspend compassion in the absence of substantial evidence of wrongdoing. You’re allowed to be sorry that someone experienced something without fully understanding it. • I very rarely engage in karma voting, and didn’t do so for this comment either. That said, one relevant point is that the comment with the most karma gets to sit at the top of the comments section. That means that many people probably vote with an intention to functionally “pin” a comment, and it may not be so much that they think the comment should represent the most important reaction to a post, as that they think it provides crucial context for readers. I think this comment does provide context on the part of this otherwise very good and important post that made me most uncomfortable as stated. I also agree that Alexander’s tone isn’t great, though I read it in almost the opposite way from you (as an emotional reaction in defense of his friends who came forward about Forth). 
• To be honest I’m relieved this is one of the top comments. I’ve seen Kathy mentioned a few times recently in a way I didn’t think was accurate and I didn’t feel able to respond. I think anyone who comes across her story will have questions and I’m glad someone’s addressed the questions even if it’s just in a limited way. • I’m glad you made your post about how Kathy’s accusations were false. I believe that was the right thing to do—certainly given the information you had available. But I wish you had left this sentence out, or written it more carefully: But they wouldn’t do that, I’m guessing because they were all terrified of getting called out in posts like this one. It was obvious to me reading this post that the author made a really serious effort to stay constructive. (Thanks for that, Maya!) It seems to me that we should recognize that, and you’re erasing an important distinction when you categorize the OP with imprudent tumblr call-out posts. If nothing else, no one is being called out by name here, and the author doesn’t link any of the tumblr posts and Reddit threads she refers to. I don’t think causing reputational harm to any individual was the author’s intent in writing this. Fear of unfair individual reputational harm from what’s written here seems a bit unjustified. • EDIT: After some time to cool down, I’ve removed that sentence from the comment, and somewhat edited this comment which was originally defending it. I do think the sentence was true. By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had, was because they were worried that someone would write a public essay saying “X doesn’t believe sexual assault victims” or “EA has a culture of doubting sexual assault victims”. 
And they all hoped someone else would go first to mention all the evidence that these particular rumors were untrue, so that that person could be the one to get flak over this for the rest of their life (which I have, so good prediction!), instead of them. I think there’s a culture of fear around these kinds of issues that it’s useful to bring to the foreground if we want to model them correctly. But I think you’re gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that. Although it’s possible that the value gained by saying this true thing is higher than the value lost by potential chilling effects, I don’t want to claim to have an opinion on this, because in fact I wrote that comment feeling pretty triggered and upset, without any effective value calculations at all. Given that it did get heavily upvoted, I can see a stronger argument for the chilling effect part and will edit it out. • Hi Scott, Thank you for both of your comments. I appreciate you explaining why you wrote a post about Kathy and I think it’s useful context for people to understand as they are thinking about these issues. My intention was not to call anybody out, rather, to point to a pattern of behavior that I observed and describe how it made me (and could make others) feel. • Thanks for removing the sentence. I’m sorry you’ve gotten flak. I don’t think you deserve it. I think you did the right thing, and the silence of other people “in the know” doesn’t reflect particularly well on them. (Not in the sense that we should call them out, but in the sense that they should maybe think about whether they knowingly let a likely-innocent person suffer unjust reputation harm.) 
I think there’s a culture of fear around these kinds of issues that it’s useful to bring to the foreground if we want to model them correctly. Agreed. I think the culture of fear goes in both directions. Women often seem to fear making accusations. But I think you’re gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bringing it up is positive expected value, so I shouldn’t sound critical in any way that discourages future people from doing things like that. Not what I was gesturing at, but potentially valid. My thinking is that attempts to share info “in good faith” should not be punished, regardless of whether that info pushes towards condemnation vs exoneration. (We can debate what exactly counts as “good faith”, but I think it should be defined ~symmetrically for both types of info. I’d like more discussion of what constitutes “good faith”, and fewer implications that [call-outs/​denials] are always bad. I’m open to super restrictive definitions of “good faith”, like “only share info with CEA’s community health team and trust them to take appropriate action” or similar.) In any case, my main goal was to get you to reciprocate what I saw as the OP’s attempt to be less triggered/​more constructive, so thanks for that. • I did not know Kathy well, but I did meet and talk with her at length on a number of occasions in EA/​aligned spaces. We talked about cultural issues in the movement and for what it is worth, she came across as someone of good character, good judgement and measured takes. I am not across the particulars of her accusations and I feel matters like this have a place, actual courts and not forums. I don’t think cherry picked criticisms of her claims are appropriate. 
I think EA will continue to stumble on this issue, and our downfall as a movement will continue to be handling deontologically or virtuously abhorrent behaviour. I think the author of this forum post has made points of great importance. In particular, their critique of the style of writing required to be taken seriously and understood in the manner intended is novel. • While this is important (clarifying of misinformation), I want to mention that I don’t think this takes away from the main message of the post. I think it’s important to remember that even with a culture of rationality, there are times when we won’t have enough information to say what happened (unlike in Scott’s case), and for that reason Maya’s post is very relevant and I am glad it was shared. It also doesn’t seem appropriate to describe this post as “calling out”. While it’s legitimate to fear reputations being damaged with unsubstantiated claims, this post doesn’t strike me as doing such. • I want to strong agree with this post, but a forum glitch is preventing me from doing so, so mentally add +x agreement karma to the tally. [Edit: fixed and upvoted now] I have also heard from at least one very credible source that at least one of Kathy’s accusations had been professionally investigated and found to be without any merit. Maybe also worth adding that the way she wrote the post would in a healthy person be intentionally misleading, and was at least incredibly careless for the strength of accusation. E.g. there was some line to the effect of ‘CFAR are involved in child abuse’, where the claim was link-highlighted in a way that strongly suggested corroborating evidence but, as in that paraphrase, the link in fact just went directly to whatever the equivalent website was then for CFAR’s summer camp. It’s uncomfortable berating the dead, but much more important to preserve the living from incredibly irresponsible aspersions like this.
• It would be nice to imagine that aspiring to be a rational, moral community makes us one, but it’s just not so. All the problems in the culture at large will be manifest in EA, with our own virtues and our own flaws relative to baseline. And that’s not to mitigate: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can get a lot of money and social clout and use it to survive even after their misbehavior comes to light. I don’t know how to deal with it except to address specific issues as they come to light. I guess I would just say that you are not alone in your concern for these issues, and that others do take significant action to address them. I support what I think of as a sort of “safety culture” for relationships, sexuality, race, and culture in the EA movement, which to me means promoting an openness to the issues, a culture of taking them seriously, and taking real steps to address them when they come up. So I see your post as beneficial in promoting that safety culture. • Hey AllAmericanBreakfast. I’m Catherine from the Community Health team. I’m so so sorry to hear that your friend was raped. If at all possible, I want to make sure they have support, justice, and that the perpetrator doesn’t have the opportunity to do this again. It doesn’t matter if your friend doesn’t identify as EA; if your friend or the perpetrator is involved in the EA community in any way, we’re here to do our best to help. I’ll reach out via PM. • 2 Dec 2022 18:06 UTC. Hey :) I was raped before I was involved in EA. I normally find these discussions hard and frustrating. I feel we often talk past one another and that the people with similar experiences withdraw because it’s still painful or they get frustrated and hurt. I would like people like me to know: 1. There are a lot of people who have similar experiences to me who are active in the EA community.
You may not see them here because of the aforementioned issue, but we are here. 2. There are a lot of people who take these issues very seriously, including me. 3. I trust and endorse Catherine Low entirely. She has seen it all with me and has been kind, empathetic, and not unilateral. 4. To the extent possible, please consider reporting either to Catherine / the community health team, or the police, or both. Kirsten is entirely right, this is horrifically unfair and you have no obligation to do so, but it is very important that people with a track record of sexual (any) violence not be in positions of power in any institutions or communities, for the safety of other community members. 5. If there is anything whatsoever I can do, including talking openly about my experiences (I actually have a blog draft about how I coped with my rape, which I am happy to share), an adamant vouch for Catherine and CEA’s team, or just generally a cup of tea, you should hit me up. • And that’s not to mitigate: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can get a lot of money and social clout and use it to survive even after their misbehavior comes to light. I’m very sad to hear about this. I don’t understand why the community health team is not able to handle this kind of thing. Did your friend make a report? Does the community health team need more funding or employees? Are they afraid to take on people with clout? Even if the accused is doing a lot of good work, if the accusation is found to be credible, at the very least we should ensure the accused does not occupy a position of responsibility. If they are serious about AI safety, they should agree to this measure themselves, for the sake of guarding humanity’s future. EAs should work to ensure that positions of responsibility are occupied by people of exemplary moral character, in my view.
(Edit for clarification: I don’t want my view rounded off to “EAs should work to ensure positions of responsibility are occupied by the people who are hardest to cancel”. For example, my notion of “exemplary moral character” accounts for the possibility that failure to report on false accusations made by Kathy Forth could represent a character deficit, even if such failure-to-report makes one harder to cancel. I also think that everyone is flawed, and ability to recognize and learn from one’s mistakes is really important.) • My friend is not part of EA, she was just at an EA-adjacent organization, where the community health team does not have reach AFAIK. • Seems to me she should be talking to them anyway. • I am confident this comes from a good place but I really really dislike that this comment is telling (the friend of) someone who was raped what she should do. People who have been raped can respond however they want, whether they decide to report the situation or not is entirely up to them, and I hate when people act like there is one correct response. • Thanks Kirsten. I’m interested in understanding your position better. Do you agree there are circumstances under which reporting a crime is the correct response? (Would you agree that an FTX employee blowing the whistle on SBF would be the correct response, for example?) If you can think of at least one scenario where you think reporting a crime is the correct response, maybe you could outline how this scenario differs? (For the purpose of our discussion, I’m assuming that the current crime is serious, unambiguous, and unrepented, constituting significant evidence that the perpetrator will cause major harm to others.) My first guess is you think there’s something unique about rape such that the associated trauma means reporting can cause suffering. 
In that case, this would appear to be a straightforward demandingness dilemma—one’s feeling about the statement “it is correct to report rape” might be similar to one’s feeling about the statement “it is correct to forgo luxuries to donate to effective charities”. In both cases you’re looking at taking on discomfort yourself in order to do good for others. (In my mind the key considerations for demandingness dilemmas are: how much good you’re doing for others, how much discomfort you’re taking on, and what is personally psychologically sustainable for you. And I think saying “Seems to me they should [do the demanding thing]” is generally OK.) Thanks for any thoughts you’re willing to share. • Hi Truck Driver Wannabe, I really appreciate your effort to understand the other side of the argument and I see why you are confused about the reaction. For me, I find the idea that a person has any responsibility whatsoever to involve the cea community health team in any matter regarding their personal life (including and especially sexual assault) baffling. Reporting to CEA is not obviously net harm reducing, because a predator who is kicked out of CEA sponsored events can and will just move to another community and continue their predatory behavior elsewhere. And that is assuming that CEA handles the situation perfectly. I also don’t think a person has such a responsibility to report to law enforcement, only partly because law enforcement has generally not earned a reputation for handling these cases well. If we lived in a different world where law enforcement was more competent in these cases, then I agree this would be a straightforward demandingness dilemma. However, I don’t expect anyone to be publicly retraumatized in the service of helping strangers and I think it is extremely unfair to do so. 
Being publicly humiliated, mocked, disbelieved, called names, concern trolled, having every past sexual and romantic encounter up for public scrutiny, and being forced to publicly and repeatedly detail the most horrifying moments of your life is not even almost the same as, say, donating ten percent of your income. All or many of these things often happen to people who report sexual assault to a responsible and thorough law enforcement agency that does all the right things and has ample resources. In general I don’t think it’s that healthy to expect others to give a certain amount of their time or money or anything else. I think we should all set an example in our own lives and be public about why we make the choices we do, but respect that others have the right to choose what and how much they give (emotionally and otherwise). But even if I didn’t believe that in general, I would still believe it in the case of sexual assault. • Hi Monica, thanks for the reply. Suppose my original comment was “Seems to me people should donate to $EA_CHARITY.”

And I got these replies:

• “I find the idea that a person has any responsibility whatsoever to donate to $EA_CHARITY baffling.”

• “Donating to $EA_CHARITY is not obviously net harm-reducing. Their work may funge against other efforts. And even if they do perfect work, solving poverty in the developing world still leaves developed-world poverty as a major problem.”

• “The person reading your comment could be almost broke, such that if they donate to $EA_CHARITY they would be homeless and destitute. It is unreasonable for us to ask anyone to make that sacrifice.”

• “Other charities which claim to solve the problem $EA_CHARITY works on have been found to be scams. Don’t be surprised if they sell your credit card details to cybercriminals.”

• “People have the right to choose how much they give.”

These are all valid replies I agree with partially or fully.

But they all seem to operate under the assumption that I hold a much different position than the one I actually hold. I’m not totally sure what I did to give people the mistaken impression.

Maybe I just need to learn to avoid triggering people.

In any case, I think you and I agree more than we disagree.

• Hey Truck Driver Wannabe (great Forum name by the way) - I’m a medical doctor and have recently completed extra training in helping people who’ve experienced sexual assault. There are no ‘shoulds’ (except that the perpetrator should not have done it). I can’t do this topic justice in a Forum commentary (nor would I want to) but if you’d like to contact me directly, I’m happy to talk to you more about this.

• I agree with the overall reasoning for why we need inflation hedges.

I also agree that US debt poses a risk, as all debts do, but I would view this risk slightly differently, using a historical rather than budgetary lens:

• to a large degree, the privilege of the dollar is due to the US being the largest economy, the major world power and a stable government. That makes it “too big to fail”

• eventually, the US will not be the largest economy / major world power (looking at history, you have to bet strongly against permanent supremacy). At that point, it will have to come back to Earth and make much more constrained choices, like all the other countries have to

• this course of events is unlikely to come about because of some Fed decision about the timing of interest rates, or because some politician in the 2020s refused to cut back entitlements by a small percentage. It will happen because some other country starts to be seen as the safer option. Prolonged GDP decline, a major military defeat, and/or government overthrow could all help precipitate such an outcome.

The reason this matters is that typical US debt hawks will advocate, as a main solution, reduced spending on long-term infrastructure, military technology and so on.

However, if you’re concerned with debt, you should be more concerned about GDP (what you get out of the dollar) than about government spending (what you put into it, so to speak); more about military power than military leanness; and more about government stability than government drama (including periodic debt-ceiling freakouts). A strong government backed by a strong share of the world economy—that is what investors see in the US dollar. The minute they stop seeing that, the party is over and hard choices will need to be made. (Remember how the markets reacted to Liz Truss’s budget a few months ago? That is what a former empire making economic decisions looks like.)

So if you keep GDP running, prevent China from overturning the world order, and avoid obvious own-goals such as January 6-style craziness, the US should be fine for a while longer.

Now, is this US-favourable outcome the most beneficial to the world? I don’t know—it may be better to shift the equilibrium at some point (though on balance, I would tend to say, probably not now). But if it is the outcome you want, those would be the key items to safeguard.

• Agree with your post!

• [ ]
[deleted]
• Teachers are the core of any education system. Their work in the classroom and the relationships they forge with their students impacts the future of society. Supporting the education of young people though does not fall solely on teachers’ shoulders. The resources of a wider community can also make an impact. A school outreach program builds partnerships between educational institutions and sponsoring organizations to open up new pathways for growth and success. It paves a better road to the future for students, sponsors, and their communities.

• Hi, thank you for your post, and I’m sorry to hear about your (and others’) bad experience in EA. However, I think if your experience in EA has mostly been in the Bay Area, you might have an unrepresentative perspective on EA as a whole. Most of the worst incidents of the type you mention that I’ve heard about in EA have taken place in the Bay Area; I’m not sure why.

I’ve mostly been involved in the Western European and Spanish-speaking EA communities, and as far as I know there have been far fewer incidents here. Of course, this might just be because these communities are smaller, or I might just not have heard of incidents which have taken place. Maybe it’s my perspective that’s unrepresentative.

In any case, if you haven’t tried it yet, consider spending more time in other EA communities.

• Your comment (at least how it reads, which may be different from your intentions) comes across as “that’s a particularly problematic location, just go to a different one”.

That doesn’t solve the problem. That doesn’t hold the Bay* or any community accountable or push for change in a positive direction. I think that sort of logic is a common response to what Maya writes about and doesn’t help or make anything better.

*and this is coming from an ex-Berkeley community builder

• What is it about the Bay area that makes these issues more prevalent or severe, if they are? Seems worth finding out if we want to push for change in a positive direction.

• My thesis here revolves around the overlap between tech and EA culture and how this shapes the demographics. We should expect higher rates of youth, whiteness, maleness, and willingness to move for high pay in the Bay Area because of the influx of people moving for tech jobs in the past 10 years. There could also be some kind of weird sexual competition exacerbated by scarcity.

Here are some other unusual things about the Bay Area which may contribute to the “vibes” mentioned:

• Founder effects: Bay Area EA organizations tend to be more focused on AI and therefore look to hire tech-types, growing the presence of people who fit this demographic (these orgs also could have been founded in the Bay because of these demographics; it’s unclear to me which came first)

• Extremely high wealth inequality and the correlation of wealth with other things EAs select for (e.g. educational attainment) likely means EA in the Bay selects much harder for wealth than in other places

• Racism has a profound influence on US society. In my experience, people who are unfamiliar with both the history and modern-day effects of race in America (or are from more homogeneous countries) are worse at creating welcoming spaces and seem to underappreciate the value of creating diverse groups

• There is a high prevalence and acceptance of hookup culture and casual sex

• The US is one of the most individualistic cultures in the world according to cultural psychology measures

Overall, the Bay Area is very unlike the rest of the world on most demographic criteria, and it’s plausible that different outreach strategies are needed there in order to find driven and altruistic people with a diversity of ideas and approaches to doing good.

• My guess is that it’s because the bay area has a lot of professional power entangled in it such that power dynamics emerge much more easily in the bay than elsewhere.

• I agree that would be an unhelpful takeaway from this post/these experiences.

I have only been to the Bay Area once, and I felt a culture shock from the degree of materialism and individualism that I experienced in the community. On one occasion, I tried to call it out publicly and got rebuffed by a group.

However, I do think it’s unfair that the Bay Area is presented as representative of the wider EA movement, in a way that, for example, EA Berlin wouldn’t be.

• I haven’t really spent time with the community there, so I’m curious about the individualist & materialist point. Could you expand on that a bit more?

• 2 Dec 2022 6:27 UTC
21 points
3 ∶ 1

I like your recommendations, and I wish that they were norms in EA. A couple questions:

(1) Two of your recommendations focus on asking EAs to do a better job of holding bad actors accountable. Succeeding at holding others accountable takes both emotional intelligence and courage. Some EAs might want to hold bad actors accountable, but fail to recognize bad behavior. Other EAs might want to hold bad actors accountable but freeze in the moment, whether due to stress, uncertainty about how to take action, or fear of consequences. There’s a military saying that goes something like: “Under pressure, you don’t rise to the occasion, you sink to the level of your training.” Would it increase the rate at which EAs hold each other accountable for bad behavior if EAs were “trained” in what bad behavior looks like in the EA community and in scripts or procedures for how to respond, or do you think that approach would not be a fit here?

(2) How would you phrase your recommendations if they were specifically directed to EA leadership rather than to the community at large?

• These are both very important questions. For (1), I think it depends on the circumstance, in all honesty. For example, the same way that volunteers are often trained before EAGs and EAGxs, I could see participants receiving something (as part of the behavior guidelines) outlining scenarios and describing why they were an example of inappropriate or appropriate behavior. However, I think it would be extremely difficult to “train” all members of the EA community, as people are involved in many different capacities.

For (2), I think that, despite all situations involving interpersonal harm and conflict being unique and complex, it could be useful to have more transparency in some areas. I don’t mean naming specific individuals and discussing all the details of each case; I mean something more like “X action is unacceptable and will result in Y consequence if found to be true”.

Another note—my suggestions were aimed at EA community members because I truly believe that, often, people simply do not understand how their actions/words make others feel. I hope that by raising awareness of this, people will be motivated to change themselves without necessitating external conflict (although I understand that’s not always the case).

• What is the license problem that you foresee, Elliot?

What specifics concern you? I haven’t thought about it carefully, what perspective am I missing?

• What is the license problem that you foresee, Elliot?

What specifics concern you? I haven’t thought about it carefully, what perspective am I missing?

The license gives anyone the right to e.g. put my posts in a book and sell them without my consent. It lets them do all kinds of stuff with my work. I think my work is valuable and I want to retain my IP and copyright rights to it. I think the prior, default system was good: fair use and quotations, plus asking for permission for other stuff.

BTW, I’ve been plagiarized multiple times: I’ve had multiple people put my ideas in their published commercial books without consent or even notifying me (including some copyright violations), in some cases mangling my ideas so badly that I wouldn’t want to be associated with their version, so simply giving credit doesn’t fix the problem for me. Talking about someone’s ideas and quoting and paraphrasing them fairly and reasonably takes some skill that many people lack. One person offered to credit me as a co-author of his book when I found out he’d put a ton of my ideas in it. I declined because I would not want authorship of his low-quality writing and reasoning, plus I was not involved in authoring the book at all. I don’t want him to plagiarize me, and I also don’t want him to incompetently summarize my ideas and then credit me, let alone say I endorse it… CC BY would make all this stuff worse, not better.

But mostly I just want to retain my property rights for my ideas, work, research, writing, etc. I think giving most of my ideas and writing away as free to read is more than generous enough.

My plan is to quit using the EA forum, though I’ll write a few things without important philosophy in them, like this one, rather than quitting abruptly. I will continue posting articles at https://criticalfallibilism.com and https://curi.us, plus I’m actively using my forum and two YouTube channels. I have ~30,000 words of EA-related draft articles which I’ll no longer be able to use as planned. I’ll probably try to quickly post a fair amount of that at curi.us with only light editing.

BTW, when reviewing EA’s terms of use yesterday I found other problems, e.g. a prohibition on posting anything “untrue”.

EDIT: I should also mention that I don’t want anyone translating my writing without consent because translations can easily be inaccurate and misleading, and essentially be like misquoting me. Translations basically come with an implication that I endorse what they say because it’s allegedly just my own words. I’ve had an issue with this in the past too, and if I ever get more popular all this stuff will come up more including with my archives.

• Huh, very interesting, although it doesn’t seem that the license terms stopped all that from happening to you.

BTW, it looks like I can’t indicate agreement or disagreement with your post? Is that a setting you have set?

• Yes, the current default (US) copyright/IP system is far from perfect.

I’m not aware of setting a setting and both voting things are showing up for me on my own post just like on yours (including with a private browsing window).

• 2 Dec 2022 5:59 UTC
0 points
0 ∶ 0

Tl;dr. Sounds like you’re criticizing some views/​approaches, perhaps rightly so. Do you have an alternative approach you suggest in place of those that you criticize?

• 2 Dec 2022 5:46 UTC
32 points
11 ∶ 3

Thank you for sharing such a brave, thoughtful and balanced post.

• 2 Dec 2022 5:08 UTC
24 points
3 ∶ 0

Thank you for posting this. I was so sad to see the recent post you linked to be removed by its author from the forum, and as depressing as the subject matter of your post is, it cheers me up that someone else is eloquently and forcefully speaking up. Your voice and experience are important to EA’s success, and I hope that you will keep talking and pushing for change.

• 2 Dec 2022 4:54 UTC
114 points
13 ∶ 0

Hi hi :) Are you involved in the Magnify Mentoring community at all? I’ve been poorly for the last couple of weeks so I’m a bit behind but I founded and run MM. Personally, I’d also love to chat :) Feel free to reach out anytime. Super Warmly, Kathryn

• We didn’t run a draft of this post by DM or Anthropic (or OpenAI), so this information may be mistaken or out-of-date. My hope is that we’re completely wrong!

Why not run a draft of the post by them? Not sure what you had to lose there and seems like it could’ve been better (both from a politeness/​cooperativeness perspective and from a tactical perspective) to have done so.

• If folks at DM/​Anthropic/​OpenAI ask us to run this kind of thing by them in advance, I assume we’ll be happy to do so; we’ve sent them many other drafts of things before, and I expect we’ll send them many more in the future.

I do like the idea of MIRI staff regularly or semi-regularly sharing our thoughts about things without running them by a bunch of people—e.g., to encourage more of the conversation, pushback, etc. to happen in public, so information doesn’t end up all bottled up in a few brains on a private email thread.

I think there are many cases where it’s actively better for EAs to screw up in public and be corrected in the comments, rather than working out all disagreements and info-asymmetries in private channels and then putting out an immaculate, smoothed-over final product. (Especially if the post is transparent about this, so we have more-polished and less-polished stuff and it’s pretty clear which is which.)

Screwing up in public has real costs (relative to the original essay Just Being Correct about everything), but hiding all the cognitive work that goes into consensus-building and airing of disagreements has real costs too.

This is not me coming out against running drafts by people in general; it’s great tech, and we should use it. I just think there are subtle advantages to “just say what’s on your mind and have a back-and-forth with people who disagree” that are worth keeping in view too.

Part of it is a certain attitude that I want to encourage more in EA, that I’m not sure how to put into words, but is something like: tip-toeing less; blurting more; being bolder, and proactively doing things-that-seem-good-to-you-personally rather than waiting for elite permission/​encouragement/​management; trying less to look perfect, and more to do the epistemically cooperative thing “wear your exact strengths and weaknesses on your sleeve so others can model you well”; etc.

All of that is compatible with running drafts by folks, but I think it can be valuable for more EAs to visibly be more relaxed (on the current margin) about stuff like draft-sharing, to contribute to a social environment where people feel chiller about making public mistakes, stating their current impressions and updating them in real time, etc. I don’t think we want maximum chillness, but I think we want EA’s best and brightest to be more chill on the current margin.

• I don’t think this makes sense. Your group, in the EA community, regarding AI safety, gets taken seriously whatever you write. This is not the paradigmatic example of someone who feels worried about making public mistakes. A community that gives you even more leeway to do sloppy work is not one that encourages more people to share their independent thoughts about the problem. In fact, I think the reverse is true: when your criticisms carry a lot of weight even when they’re flawed, this has a stifling effect on people in more marginal positions who disagree with you.

If you want to promote more open discussion, your time would be far better spent seeking out flawed but promising work by lesser known individuals and pointing out what you think is valuable in it.

Am I correct in my belief that you are paid to do this work? If this is so, then I think the fact that you are both highly regarded and compensated for your time means your output should meet higher standards than a typical community post. Contacting the relevant labs is a step that wouldn’t take you much time, can’t be done by the vast majority of readers, and has a decent chance of adding substantial value. I think you should have done it.

• This approach to reasoning assumes authorities are valid. Do not trust organizations this way. It is one of effective altruism’s key failings. How can we increase pro-social distrust in effective altruism so that authorities are not trusted?

• What sort of substantial value would you expect to be added? It sounds like we either have a different belief about the value-add, or a different belief about the costs. Maybe if you sketched 2-3 scenarios that strike you as a relatively likely way for this particular post to have benefited from private conversations, I’d know better what the shape of our disagreement is.

If your objection is less “this particular post would benefit” and more “every post that discusses an AGI org should run a draft by that org (at least if you’re doing EA work full-time)”, then I’d respond that stuff like “EAs candidly arguing about things back and forth in the comments of a post”, the 80K Podcast, and unredacted EA chat logs are extremely valuable contributions to EA discourse, and I think we should do far, far more things like that on the current margin.

Writing full blog posts that are likewise “real” and likewise “part of a genuine public dialogue” can be valuable in much the same way; and some candid thoughts are a better fit for this format than for other formats, since some candid thoughts are more complicated, etc.

It’s also important that intellectual progress like “long unedited chat logs” gets distilled and turned into relatively short, polished, and stable summaries; and it’s also important that people feel free to talk in private. But having some big chunks of the intellectual process be out in public is excellent for a variety of reasons. Indeed, I’d say that there’s more value overall in seeing EAs’ actual cognitive processes than in seeing EAs’ ultimate conclusions, when it comes to the domains that are most uncertain and disagreement-heavy (which include a lot of the most important domains for EAs to focus on today, in my view).

This is not the paradigmatic example of someone who feels worried about making public mistakes. A community that gives you even more leeway to do sloppy work is not one that encourages more people to share their independent thoughts about the problem.

I don’t think that sharing in-process snapshots of your views is “sloppy”, in the sense of representing worse epistemic standards than a not-in-process Finished Product.

E.g., I wouldn’t say that a conversation on the 80K Podcast is more epistemically sloppy than a summary of people’s take-aways from the conversation. I think the opposite is often true, and people’s in-process conversations often reflect higher epistemic standards than their attempts to summarize and distill everything after-the-fact.

In EA, being good at in-process, uncertain, changing, under-debate reasoning is more the thing I want to lead by example on. I think that hiding process is often setting a bad example for EAs, and making it harder for them to figure out what’s true.

I agree that I’m not a paradigmatic example of the EAs who most need to hear this lesson; but I think non-established EAs heavily follow the example set by established EAs, so I want to set an example that’s closer to what I actually want to see more of.

In fact, I think the reverse is true: when your criticisms carry a lot of weight even when they’re flawed, this has a stifling effect on people in more marginal positions who disagree with you.

If my reasoning process is actually flawed, then I want other EAs to be aware of that, so they can have an accurate model of how much weight to put on my views.

If established EAs in general have such flawed reasoning processes (or such false beliefs) that rank-and-file EAs would be outraged and give up on the EA community if they knew this fact, then we should want to outrage rank-and-file EAs, in the hope that they’ll start something else that’s new and better. EA shouldn’t pretend to be better than it is; this causes way too many dysfunctions, even given that we’re unusually good in a lot of ways.

(But possibly we agree about all that, and the crux here is just that you think sharing rougher or more uncertain thoughts is an epistemically bad practice, and I think it’s an epistemically good practice. So you see yourself as calling for higher standards, and I see you as calling for standards that are actually lower but happen to look more respectable.)

If you want to promote more open discussion, your time would be far better spent seeking out flawed but promising work by lesser known individuals and pointing out what you think is valuable in it.

That seems like a great idea to me too! I’d advocate for doing this along with the things I proposed above.

Contacting the relevant labs is a step that wouldn’t take you much time, can’t be done by the vast majority of readers

Is that actually true? Seems maybe true, but I also wouldn’t be surprised if >50% of regular EA Forum commenters can get substantive replies pretty regularly from knowledgeable DeepMind, OpenAI, and Anthropic staff, if they try sending a few emails.

• What sort of substantial value would you expect to be added? It sounds like we either have a different belief about the value-add, or a different belief about the costs.

I’d be very surprised if the actual amount of big-picture strategic thinking at either organisation was “very little”. I’d be less surprised if they didn’t have a consensus view about big-picture strategy, or a clearly written document spelling it out. If I’m right, I think the current content is misleading-ish. If I’m wrong and actually little thinking has been done—there’s some chance they say “we’re focused on identifying and tackling near-term problems”, which would be interesting to me given what I currently believe. If I’m wrong and something clear has been written, then making this visible (or pointing out its existence) would also be a useful update for me.

Polished vs sloppy

Here are some dimensions I think of as distinguishing sloppy from polished:

• Vague hunches <-> precise theories

• First impressions <-> thorough search for evidence/​prior work

• Hard <-> easy to understand

• Vulgar <-> polite

• Unclear <-> clear account of robustness, pitfalls and so forth

All else equal, I don’t think the left side is epistemically superior. It can be faster, and that might be worth it, but there are obvious epistemic costs to relying on vague hunches, first impressions, failures of communication and overlooked pitfalls (politeness is perhaps neutral here). I think these costs are particularly high in, as you say, domains that are uncertain and disagreement-heavy.

I think it is sloppy to stay too close to the left if you think the issue is important and you have time to address it properly. You have to manage your time, but I don’t think there are additional reasons to promote sloppy work.

You say that there are epistemic advantages to exposing thought processes, and you give the example of dialogues. I agree there are pedagogical advantages to exposing thought processes, but exposing thoughts clearly also requires polish, and I don’t think pedagogy is a high priority most of the time. I’d be way more excited to see more theory from MIRI than more dialogues.

If my reasoning process is actually flawed, then I want other EAs to be aware of that, so they can have an accurate model of how much weight to put on my views.

I don’t think it’s realistic to expect Lightcone forums to do serious reviews of difficult work. That takes a lot of individual time and dedication; maybe you occasionally get lucky, but you should mostly expect not to.

I agree that I’m not a paradigmatic example of the EAs who most need to hear this lesson [of exposing the thought process]; but I think non-established EAs heavily follow the example set by established EAs, so I want to set an example that’s closer to what I actually want to see more of

Maybe I’ll get into this more deeply one day, but I just don’t think sharing your thoughts freely is a particularly effective way to encourage other people to share theirs. I think you’ve been pretty successful at getting the “don’t worry about being polite to OpenAI” message across, less so the higher level stuff.

• I agree with a lot of what you say! I still want to move EA in the direction of “people just say what’s on their mind on the EA Forum, without trying to dot every i and cross every t; and then others say what’s on their mind in response; and we have an actual back-and-forth that isn’t carefully choreographed or extremely polished, but is more like a real conversation between peers at an academic conference”.

(Another way to achieve many of the same goals is to encourage more EAs who disagree with each other to regularly talk to each other in private, where candor is easier. But this scales a lot more poorly, so it would be nice if some real conversation were happening in public.)

A lot of my micro-decisions in making posts like this are connected to my model of “what kind of culture and norms are likely to result in EA solving the alignment problem (or making a lot of progress)?”, since I think that’s the likeliest way that EA could make a big positive difference for the future. In that context, I think building conversations about heavily polished, “final” (rather than in-process) cognition, tends to be insufficient for fast and reliable intellectual progress:

• Highly polished content tends to obscure the real reasons and causes behind people’s views, in favor of reasons that are more legible, respectable, impressive, etc. (See Beware defensibility.)

• AGI alignment is a pre-paradigmatic proto-field where making good decisions will probably depend heavily on people having good technical intuitions, intuiting patterns before they know how to verbalize those patterns, and generally becoming adept at noticing what their gut says about a topic and putting their gut in contact with useful feedback loops so it can update and learn.

• In that context, I’m pretty worried about an EA where everyone is hyper-cautious about saying anything that sounds subjective, “feelings-ish”, hard-to-immediately-transmit-to-others, etc. That might work if EA’s path to improving the world is via donating more money to AMF or developing better vaccine tech, but it doesn’t fly if making (and fostering) conceptual progress on AI alignment is the path to impact.

• Ideally, it shouldn’t merely be the case that EA technically allows people to candidly blurt out their imperfect, in-process thoughts about things. Rather, EA as a whole should be organized around making this the expected and default culture (at least to the degree that EAs agree with me about AI being a top priority), and this should be reflected in a thousand small ways in how we structure our conversation. Normal EA Forum conversations should look more like casual exchanges between peers at an academic conference, and less like polished academic papers (because polished academic papers are too inefficient a vehicle for making early-stage conceptual progress).

• I think this is not only true for making direct AGI alignment progress, but is also true for converging on key macrostrategy questions (hard vs. soft takeoff; overall difficulty of the alignment problem; probability of a sharp left turn; impressiveness of GPT-3; etc.). Insofar as we haven’t already converged a lot on these questions, I think a major bottleneck is that we’ve tried too much to make our reasoning sound academic-paper-ish before it’s really in that format, with the result that we confuse ourselves about our real cruxes, and people end up updating a lot less than they would in a normal back-and-forth.

• Highly polished, heavily privately reviewed and edited content tends to reflect the beliefs of larger groups, rather than the beliefs of a specific individual.

• This often results in deference cascades, double-counting evidence, and herding: everyone is trying (to some degree) to bend their statements in the direction of what everyone else thinks. I think it also often creates “phantom updates” in EA, where there’s a common belief that X is widely believed, but the belief is wrong to some degree (at least until everyone updates their outside views because they think other EAs believe X).

• It also has various directly distortionary effects (e.g., a belief might seem straightforwardly true to all the individuals at an org, but doesn’t feel like “the kind of thing” an organization writ large should endorse).

In principle, it’s not impossible to push EA in those directions while also passing drafts a lot more in private. But I hope it’s clearer why that doesn’t seem like the top priority to me (and why it could be at least somewhat counter-productive) given that I’m working with this picture of our situation.

I’m happy to heavily signal-boost replies from DM and Anthropic staff (including by editing the OP), especially if they show that MIRI was just flatly wrong about the extent to which those orgs already have a plan. And I endorse people docking MIRI points insofar as we predicted wrongly here; I’d prefer the world where people knew our first-order impressions of where the field’s at in this case, and were able to dock us some points if we turn out to be wrong, over the world where everything happens in private.

(I think I still haven’t communicated fully why I disagree here, but hopefully the pieces I have been able to articulate are useful on their own.)

• I would be curious to hear pushback from the people who disagree-voted this!

• From a cooperativeness perspective, people probably should not unilaterally create for-profit AGI companies.

(Note: Anthropic is a for-profit company that raised $704M according to Crunchbase, and is looking for engineers who want to build “large scale ML systems”, but I wouldn’t call them an “AGI company”.)

• Well, I wouldn’t say that MIRI decided not to send drafts to DM etc. out of revenge, to punish them for making a strategic decision that seems extremely bad to me. What I’d say is that the norm ‘savvy people freely talk about mistakes they think AGI orgs are making, without a bunch of friction’ tends to save the world more often than the norm ‘savvy people are unusually cautious about criticizing AGI orgs’ does. Indeed, I’d say this regardless of whether it was a good idea for someone to found the relevant AGI orgs in the first place. (I think it was a bad idea to create DM and to create OpenAI, but I don’t think it’s always a bad idea to make an AGI org, since that would be tantamount to saying that humanity should never build AGI.) And we aren’t forced to follow the more world-destroying norm just because we think other people expect us to follow it; we can notice the problem and act to try to fix it, rather than contributing to a norm that isn’t good. The pool of people who need to deliberately select the more-reasonable norm is not actually that large; it’s a smallish professional network, not a giant slice of society.

• I continue to object to a norm of running posts by the organizations those posts are talking about. From many interviews with posters to LW and the EA Forum over the years, I know that the chilling effects would be massive, and this norm has already prevented important things from being said multiple times, because it doubled or tripled the cost of publishing things that talk about organizations.

• Yeah, I agree with this too. I don’t think MIRI staff are scared to poke DM about things, but I like taking opportunities to make it clear “it’s OK to talk about MIRI, DM, etc.
without checking in with us privately first”, because I expect that a lot of people with good thoughts and questions will get stuck on scenarios like ‘intimidated by the idea of shooting MIRI an email’, ‘doesn’t know who to contact at MIRI’, ‘doesn’t want to deal with the hassle of an email back-and-forth’, etc. I think it’s good to have ‘send drafts to the org in advance’ as an option that feels available to you. I just don’t want it to feel like a requirement. (It also seems fine to me to send posts about MIRI to me after posting them. This makes it less likely that I just don’t notice the post exists, and gives me a chance to respond while the post is fresh and people are paying attention to it, while reducing the risk that good thoughts just never get posted.)

• Posted earlier here.

• I don’t feel personally affected by this change, but it seems to matter a lot to other people. For example:

• If there is any chance you’ll publish your work in an academic journal, or will otherwise be publishing it somewhere else in the future: not all publishers will care about this license, but it might be prudent to be cautious, because some definitely will.

• If you do not want podcasts or translations of your work to be made without your consent (e.g. maybe you’d want to work with any translators to ensure accuracy — you now won’t be able to control this).

• If you do not want your work potentially to be used for commercial purposes.

• If your work is done under contract, and not funded by a grant, you’ll need to review the contract terms, and depending on those terms you may need to get consent from the funder prior to posting.

• The CC BY 4.0 license is irrevocable, so if you think there is a chance of your mind changing with regard to one of the above, you could be screwed. I think people posting deserve to make an informed choice.
I’m happy to see that the new posting flow includes a checkbox, “Before you can publish this post you must agree to the terms of use including your content being available under a CC-BY license”; this satisfies me. But it’s still possible people might not fully understand the implications, so I encourage people to be thoughtful about this.

• There could be a little information summary next to the terms of use which is more accessible and explains the implications, e.g. as you have here.

• I am imagining a hoverable [i] info button, not putting it in the terms, as people often don’t bother to even open the terms because they know they’ll be long and legalistic.

• 2 Dec 2022 3:00 UTC · 2 points
Great post, and glad to see contrarian takes. (That’s true as a general matter, but I also happen to agree with this one :P) A couple of quick thoughts:

1. Loyalty is important not just as a personal virtue but for efforts at collective action, because it convinces people to engage in long-term altruistic thinking. Hahrie Han has done some important empirical work showing that people are motivated to help when they have a sense of a shared past and shared future. Sudden ruptures in social relationships are very destructive for fostering that culture.

2. Good decision-making generally does not involve dramatic changes to our beliefs. This is one of the less-known aspects of Tetlock’s research on superforecasters: they very rarely make large updates, instead making small updates based on continuous data; they don’t overcorrect when the mob moves. It seems likely to me that this is an example where we can try to put Tetlock’s research into practice. I don’t see the sort of dramatic evidence I would need to change my mind about SBF (though I also probably did not view him as highly as many others did; my prior was that Sam was a very smart guy who was well intentioned but going in the wrong direction in life).

3. Good decision-making requires avoiding the Fundamental Attribution Error.
I imagine most people on this forum are aware of the FAE. But I blogged about systemic forces that seem more important to me in the collapse of FTX than any of Sam’s personal misdeeds.

• 2 Dec 2022 2:59 UTC · 1 point
Heavily cosigned (as someone who has worked with some of Nick’s friends whom he got into EA, not as someone who’s done a particularly great job of this myself). I encourage readers of this post to think of EA-sympathetic and/or very talented friends of theirs and find a time to chat about how they could get involved!

• Thanks Spencer, really appreciated the variety of guests; this was a great podcast.

• This article makes one specific point I want to push back on: “It’s not that E.A. institutions were necessarily more irresponsible, or more neglectful, than others in their position would have been; the venture capitalists who worked with Bankman-Fried erred in the same direction. But that’s the point: E.A. leaders behaved more or less normally. Unfortunately, their self-image was one of exceptionalism.” Anyone who has ever interacted with EA knows that this is not true. EAs are constantly, even excessively, criticizing the movement and trying to figure out what big flaws could exist in it. It’s true, a bit strange, and a bad sign that these exercises did not for the most part highlight this flaw of reliance on donors who may use EA to justify unethical acts—unless you count the reforms, cited by the article, that Carla Zoe Cremer proposed and that were never adopted. Yes, some blame could be assigned for never adopting those reforms, but the sheer quantity of EA criticism and other potential fault points suggests, IMO, that it’s really hard to figure out a priori what will make a movement fail. EA is not perfect, nobody has ever claimed as much, and to some extent I think this article is disingenuous for implying this. “People focused on doing the most good” ≠ “moral saints who think they are above every human flaw”.
• Some reasons I disagree: I think internal criticism in EA is motivated by aiming for perfection, not by aiming to be as good as other movements/ideologies. Internal criticism with this motivation is entirely compatible with a self-image of exceptionalism. While I think many EAs view the movement as exceptional, and I agree with them, I think too many EAs assume individual EAs will be exceptional too, which is an unjustified expectation. In particular, I think EAs assume that individual EAs will be exceptionally good at being virtuous and following good social rules, which is a bad assumption. I think EA also relies too heavily on personal networks, and, especially given the adjacency to the rationalist community, EA is bad at mitigating the cognitive biases this can cause in grantmaking. I expect that people overestimate how good their friends are at being virtuous and following good social rules, and given that so many EAs are friends with each other at a personal level, this exacerbates the exceptionalism problem.

• I mean, of course effective altruism is striving for perfection (every movement should), but this is very different from thinking that EA has already achieved perfection. I think you listed a couple of things that I had read as EA self-criticism pre-FTX-collapse, suggesting that EAs were aware of some of the potential pitfalls of the movement. I just don’t think many people thought EA exceptional in the “will avoid common pitfalls of movements” sense implied by the article.

• I couldn’t attend the interpretability hackathon and was hoping to get acquainted with LLM interpretability research as a software dev with no experience in interpretability or transformers.
So here’s a starting point, following in the footsteps of this submission (see their writeup here): basically, I am thinking we can use the hackathon as a collaborative study session to become more familiar with transformers and interpretability, ultimately culminating in replicating the results in the linked submission (it took them 3 days, but since we have a starting point, we can possibly replicate their project and grok what they did much quicker). I’m not wedded to this idea, though. If you think there is a better avenue to using the hackathon to upskill in LLM interpretability and transformers, do share.

• Nice — this seems ambitious; I really like this idea. Maybe you can start a study group in GatherTown to continue this virtually as well. I’m sure you’d get takers from other folks interested in ML research.

• 2 Dec 2022 0:44 UTC · 2 points
“Some skills are dual purpose. Writing clearly helps both with truth seeking and being influential.” This isn’t obvious to me: how does being a good writer make you better at truth seeking?

• https://forum.effectivealtruism.org/termsOfUse
“You also irrevocably waive any ‘moral rights’ or other rights with respect to attribution of authorship or integrity of materials for Your Content. When we make public use of Your Content, we will, where practical, use good faith efforts to credit you as the original author of Your Content.”
Wait, your terms say you are allowed to delete my name from my posts and use my content without attribution. Why? And is this new or old? Where can I read a change history of the terms, or something like that? The good-faith clause seems basically nonbinding/meaningless legally, and also doesn’t apply whenever you decide it’s impractical or you’re doing something privately. Why do you want people to waive moral rights? I saw the same thing in Less Wrong’s terms today but did not find any explanation of the upside.
• Hi Elliot, your quote cuts off the “Subject to section 2.2” qualifier, which is the section that discusses the Creative Commons license. We’ve tried to give a simple summary of the license in this post, but I might suggest talking to a lawyer if you have questions about legal terms; for example, “good faith” is not meaningless legally; it is a well-defined term of art.

• You want me to talk to a lawyer to know how quotes and your new CC BY stuff work? (E.g., if I quote from my own article while link-posting it, possibly the entire thing, does that keep anything quoted out of CC BY?) You aren’t willing to clarify that yourselves, and just want individuals to go pay lawyers to find out how your forum works? That seems very unreasonable. Also, the terms read “Subject to Section 2.2, [a bunch of stuff]. You also irrevocably waive any ‘moral rights’”: that qualifier doesn’t appear to apply to the moral-rights waiver.

• “But now the ship of effective altruism is in difficult straits, and he [Will MacAskill], like Jonah, has been thrown overboard.” What a strange line—did I miss some event where people expelled Will from EA to make ourselves look better? It seems like this would make more sense if it referred to SBF (but context rules that interpretation out). (For reference, the book of Jonah is pretty short and you can read it here.)

• Don’t read too much into this piece of journalistic flair. It’s common practice to end pieces like this (in a paragraph or two known as a “kicker”) with something that sounds poignant on first glance to the reader, even if it would fall apart on closer scrutiny, as this does.

• Were I being charitable to Lewis-Kraus, I might say that he moves on to talk about the belly of the fish, so in the story EA is God rather than the folks on the boat; i.e., Will is currently in the fish, and that denotes uncertainty about the future.

• Note that the ship is already effective altruism, so in this reading the ship is also God (which is actually an interesting twist on the story).
• “At the end of that story, the sinners of that city donned sackcloth and ashes.” FWIW, my read of the text is that the king of Nineveh is the only one who is said to sit in ashes. This wasn’t that hard to check!

• Actually, sackcloth and ashes go together, so maybe we’re supposed to assume that the Ninevites did the ashes as well? I maybe retract this remark.

• It’s open to interpretation, but I don’t think “thrown overboard” is there to suggest much about the EA community, though I’m sure some wish there were a way to distance EA from someone who was so deeply entangled with SBF. Whatever the case, I think the reference primarily serves to set up the following: “While MacAskill lies in the belly of the big fish, the fate of effective altruism hangs in the balance. Jonah, accepting the burden of duty, eventually went to Nineveh and told the truth about transgression and punishments. At the end of that story, the sinners of that city donned sackcloth and ashes, and found themselves spared.” Will’s responses in the piece fall far short of “accepting the burden of duty.” First, on his propagating the myth of SBF’s frugality: when asked about the discrepancies in Bankman-Fried’s narrative, MacAskill responded, “The impression I gave of Sam in interviews was my honest impression: that he did drive a Corolla, he did have nine roommates, and—given his wealth—he did not live particularly extravagantly.” And this, in reference to the Slack message: “Let me be clear on this: if there was a fraud, I had no clue about it. With respect to specific Slack messages, I don’t recall seeing the warnings you described.” Perhaps Will doesn’t deserve much blame (that’s certainly a theme running through his comments so far). But if he isn’t able to tell the truth about what happened, or isn’t equipped to grapple with it, it’s bad news for the movements and organizations he’s associated with.

• I am purely quibbling with whether the Biblical allusion fits.

• It doesn’t.
• Like, for one, Nineveh is super alien to Jonah, and he hates the fact that they actually repent, which seems like a bad analogy for Will speaking truth to EAs in order to get us to do better. Also, Nineveh’s sins don’t seem to have much to do with Jonah’s (although Jonah certainly doesn’t seem to have a properly reverent attitude). So the paragraph just doesn’t make all that much sense.

• Edit (3/12/2022): On further reflection, I think these accusations are quite shocking, and likely represent (at best) significant incompetence. [Low-quality midnight thoughts] I’m not sure to what extent to update here; someone unnamed said something about SBF in some Slack thread. It would be good to have some transparency on this issue (perhaps from those who have access to said Slack workspace): how many people read it, how they reacted, and why they reacted that way. This is unlikely at the moment, though, because of ongoing legal proceedings.

• Does the EA Forum have a terms-of-use document, or something similar, which gives details of the new (and old) rules? I couldn’t find it with a quick search. EDIT: https://forum.effectivealtruism.org/termsOfUse. How do quotes interact with CC BY?

• https://www.effectivealtruism.org/terms-and-conditions says, “If you have an Account (defined below) with us, we will try to give you reasonable notice of major changes through your Account or the contact information associated with your Account.” So EA violated this term: they could have emailed or DMed me but didn’t, and also gave no notice when I commented. Also, the terms don’t specify that the CC BY provisions don’t apply before Dec 1, 2022. They should. Someone reading the terms might think that all the old content is CC BY and act accordingly. The terms should also include the relevant text from the older terms, so people know what terms still, today, govern all the older posts.
Deleting from the terms of use the terms that still actively govern older posts today doesn’t make sense.

• I strongly recommend Small and Vulnerable, linked in the post, but my motivation was much more mundane: I don’t spend much money, and my income will jump discontinuously after grad school, so it doesn’t hurt me to give.

• As a teenager, I came up with a set of four rules that I resolved ought to be guiding and unbreakable in going through life. They were, somewhat dizzyingly in hindsight, the product of a deeply sad personal event, an interest in Norse mythology, and Captain America: Civil War. Many years later, I can’t remember what Rules 3 and 4 were; the Rules were officially removed from my ethical code at age 21, and by that point I’d stopped being so ragingly deontological anyway. I recall the first two clearly.

Rule 1 - Do not give in to suffering.
Rule 2 - Ease the suffering of others where possible.

The first Rule was readily applicable to daily life. As for the second, it seemed noble and mightily important, but rarely worth enacting. In middle-class, rural England, with no family drama and generally contented friends, there wasn’t much suffering around me. Moving out to university, one of my flatmates was close friends with the man who set up the EA group there, and on learning more about it I was struck by the opportunity that GiveWell and 80k represented for fulfilling my Rules. This story does not account for my day-to-day motivation to uphold a Giving What We Can pledge or fumble through longtermist career planning. I’ve been persuaded by the flavour of consequentialism used here, think that improving the experience of sentient life is wonderful and, quite frankly, don’t have any other strong compulsions for career aims to offer competition. Generally buying in to the values and aims of this community is my day-to-day motivation.
Nevertheless, on taking a step back and thinking about my life and what I wish to do with it, I still feel about the abstract concept of suffering the way Bucky Barnes feels about Iron Man at the end of that film. The Rules don’t matter to me anymore, but their origin grants my EA values the emotional authority to set out a mission statement for what I should be doing.

• Editorial/Speculation/Personal comments: This article might be good and satisfying to many people because it gives a plausible sense of what happened in EA related to SBF, and what EA leaders might have known. The article goes beyond the “press releases” we have seen, does not come from an EA source, and is somewhat authoritative. Rob Wiblin appears quite a few times and is quoted. In my opinion, he is right, and most EAs and regular people would agree with him. New Yorker articles include details to suggest a sense of intimacy and understanding, but the associated narrative is not always substantive or true. This style is what Wiblin is reacting to. Lewis-Kraus makes some characterizations that don’t seem that insightful. As in his last piece, Lewis-Kraus maintains an odd sense of surprise that a large movement with billions of dollars, and a history of dealing with bad actors, has an “inner circle”. Lewis-Kraus has had great access to senior EAs and inside documents. After weeks of work, there is not that much he shows he has uncovered that isn’t available after a few conversations, or even just publicly on the EA Forum.

• My intuitions differ some here. I don’t know about Will MacAskill’s notion of moral pluralism, but my notion of moral pluralism involves assigning some weight to views even if they’re informed by less data or less reflection, and also upweighting outsider views somewhat simply because they’re likely to be different (similar to the idea of extremization in the context of forecast aggregation).
If a regular person thinks “our great virtue is being right” sounds like hubris, that’s evidence of actual hubris. You don’t just replace “our great virtue is being right” with “our focus is being right” because it sounds better. You make that replacement because the second statement harmonizes with a wider variety of moral and epistemic views. PR is more corrosive than reputation because “reputation” allows for the possibility of observers outside your group who can form justified opinions about your character and give you useful critical feedback on your thinking and behavior.

(One of the FTX Future Fund researchers piped up to make a countervailing point, referring, presumably, to donations that Thiel made to the campaigns of J. D. Vance and Blake Masters: “Might be a useful ally at some point given he is trying to buy a couple Senate seats.”)

There’s a sense in which reputational harm from a vignette like this is justified. People who read it can reasonably guess that the speaker has few instinctive misgivings about allying with a “semi-fascist” who’s buying political power and violating widely held “common sense” American morality. One would certainly hope that deontological considerations (beyond just PR) would come up at some point, were EA considering an alliance with Thiel. But it concerns me that Lewis-Kraus quotes so much “PR” discussion, and so little discussion of deontological safeguards. I don’t see anything here that reassures me ethical injunctions would actually come up. And instinctive misgivings actually matter, because it’s best to nip your own immoral behavior in the bud. You don’t want to be in a situation where each individual decision seems fine and you don’t realize how big their sum was until the end, as SBF put it (paraphrased) in this interview. That’s where Lewis-Kraus’ references to Schelling fences and “momentum” come in. The best time to get a “hey, this could be immoral” mental alert is as soon as you have the idea of doing the thing.
Maybe you do the thing despite the alert. I’m in favor of redirecting the trolley despite the “you will be responsible for the death of a person” alert. But an alert is generally a valuable opportunity to reflect. Finally, some meta notes:

• I doubt I’m the only person thinking along these lines. Matt Yglesias also seems concerned, for example. The paragraphs above are an attempt to steelman, in a way that senior EAs will understand, what seems to be a common reaction on e.g. Twitter.

• The above paragraphs largely reflect updates in my thinking about EA over the past years, and especially the past weeks. My thinking used to be a lot closer to yours and to that of the quoted Slack participants.

• I’ve noticed it’s easy for me to get into a mode of wondering if I’m a good person and trying to defend my past actions. Generally speaking, it has felt more useful to reflect on how I can improve. Growth mindset over fixed mindset, essentially.

• That said, I think it is a major credit to EAs that they work so hard to do good. Lack of interest in identifying and solving the world’s biggest problems strikes me as a major problem with common-sense morality. So I don’t think of the EAs in the Slack channel as bad people. I think of them as people working hard to do good, but the notion of “good” they were optimizing for was a bit off. (I used to be optimizing for that thing myself!)

• I’ve also noticed that when experiencing an identity threat (e.g. sunk cost fallacy), it’s useful for me to write a specific alternative plan, without committing to it during the process of writing it. This could look like: make a big list of things I could’ve done differently, then circle the ones I think I should’ve done, in hindsight, with the benefit of reflection. Or, if I’m feeling doubtful about current plans, avoid letting those doubts consume me and instead outline one or more specific alternative plans to consider.
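The “extremization” idea mentioned in the comment above can be sketched roughly as follows. This is a minimal illustrative sketch: the choice to average in log-odds space and the extremization factor of 1.5 are my assumptions, not anything specified in the thread.

```python
import math

def extremize(probabilities, factor=1.5):
    """Aggregate probability forecasts by averaging in log-odds space,
    then push the pooled estimate away from 0.5 by an extremization factor."""
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    extremized = factor * mean_log_odds  # amplify the pooled signal
    return 1 / (1 + math.exp(-extremized))  # map back to a probability

# Three forecasters all lean the same way; the aggregate leans further.
print(extremize([0.7, 0.8, 0.75]))
```

The intuition is that independent forecasters each see only part of the evidence, so when they agree, the pooled estimate should be more confident than any individual one; upweighting outsider views for being different is analogous.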
• I’m unsure what part of my comment you are replying to. I’m happy to own up to valuing “being right over optics/politics”. I’m OK with it if you became aggressive or even hostile through making good inferences about me. However, many of the things you said are confusing to me. I don’t know how the blog post on “PR”/“reputation” is relevant. Also, I agree with Matt Yglesias (I’m in touch with him!). Importantly, I think it would be good for you to be aware of how the writing in your comment might come across to some people. Your comment begins with “my intuitions differ some here…”, which implies I share the views you are opposing below it. This seems confirmed throughout, e.g. “My thinking used to be a lot closer to yours and the thinking of the quoted Slack participants”. If I tried to reply, I think I would be obligated to refute or deal with the associations you imply for me, which include “ally with a semi-fascist”, “violating American morality”, “little discussion of deontological safeguards”, and “nip [my] own immoral behavior in the bud”. I don’t think the following idea is in the article, much less my comment: “You don’t just replace ‘our great virtue is being right’ with ‘our focus is being right’ because it sounds better.” I don’t think you intended this, but I think some people would find it strange and somewhat offensive. I actually found your reply interesting and filled with content; I think you have interesting opinions to share.

• My comment was a sloppy attempt at simultaneously replying to (a) attitudes I’ve personally observed in EA, (b) the PR Slack channel as described in the article, and (c) your comment. I apologize if I misunderstood your comment or mischaracterized your position. My reply was meant as a vague gesture at how I would like EA leadership to change relative to what came through in the New Yorker article. I wouldn’t read too much into what I wrote.
It’s tricky to make a directional recommendation, because there’s always the possibility that the reader has already made the update you want them to make, and your directional recommendation causes them to over-update.

• “Lewis-Kraus has had great access to senior EAs and inside documents. After weeks of work, there is not that much he shows he has uncovered that isn’t available after a few conversations, or even just publicly on the EA Forum.” This is true, but it’s about as good as can be expected, since it’s an online New Yorker piece. Their online pieces are much closer to blog posts. The MacAskill profile that ran in the magazine was the result of months of reporting, writing, editing, and fact-checking, all with expenses for things like travel.

• 1 Dec 2022 22:57 UTC · 2 points
SoGive ran a grants programme earlier this year, and we plan to publish an update explaining how it went (it should be published in the next few days). We would be happy to:

• have a chat with funders who want to run their own grants process

• incorporate funds into our next grants round (which likely won’t happen until next summer, assuming it goes ahead)

• Do you have a sense of whether the case is any stronger for specifically using cortical and pallial neurons? That’s the approach Romain Espinosa takes in this paper, which is among the best work in economics on animal welfare.

• Nice post, I mostly agree. The study specifically asked people how they would evaluate a harmful act in light of a range of potentially extenuating circumstances, such as different moral beliefs, a mistake of fact, or self-defense. While there was significant variation in people’s moral judgments across cultures, there was nevertheless unanimous agreement that committing a harmful act based on different moral beliefs was not an extenuating circumstance.
Indeed, on average across cultures, committing a harmful act based on different moral beliefs was considered worse than committing the harmful act intentionally (see Barrett et al., 2016, fig. 5).

It’s worth noting that the specific “different moral belief” used in the study was that the “perpetrator holds the belief that striking a weak person to toughen him up is praiseworthy”, which seems quite different from, e.g., a utilitarianism/deontology divide. That view may just seem completely implausible to most people, and therefore not at all extenuating; other moral views may be more plausible, and so you’d be judged less harshly for acting according to them. I’m speculating here, of course.

• Thanks for highlighting that. :) I agree that this is relevant, and I probably should have included it in the post (I’ve now made an edit). It was part of the reason that I wrote “it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs”. But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. I guess subjects could also understand “intentional harm” in a variety of ways. In any case, I think it’s important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.

• Yes, this strikes me as an important point. It’s a bit like how ideologically motivated hate crimes are (I think correctly) regarded as worse than comparable “intentional” (but non-ideologically-motivated) violence, perhaps in part because they raise the risk of systematic harms. Many moral differences are innocuous, but some really aren’t. For an extreme example: the “true believer” Nazi is in some ways worse than the cowardly citizen who goes along with the regime out of fear and self-interest.
But that’s very different from everyday “value disagreements”, which tend to involve values that we recognize as (at least to some extent) worthy of respect, even if we judge them ultimately mistaken.

• I edited this in. Thank you!

• Thanks for posting this Will!

• I’d be interested in hearing from downvoters here; this seems to me like a fairly anodyne, beneficial post, so while I wasn’t expecting a bunch of strong upvotes, I also wasn’t expecting downvotes. I’m curious what the disagreement is.

• https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics
What happens if in the future we discover that all life on Earth (especially plants) is sentient, but at the same time (a) there are a lot more humans on the planet waiting to be fed and (b) synthetic food/proteins are deemed dangerous to human health? Do we go back to eating plants and animals again? Do we farm them? Do we continue pursuing technologies for food given the past failures?

• [deleted]

• It depends why you have those sympathies. If you think they just formed because you find them aesthetically pleasing, then sure. If you think there’s some underlying logic to them (which I do, and I would venture a decent fraction of utilitarians do), then why wouldn’t you expect intelligent aliens to uncover the same logic?

• [deleted]

• This seems like a strange viewpoint. If value is something about which one can make ‘truthy’ and ‘falsey’ claims (or something that we converge to given enough time and intelligent thought, if you prefer), then it’s akin to maths, and aliens would be a priori as likely to ‘discover’ it as we are. If it’s arbitrary, then longtermism has no philosophical justification beyond the contingent caprices of people who like to imagine a universe filled with life. Also, if it’s arbitrary, then over billions of years even human-only descendants would be very unlikely to stick with anything resembling our current values.
• I think the microhumor section of SSC’s nonfiction writing advice is a good example of this. Scott Alexander is very easy for me to read despite covering pretty complex topics, and he does a very good job of making his writing both easy and enjoyable to read. I’ve started peppering things like this into my communications with non-technical people at my job, and people really enjoy it. https://slatestarcodex.com/2016/02/20/writing-advice/

• Love this example, thanks so much for sharing it. I know I mention John Oliver above, but realistically his style is almost certainly too extreme to be replicated in most cases by most people. I agree that Alexander’s microhumor is a perfect example of subtle humor that could potentially be employed in almost any context.

• Great podcast, with not just one but five highly informative experts covering different aspects of the crisis. The intro and timeline were a very clear overview. Ozzie Gooen was especially helpful on the EA background and implications. Highly recommended for all EAs.

• Thanks for the heads up! I suggest adding the following to the forum sidebar:

 • A link to the TOU

 • A statement like “Content published after December 1, 2022 is available under a CC BY 4.0 license” (unless there is already a license banner on each page).

• Thanks Eevee! The license information is included in “How to use the Forum”, which is linked in the sidebar, but yes, possibly we should consider a more prominent link.

• I appreciate the suggestions! I agree we should make this info easier to find; added these to our list for triage.

• The public debt-to-GDP ratio in Japan is over 260% and they still haven’t defaulted (it somewhat boggles my mind that they can sustain such high debt levels, even though there seem to be reasonable explanations for it).
There are ostensibly some differences between the Japanese and American contexts, but nonetheless it seems possible for developed countries to sustain high levels of public debt for a considerable amount of time. How long is still an open question. I’d expect to see the Japanese situation unravel before the American one, and if there is ever such an unraveling, that might give an indication of how sustainable extremely high levels of debt are.

• As the government debt reaches maturity, it needs to roll over and be repriced at current interest rates of 4% (but let’s take an average of 3%). Think of this as the fixed-term interest rate on your mortgage running out, so you need to renegotiate a new rate with the bank. If we reprice the current $31.3T of debt at 3% interest rates, the interest expense would increase by ~$500 billion per year to almost $1 trillion per year. That would overtake military spending as the biggest single line item of spending.

Not all the debt matures at the same time, does it? If not, then maybe only the portion that matures gets repriced?

• Correct, that’s why it says

it needs to roll over and be repriced at current interest rates of 4% (but let’s take an average of 3%)
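For concreteness, the repricing arithmetic in that comment can be sketched as follows. The debt figure and the 3% blended rate come from the comment itself; the pre-rollover average rate is my assumption, inferred from the quoted ~$500 billion increase, not an official figure:

```python
# Back-of-the-envelope sketch of the debt-repricing arithmetic above.
# Inputs are the commenter's approximations, not official Treasury data;
# OLD_RATE is a hypothetical pre-rollover average chosen to match the
# quoted ~$500B/year increase.
DEBT = 31.3e12      # total US public debt, ~$31.3 trillion
OLD_RATE = 0.015    # assumed average rate before rollover (hypothetical)
NEW_RATE = 0.03     # blended rate after repricing, per the comment

old_interest = DEBT * OLD_RATE            # ~$470B/year
new_interest = DEBT * NEW_RATE            # ~$939B/year ("almost $1 trillion")
increase = new_interest - old_interest    # ~$470B/year ("~$500 billion")

print(f"New annual interest: ${new_interest / 1e9:.0f}B")
print(f"Increase vs old rate: ${increase / 1e9:.0f}B")
```

As the exchange above notes, only the portion of the debt that matures in a given year actually reprices, so any such increase would phase in gradually as the stock rolls over.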

• Add Agreement Karma to posts.

This comment suggesting the feature got 32 agreement karma with 9 votes:

• Perhaps it’s not clear whether adding agreement karma to posts is positive on net, but I think it would be worth adding for a month as an experiment.

A counter-consideration is that many voters on the Forum may still not understand the difference between overall karma and agreement karma. Inconclusive weak evidence: this answer got 3 overall karma with 22 votes (at some point it was negative) and 18 agreement karma with 20 votes:

(It’s inconclusive evidence because, while the regular karma downvotes surprised me, people could have had legitimate reasons for not liking the meta-answer and downvoting it. My suspicion, though, is that at least some people downvoted it in an attempt to “Disagree”-vote in the poll.)

• Cool work! Props for letting people use their own discount rate; your first footnote makes a good point.

I think that for transparency and ease of reader understanding, you ought to link 80K’s article on grantmaking for the most pressing problems. Similarly, linking info on Squiggle would be good.

Also, it’s best to clarify that this review only covers “working at a government agency that funds relevant research”, which is just one of the three highly effective grantmaking-related careers 80K mentions (at the bottom); the other two would need a different analysis.

I also think that if the intent is to advise on careers, you need to do some analysis of ARPA-E’s team. Variables that come to mind: team size, how many of each role, how senior each person seems (for thoughts on how soon a person could get hired into such a role and, for junior employees, where they could go from there career-capital-wise), and a rough guesstimate of how much each role contributes to ARPA-E’s overall annual grantmaking decisions.

Also, a minor wording note. You say:
“We chose ARPA-E because other ARPA agencies are explicitly called out by 80,000 Hours’ profile of grantmaking (i.e., DARPA, IARPA)”

“Called out” has negative connotations, so I’d probably say “mentioned”, “referred to”, or “brought up” instead. That terminology confused me: I thought you were saying you only chose ARPA-E because the others had been essentially ruled out. I was a bit aghast, thinking you’d chosen an example in a category where the others had already been shown to be moot, and that’s why I dug up and read the original 80K piece >.>

Phew, sorry that was so much seemingly critical feedback; to clarify, I (not a researcher or data scientist) think what you did is good, and I’m happy you reviewed this career path, which I think tends to be unfortunately skipped in many career discussions. I strong-upvoted the post.

• On AI quietism. Distinguish four things:

1. Not believing in AGI takeover.

2. Not believing that AGI takeover is near. (Ng)

3. Believing in AGI takeover, but thinking it’ll be fine for humans. (Schmidhuber)

4. Believing that AGI will extinguish humanity, but that this is fine:

 a. because the new thing is superior (maybe by definition, if it outcompetes us);

 b. because scientific discovery is the main thing.

(4) is not a rational lack of concern about an uncertain or far-off risk: it’s a lack of caring, conditional on the risk being real.

Can there really be anyone in category (4)?

• Sutton: we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared… ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.

• Hinton: “the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”

I expect this cope to become more common over the next few years.

• (4) was definitely the story with Ben Goertzel and his “Cosmism”. I expect some “a/acc” libertarian types will also go for it. But it is and will stay pretty fringe, imo.

• 1 Dec 2022 18:05 UTC
5 points

“2. Secondly, we assume that all benefits—including health benefits, reduction of monetary costs associated with climate change, reduction of existential risk associated with climate change, and spillover benefits into other industries—are incorporated into the market valuation. We believe that markets likely price in the aforementioned externalities (e.g., health and enviro benefits) (support for this claim here). Furthermore, the valuations listed on ARPA-E’s site are from after the Inflation Reduction Act passed, which, itself, internalized a significant chunk of US emissions. For these reasons, we believe it’s reasonable to assume a significant amount of external benefits have been internalized into markets, but, perhaps, not all benefits. Thus, we believe that, all else equal, this assumption leads to an underestimation of benefits.”

Thanks for this article! The above assumption feels quite wrong to me, and as such, I expect it makes your estimate a vast underestimate, everything else being equal.

You seem to assume that climate risk and other externalities are priced into market valuations. Even if that were true for the US (which seems unlikely), it would certainly not be true for most places in the world, which have very little in terms of pricing energy-related externalities. Given the role of the US in the global energy innovation system, it seems reasonable to assume that the benefits ARPA-E creates are far larger than market valuations, at least if market valuations are—as you suggest—reflective of expected policy returns.

• [deleted]
• Going to “New Post” now, it looks like you have to explicitly consent:

EDIT: This seems to have disappeared for me, and generally be inconsistent/​buggy right now (see also Elliot’s comment below). I think fixing this is pretty important.