
# History

Last edit: 26 Jul 2022 12:53 UTC

The history tag is for posts that are strongly focused on historical events or trends (rather than just mentioning these things briefly), or that discuss or make heavy use of historical research methods.

## Further reading

Aird, Michael (2021) Collection of EA-associated historical case study research, Effective Altruism Forum, June 4.

# Some history topics it might be very valuable to investigate

8 Jul 2020 2:40 UTC
88 points
34 comments · 6 min read

# Social Movement Lessons from the US Prisoners’ Rights Movement

22 Jul 2020 12:10 UTC
32 points
3 comments · 128 min read
(www.sentienceinstitute.org)

# Tom Moynihan on why prior generations missed some of the biggest priorities of all

29 Jul 2021 16:38 UTC
20 points
0 comments · 156 min read

# Long-Term Influence and Movement Growth: Two Historical Case Studies

13 Dec 2018 19:03 UTC
58 points
5 comments

# How tractable is changing the course of history?

22 May 2019 15:29 UTC
41 points
2 comments · 7 min read
(www.sentienceinstitute.org)

# Lessons from the history of animal rights

17 May 2016 19:32 UTC
36 points
10 comments

# Discontinuous progress in history: an update

17 Apr 2020 16:28 UTC
68 points
3 comments · 24 min read

# Does Economic History Point Toward a Singularity?

2 Sep 2020 12:48 UTC
135 points
58 comments · 3 min read

# What Helped the Voiceless? Historical Case Studies

11 Oct 2020 3:38 UTC
127 points
16 comments · 92 min read

# How fragile was history?

2 Feb 2018 6:23 UTC
20 points
9 comments

# Key Lessons From Social Movement History

30 Jun 2021 17:05 UTC
116 points
20 comments · 11 min read
(www.sentienceinstitute.org)

# Why Undergrads Should Take History Classes

7 Nov 2021 14:25 UTC
43 points
15 comments · 3 min read

# Potatoes: A Critical Review

10 May 2022 15:27 UTC
116 points
27 comments · 9 min read
(docs.google.com)

# [Question] Are there historical examples of excess panic during pandemics killing a lot of people?

27 May 2020 17:00 UTC
28 points
15 comments · 1 min read

# [Short Version] What Helped the Voiceless? Historical Case Studies

15 Dec 2020 3:40 UTC
52 points
2 comments · 10 min read

# Persistence—A critical review [ABRIDGED]

10 Nov 2021 11:30 UTC
90 points
6 comments · 2 min read

# The end of the Bronze Age as an example of a sudden collapse of civilization

28 Oct 2020 12:55 UTC
45 points
7 comments · 7 min read

# [Question] What would an entity with GiveWell’s decision-making process have recommended in the past?

25 Jun 2021 6:12 UTC
28 points
11 comments · 1 min read

# Summary of history (empowerment and well-being lens)

28 Sep 2021 17:48 UTC
62 points
11 comments · 11 min read

# New History Exploration Club for EAs!

27 May 2022 15:00 UTC
25 points
0 comments · 2 min read

# [Question] Most harmful people in history?

11 Sep 2022 3:04 UTC
15 points
9 comments · 1 min read

# Briefly, the life of Tetsu Nakamura (1946-2019)

3 Oct 2022 15:44 UTC
131 points
4 comments · 2 min read

# Why do social movements fail: Two concrete examples.

4 Oct 2019 19:56 UTC
102 points
16 comments · 8 min read

# Which properties does the EA movement share with deep-time organisations?

26 Aug 2020 10:59 UTC
46 points
6 comments · 8 min read

# A (Very) Short History of the Collapse of Civilizations, and Why it Matters

30 Aug 2020 7:49 UTC
51 points
16 comments · 3 min read

# Some promising career ideas beyond 80,000 Hours’ priority paths

26 Jun 2020 10:34 UTC
140 points
28 comments · 15 min read

# Attempt at understanding the role of moral philosophy in moral progress

28 Oct 2019 16:32 UTC
32 points
8 comments · 5 min read

# The Germy Paradox—Filters: Hard and soft skills

3 Oct 2019 1:28 UTC
20 points
0 comments · 1 min read
(eukaryotewritesblog.com)

# Modelling the odds of recovery from civilizational collapse

17 Sep 2020 11:58 UTC
39 points
8 comments · 2 min read

# [Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration

19 Dec 2019 16:36 UTC
31 points
1 comment · 14 min read

# Notes on ‘Atomic Obsession’ (2009)

26 Oct 2019 0:30 UTC
62 points
16 comments · 8 min read

# Rose Hadshar: From the Neolithic Revolution to the far future

26 Oct 2018 8:35 UTC
13 points
0 comments · 1 min read
(www.youtube.com)

# Is Democracy a Fad?

13 Mar 2021 12:40 UTC
144 points
36 comments · 18 min read

# Notes on Henrich’s “The WEIRDest People in the World” (2020)

25 Mar 2021 5:04 UTC
38 points
4 comments · 3 min read

# How big a deal was the Industrial Revolution?

16 Sep 2017 7:00 UTC
31 points
0 comments · 3 min read
(lukemuehlhauser.com)

# [Podcast] Thomas Moynihan on the History of Existential Risk

22 Mar 2021 11:07 UTC
26 points
2 comments · 1 min read
(hearthisidea.com)

# Social Movement Lessons from the Fair Trade Movement

2 Apr 2021 10:51 UTC
39 points
0 comments · 41 min read
(www.sentienceinstitute.org)

# Case studies of self-governance to reduce technology risk

6 Apr 2021 8:49 UTC
50 points
6 comments · 7 min read

# Welfare stories: How history should be written, with an example (early history of Guam)

2 Jan 2020 23:32 UTC
46 points
3 comments · 59 min read

# On the longtermist case for working on farmed animals [Uncertainties & research ideas]

11 Apr 2021 6:49 UTC
78 points
12 comments · 8 min read

# [Question] How many times would nuclear weapons have been used if every state had them since 1950?

4 May 2021 15:34 UTC
16 points
13 comments · 1 min read

# The world is much better. The world is awful. The world can be much better.

22 Aug 2022 14:33 UTC
61 points
2 comments · 5 min read

# What the EA community can learn from the rise of the neoliberals

6 Dec 2016 18:21 UTC
43 points
9 comments

# Some AI Governance Research Ideas

3 Jun 2021 10:51 UTC
90 points
5 comments · 2 min read

# Humanities Research Ideas for Longtermists

9 Jun 2021 4:39 UTC
146 points
13 comments · 15 min read

# [Podcast] Tom Moynihan on why prior generations missed some of the biggest priorities of all

25 Jun 2021 15:39 UTC
12 points
0 comments · 1 min read
(80000hours.org)

# Has Life Gotten Better?

5 Oct 2021 8:31 UTC
77 points
23 comments · 7 min read

# GovAI Annual Report 2021

5 Jan 2022 16:57 UTC
51 points
2 comments · 9 min read

# [Question] Examples of pure altruism towards future generations?

26 Jan 2022 16:42 UTC
16 points
14 comments · 1 min read

# A Primer on God, Liberalism and the End of History

28 Mar 2022 5:26 UTC
7 points
3 comments · 14 min read

# [Question] What pieces of ~historical research have been action-guiding for you, or for EA?

29 Mar 2022 10:31 UTC
31 points
7 comments · 1 min read

# H-Day, a case study in coordinated change

16 Apr 2022 7:51 UTC
14 points
0 comments · 1 min read
(en.wikipedia.org)

# The Fabian society was weirdly similar to the EA movement

26 Apr 2022 8:23 UTC
67 points
4 comments · 1 min read

# The Mystery of the Cuban missile crisis

5 May 2022 22:51 UTC
10 points
4 comments · 9 min read

# Request for proposals: Help Open Philanthropy quantify biological risk

12 May 2022 21:28 UTC
127 points
6 comments · 7 min read

# Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

19 May 2022 8:42 UTC
387 points
36 comments · 17 min read

# [Question] Projects for EA historians

7 Jun 2022 14:48 UTC
10 points
4 comments · 1 min read

# Bruce Kent (1929–2022)

10 Jun 2022 14:03 UTC
47 points
3 comments · 2 min read

# Will MacAskill: The Beginning of History

13 Aug 2022 22:45 UTC
36 points
0 comments · 1 min read
(www.foreignaffairs.com)

# The History of AI Rights Research

27 Aug 2022 8:14 UTC
41 points
1 comment · 14 min read
(www.sentienceinstitute.org)

# Histories of Value Lock-in and Ideology Critique

2 Sep 2022 10:09 UTC
9 points
1 comment · 31 min read

# EA is too focused on the Manhattan Project

5 Sep 2022 2:00 UTC
27 points
0 comments · 1 min read

# [Question] Who are some less-known people like Petrov?

6 Sep 2022 13:22 UTC
51 points
19 comments · 1 min read

# The Pugwash Conferences and the Anti-Ballistic Missile Treaty as a case study of Track II diplomacy

16 Sep 2022 10:42 UTC
80 points
4 comments · 25 min read

# [Question] Why doesn’t WWOTF mention the Bronze Age Collapse?

19 Sep 2022 6:29 UTC
16 points
4 comments · 1 min read

# 9/26 is Petrov Day

25 Sep 2022 23:14 UTC
62 points
10 comments · 2 min read
(www.lesswrong.com)

# The sense of a start

28 Sep 2022 13:37 UTC
52 points
0 comments · 5 min read
(www.gleech.org)

# [Question] Examples of self-governance to reduce technology risk?

25 Sep 2020 13:26 UTC
32 points
1 comment · 1 min read

# Wrong by Induction

6 Sep 2018 22:00 UTC
20 points
2 comments

# Religious Texts and EA: What Can We Learn and What Can We Inform?

30 Jan 2021 12:04 UTC
29 points
11 comments · 5 min read

# The Germy Paradox – The empty sky: How close did we get to BW usage?

27 Sep 2019 3:56 UTC
22 points
5 comments · 6 min read
(eukaryotewritesblog.com)

# The Germy Paradox – The empty sky: A history of state biological weapons programs

24 Sep 2019 5:26 UTC
24 points
0 comments · 1 min read
(eukaryotewritesblog.com)

# [Question] Which scientific discovery was most ahead of its time?

16 May 2019 12:28 UTC
34 points
17 comments · 1 min read

# Cortés, Pizarro, and Afonso as Precedents for Takeover

2 Mar 2020 12:25 UTC
27 points
17 comments · 11 min read
(aiimpacts.org)

# How democracy ends: a review and reevaluation

24 Nov 2018 17:41 UTC
26 points
2 comments
(thinkingcomplete.blogspot.com)

# [Question] Best time-travel intervention?

27 Oct 2020 14:24 UTC
22 points
3 comments · 1 min read

# [Question] What are some historical examples of people and organizations who’ve influenced people to do more good?

11 Apr 2020 21:44 UTC
8 points
9 comments · 1 min read

# Poverty in Depression-era England: Excerpts from Orwell’s “Wigan Pier”

12 Feb 2020 1:01 UTC
16 points
2 comments · 3 min read

# The NPT: Learning from a Longtermist Success [Links!]

20 May 2021 0:39 UTC
66 points
6 comments · 2 min read

# [Question] What are some moral catastrophes events in history?

22 Jun 2021 6:37 UTC
29 points
11 comments · 1 min read

# Book review: The Doomsday Machine

18 Aug 2021 22:15 UTC
21 points
0 comments · 16 min read
(strataoftheworld.blogspot.com)

# Nuclear Espionage and AI Governance

4 Oct 2021 18:21 UTC
32 points
3 comments · 24 min read

# Book Review: Churchill and Orwell

10 Oct 2021 11:06 UTC
28 points
0 comments · 11 min read

# Past and Future Trajectory Changes

28 Mar 2022 20:04 UTC
32 points
5 comments · 12 min read
(goodoptics.wordpress.com)

# The Cult Deficit: Analysis and Speculation (v2.0)

14 Jun 2022 4:14 UTC
3 points
0 comments · 14 min read
(rogersbacon.substack.com)

# Dialectic of Enlightenment

15 Jun 2022 4:58 UTC
5 points
1 comment · 2 min read
(monoskop.org)

# My notes on: A Very Rational End of the World | Thomas Moynihan

20 Jun 2022 8:50 UTC
12 points
1 comment · 5 min read

# History of the theory of well-being

22 Jun 2022 8:17 UTC
21 points
8 comments · 23 min read

# How to Become a World Historical Figure (Péladan’s Dream)

7 Jul 2022 22:41 UTC
−5 points
2 comments · 30 min read

# How moral progress happens: the decline of footbinding as a case study

26 Jul 2022 9:18 UTC
91 points
10 comments · 17 min read

# The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)

10 Aug 2022 11:00 UTC
51 points
3 comments · 9 min read
(verfassungsblog.de)

# [Question] Does anyone have a list of historical examples of societies seeking to make things better for their descendants?

15 Aug 2022 13:13 UTC
6 points
6 comments · 1 min read

# [Question] Anyone has a reference for “It’s estimated that abolition cost Britain 2% of its GDP for 50 years”?

16 Aug 2022 19:26 UTC
11 points
0 comments · 1 min read

# Andrew Carnegie and Public Libraries: A Model of Effective Altruism

21 Aug 2022 4:24 UTC
17 points
2 comments · 3 min read

# Who ordered alignment’s apple?

28 Aug 2022 14:24 UTC
5 points
0 comments · 3 min read

# [Question] Was central place food sharing the original effective altruism?

27 Aug 2022 8:27 UTC
10 points
1 comment · 1 min read

# Popular education in Sweden: much more than you wanted to know

11 Sep 2022 22:04 UTC
9 points
0 comments · 10 min read

# Improving “Improving Institutional Decision-Making”: A brief history of IIDM

12 Sep 2022 17:45 UTC
65 points
3 comments · 11 min read

# [Question] Books on social movements?

17 Sep 2022 6:27 UTC
8 points
5 comments · 1 min read

# “A Creepy Feeling”: Nixon’s Decision to Disavow Biological Weapons

30 Sep 2022 15:17 UTC
39 points
3 comments · 17 min read
• [I’m a content organizer but I’m recusing myself for this because I personally know Andrew.]

Thanks for writing! A few minor points (may leave more substantive points later).

In 2014, one survey asked the 100 most cited living AI scientists by what year they saw a 10%, 50%, and 90% chance that HLMI would exist

There is updated research on this here (survey conducted 2019) and here (2022; though it’s not a paper yet, so might not be palatable for some people).

Only 17% of respondents said they were never at least 90% confident HLMI would exist.

I think this is a typo.

Considering all of these scenarios together, 80,000 Hours’ team of AI experts estimates that “the risk of a severe, even existential catastrophe caused by machine intelligence within the next 100 years is something like 10%.”

I don’t think I would cite 80,000 Hours, as that particular article is older. There is a more recent one, but it still seems better for ethos to cite something that looks like a paper. You could possibly cite Carlsmith or the survey above, which I think says the median researcher assigns a 5% chance to an extinction-level catastrophe.

• NOTES FOR A YOUTUBE VIDEO ON EA

Many people realize that our WORLD is on a path that may lead to extinction. Many people thus do “THEIR PART” in making the change necessary to help. They become vegetarian to decrease greenhouse gases, buy an electric car, recycle, start composting, etc. All these minute changes, they say, if added up, will make the difference. It’s possible; I mean, there’s a non-zero chance that these actions will make that change.

But if you think about time spent, versus change made, you have a linear amount of change over time that amounts to a tiny, tiny drop in the bucket.

What you and others are now finding is the knowledge to create greater change than a drop: to make the same time spent far more effective.

• This sounds mostly right, and it’s concordant with research in evolutionary psychology over the last 35 years.

Previous applications of evolutionary theory to behavior (before the 1980s) did often model animals and humans as if they were trying to maximize inclusive genetic fitness (IGF) -- but this was usually considered a heuristic over-simplification of how animals actually make decisions (in foraging, mating, predator avoidance, parental investment, etc). This IGF-maximizing model was often useful in behavioral ecology, evolutionary game theory, optimal foraging theory, and sociobiology—but nobody took it very seriously as a functional description of how animals make decisions, and only the most naive researchers assumed that animal brains include little utility functions that directly represent IGF.

The key innovation in evolutionary psychology, in the late 1980s, as pioneered by Leda Cosmides and John Tooby, was to explicitly reject IGF-maximization as a description of animal/human decision-making, and to replace it with the notion of domain-specific psychological adaptations that evolve to handle particular aspects of fitness. In other words, we’re adaptation-executors, not fitness-maximizers. The psychological adaptations typically track ‘fitness affordances’ (e.g. food, mates, territories, offspring, kin) that are statistically associated with fitness, rather than trying to track fitness itself—much as AIs might show instrumental convergence onto generally useful instrumental goals.

This ev psych perspective makes it much easier to explain cases of ‘evolutionary mismatch’, where the modern environment doesn’t match the ancestral environment, so our adaptations might not work very well any more.

• Buddhists in most countries are seen as people who simply meditate. The idea is to relinquish attachment to earthly desires in order to DO MORE. Not to meditate all day to do nothing. In an effort of learning all teachings, we will re-evaluate the core values, and give a new perspective based on the teachings. The core re-evaluation leads us to the idea that Nirvana is something to achieve in this realm. And that leaving this realm is a negative desire in and of itself.

Let’s break them down.

1. To save all people: this is by far the biggest vow that modern Buddhist society fails to recognize. Outreach is a non-factor. So many people who need this valuable information are left in the dark, with no outreach from Buddhists.

2. Many people have regarded this initial vow as meaning to renounce. But here’s what the true text says: “Desires for tangible things (such as wealth, property, or other material goods) or for pleasures of the body (such as sexual activity, gluttony, or other hedonistic pursuits). Buddhism teaches us to try to let go of our worldly desires, freeing our minds and bodies for a state of enlightenment.” As it says, freeing our minds and bodies for a state of enlightenment.

3. Learning all teachings requires knowing all things; we do not even know how consciousness works yet. To learn all teachings, we need more time.

4. “In Buddhism, enlightenment (called bodhi in Indian Buddhism, or satori in Zen Buddhism) is when a Buddhist finds the truth about life and stops being reborn because they have reached Nirvana.” In order to stop the cycle of rebirth, we must stop the death to life cycle. With infinite time our particles are always brought back to a conscious form. Until we find a conscious form to reach Nirvana in, we will repeat the cycle. “Rebirth in Buddhism refers to its teaching that the actions of a person lead to a new existence after death, in an endless cycle called saṃsāra.[1][2] This cycle is considered to be dukkha, unsatisfactory and painful. The cycle stops only if liberation is achieved by insight and the extinguishing of craving.”

• 7 Oct 2022 21:42 UTC
1 point

Good luck—hope you find someone :)

• I have written several posts that I didn’t submit for a combination of reasons. Reason #3 below is deeply troubling and is the thing that I most want to see changed about EA in general. I wrote 1500 words yesterday about it, and decided not to post it because of reason #2 and reason #3 itself. Since this post asks for feedback, it seems like an OK place to copy-paste it, er, write a completely new essay on the same topic.

### 1. Perfectionism

This is the most common and at the same time the least tractable. It’s not that I feel the drafts fall short of the EA Forum’s standards; rather, they don’t meet my own standards.

### 2. Negativity

Like Scott A. wrote about on ACX, EA self-critiques are almost a fetish. I deeply appreciate our culture of being open to criticism, and I wouldn’t want to change that. However, my subconscious is always looking for more reasons to be disappointed with myself, and I don’t want to add to the negativity for others, especially when I’m piling onto things that others have said already.

### 3. Ongoing Exclusivity of EA

Like me, many people feel like EA outsiders—even those of us who identify deeply and passionately with EA. (To me, the forum itself is not exclusive; everyone on the forum is very kind, welcoming, and encouraging.)

If we can’t absorb more people due to lack of funding, that’d be one thing. However, since at least 2015, 80K (and others) have talked about a “funding overhang” and a “talent gap,” and no matter how many asterisks or replacement terms we come up with, the implication is still the same. I’ll let this 2019 writer describe what he hears EA saying:

“We are so talent constraint… (20 applications later) … Yeah, when we said that we need people, we meant capable people. Not you. You suck.”

Why exactly does that prevent me from posting more on the forum? First, this is the problem I want to write about, and I can only rephrase things that others have written eloquently about for years. Second, because the inferential distance feels insurmountable even when we talk about solutions.

For example, when I read the title, “EA needs consultancies,” I think, “that’s exactly what I was thinking!” only to read the post and find that Luke’s talking about “McKinsey-style consultancies.” So, Ivy League folks? To be fair to Luke, he didn’t say that “analytically strong people” excludes the average engineer or software developer;[1] I’m getting that impression elsewhere. Like I said in a comment on Constance’s post about EAG rejection, if a literal doctor doesn’t make the cut, what hope do the rest of us have?

### 4. Not Ambitious Enough

Maybe the real reason I don’t post is that I need to “be more ambitious.” Sorry, this is a cheap jab, but that narrative really kills me. Is there any other group on the planet with loftier goals than us? Is there a community that’s trying to save the extra-super-duper long-term future?[2] I mean, I beat myself up for not being as rich as SBF, but if there’s a reason I should feel worse about myself, please, do tell!

Some people interpret EA as meaning high impact. To me, EA is for anyone who wants to have higher impact at the margin—no matter who you are.

Again, I’m SO SORRY about the negative tone of this whole comment, but I worry that the level of irony from not posting it could initiate vacuum collapse. Hopefully you all know that I still love you, even though I’m a bit cranky about my unemployment these days.

1. ↩︎

which is already too high a bar IMO

2. ↩︎

Besides, if there was, most of us would be over there instead of here.

• I recently applied for funding and found it helpful to look at my month-to-month spending over the last few months. I guessed at a rough mean of my monthly spending over 6 months, but I might have been better off picking the median. I also forgot to account for tax!
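The mean-versus-median point above can be shown with a minimal sketch (the spending figures are made up for illustration): a single month with a one-off expense pulls the mean well above a typical month, while the median stays close to routine spending.

```python
import statistics

# Hypothetical monthly spending figures; the 4200 month contains a one-off cost
monthly_spending = [1800, 1750, 1900, 4200, 1850, 1700]

mean = statistics.mean(monthly_spending)      # 2200.0 — inflated by the outlier month
median = statistics.median(monthly_spending)  # 1825.0 — closer to a typical month

print(f"mean:   {mean:.2f}")
print(f"median: {median:.2f}")
```

For a funding request based on routine living costs, the median is the more robust summary when one or two months contain unusual expenses.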

• Julia—excellent advice.

Also budget for a car and car insurance if you’re planning to live in the US.

The US generally has very bad public transportation, and a very car-centered culture, so you’ll need to buy a used car. Only a few US cities (e.g. New York, San Francisco) are feasible to live in without a car, and most are way too dangerous for a daily bicycle commute.

You can get a decent used car for something around USD $5k to $10k, but bear in mind used car prices are quite high at the moment—about 50% higher than just 2 years ago.

In most areas of the US, I’d recommend something substantially bigger and heavier than UK or European people are used to driving, given the safety concerns. We have a lot of bad drivers, and if you’re into longtermism and longevity, you’ll want to minimize risk of death/disability from car crashes. My heuristic would be: get a vehicle with at least 4,000 pounds mass, 6 airbags, and some active safety features. Bear in mind that gas is much cheaper in the US than in many other countries, so the extra mass doesn’t matter very much in terms of running costs.

• Thank you! This gave me an unexpectedly detailed peek into the Diversity Visa Program, and I hadn’t found good information elsewhere.

• Collective actions: persuasion, policy, and energy systems. I am omitting some more specialized opportunities, such as regarding cement and refrigerants, because they’re arguably less relevant for a general audience and because I just don’t know much about them, my apologies—but see the links if you’re interested.

Why is cement less relevant for a general audience than policy on how the energy supply works? From an EA perspective, cement production creates significant emissions and gets little attention from a general audience. That suggests neglectedness, and thus that it’s more important to talk about than solar power.

• Thanks for offering this!

• The catastrophic harms are those studied by people interested in climate change as a GCR/x-risk. The research is limited (1, 2, 3, 4, 5, 6, 7), but for now, the basic picture seems to be of some chance of climate change contributing to global/existential catastrophe. The size of the effect is a point of debate. My sense is that it’s being underestimated, but it’s difficult to pin down.

That paragraph tells me nothing about what order of magnitude of chance we are speaking about. If you want to draw any conclusions then it’s important to talk about the likelihood or at least the ballpark of it.

Being too vague to be wrong is bad.

• People interested in AI risk and this post might be interested in applying to the researcher or software engineer roles at the Alignment Research Center, a non-profit organization focused on theoretical research to align future machine learning systems with human interests.

This is a test by the EA Forum Team to gauge interest in job ads relevant to posts - give us feedback here.

• Some good humor was had among CEA devs today laughing at this Stack Overflow thread: What is the best comment in source code you have ever encountered?

//
// Dear maintainer:
//
// Once you are done trying to 'optimize' this routine,
// and have realized what a terrible mistake that was,
// please increment the following counter as a warning
// to the next guy:
//
// total_hours_wasted_here = 42
// 
• There are a lot of great resources for grantees that people posted in the responses to my question a few months ago, “Grantees: how do you structure your finances & career?” (The answers with the most detail are about taxation and other expenses-related stuff. I am still curious to hear more about how being an EA grantee intersects with long-term career planning and other life considerations about things like family, location, etc.)

• (writing personally here, not for any organization)

Some good questions raised here!

I know that EA organizations have thought about this, for example when Giving What We Can was working on the dashboard that would show projected lifetime earnings and donations. I think it’s really hard to provide a broad overview of differences that doesn’t sound insulting or over-generalizing.

So much of the wage gap by gender reflects time away from work after having children, and individuals have significant choice over that. For example, I took a lot more leave (from a non-EA job I disliked) with my first child than I did with my second two from my EA job. In the US it’s common to take more like 10 weeks of leave, so the post’s use of a year of maternity leave as the default is pretty different from my experience.

(But I recognize that a lot of choices about how much time to take depends on income, childcare options, the children’s needs, and how flexible the parents’ work is. I don’t want to frame it all as completely up to personal preference. For example, my partner and I found ourselves unexpectedly without childcare when our eight-month-old refused to eat at daycare.)

On the informal side, there are groups for women and nonbinary people in EA and parents (and people considering parenting) in EA.

Some non-EA resources that I’ve found at least somewhat helpful:

I Know How She Does It by Laura Vanderkam—based on working mothers’ time logs, showing how they actually use their time. Both the book and her podcast are definitely aimed at high earners/high spenders, which I found frustrating at times. But they’re useful for taking your time seriously. I often find time management advice by non-parents irrelevant, but Vanderkam is a parent of five.

ParentData blog and books by Emily Oster—less about career, but an economist-y take on a bunch of parenting questions.

Selfish Reasons to Have More Kids—it may help you worry less about things you can’t change, but I find there’s still plenty to worry about.

Sheryl Sandberg’s Lean In is in this genre, but honestly I don’t remember any takeaways.

And some pieces by EAs about parenting:

Parenting: things I wish I could tell my past self, Michelle Hutchinson

How to be productive before your baby turns one, Ruth Grace

My experience returning to work after having a baby, Rose Hadshar

Equal parenting advice for dads, Jeff Kaufman (plus lots of his other parenting posts)

Parenting and effective altruism, Bernadette Young

How much will pregnancy affect your health and work? me

• Thanks for sharing, Julia_Wise. I can definitely imagine how hard it is to provide a broad overview of differences that doesn’t sound insulting or over-generalizing, and it’s good to know that EA organizations have thought about this. It’s really helpful to think about the non-EA resources too—I’ll take a look!

• What does 50 FTE mean? Do you mean 50% full-time equivalent?

• AI operates in the single-minded pursuit of a goal that humans provide it. This goal is specified in something called the reward function.

It turns out the problem is a lot worse than this—even if we knew of a safe goal to give AI, we would have no idea how to build an AI that pursues that goal!

See this post for more detail. Another way of saying this using the inner/outer alignment framework: reward is the outer optimization target, but this does not automatically induce inner optimization in the same direction.

• 7 Oct 2022 19:24 UTC
4 points

I don’t think it’s a good plan to build an AI that enacts some pivotal act ensuring that nobody ever builds a misaligned AGI. See Critch here and here. When I think about building AI that is safe, I think about multiple layers of safety including monitoring, robustness, alignment, and deployment. Safety is not a single system that doesn’t destroy the world; it’s an ongoing process that prevents bad outcomes. See Hendrycks here and here.

• Now I desperately want to know: Are EA jobs as selective as EAG?

If EA(G) rejects a doctor who tries really really hard to attend (and pay for) a conference, I wonder whether the Rest-of-Us™ are wasting our time by applying for EA jobs/grants.

For now, I have updated towards believing that most EA opportunities (e.g., 80K job board postings) aren’t accessible to me. Sad as that is, I now have some explanation for why I have been rejected for jobs that I’m plenty qualified for on paper. The competition must be substantially stiffer than I thought.[1] Good thing I didn’t try to apply for start-up grants because those applications are a much larger time investment.[2]

Constance: Yesterday I dug myself a very deep hole of discouragement, and this post helped me climb out. Thank you!

1. ↩︎

therefore, the bottleneck cannot be “talent gap”!

2. ↩︎

having been a manager and done a start-up, I sometimes wondered if I should have tried

• Your question and concerns are really valid.

I think what is missing in this post and the comments is content about individual differences that are hard to observe and deeply private, which have been used to determine admission. Another issue is giving detail about admissions answers.

If this discussion were more candid and complete, that would reduce concerns like yours.

People are unwilling to provide the above, because both would strongly run counter to the narrative of the post (and the highly upvoted SA post) and would involve public criticism of a person.

Also, a more systemic explanation of admissions would need to describe or imply the quality curve of the supply of candidates (which no one is completely sure of), and that is bad optics, implies eliteness, and makes everyone rejected feel bad.

• 7 Oct 2022 18:59 UTC
4 points
0 ∶ 0

Furthermore, the quality distribution of jammie dodgers is arguably fat-tailed.[1] If by many examples you’ve trained your intuition about what “good cookies” look like, you’re most likely still sampling near the median part of the distribution. The very best might be very different. What you naively perceive as “lumpy”—a trait you rarely see in “good cookies” so you instead grab another one—might in fact be part of the unusual character that takes it into the very best category.[2] After all, you should expect the extreme outliers to be different in some unusual way compared to the merely good outliers you’ve trained your intuitions on. I always eat the ones that don’t fit in.

1. ^

Although more realistically the distribution has several peaks due to recipe variation and baker idiosyncrasies.

2. ^

Sensitivity over specificity for non-poisonous cookie-distributions! Not only because, as you say, flaws are easier to notice than hitherto-unknowable outlier winning-traits, but also because flaws are less consequential in lower-bounded distributions.

• Students for High-Impact Charity shut down based on their results:

• Within a year of instructor-led workshops, we presented 106 workshops, reaching 2,580 participants at 40 (mostly high school) institutions. We experienced strong student engagement and encouraging feedback from both teachers and students. However, we struggled in getting students to opt into advanced programming, which was our behavioral proxy for further engagement.

• Ajeya posted an update to her AI timelines report:

My personal timelines have also gotten considerably shorter over this period. I now expect something roughly like this:

• ~15% probability by 2030 (a decrease of ~6 years from 2036).

• ~35% probability by 2036 (a ~3x likelihood ratio[3] vs 15%).

• This implies that each year in the 6 year period from 2030 to 2036 has an average of over 3% probability of TAI occurring in that particular year (smaller earlier and larger later).

• A median of ~2040 (a decrease of ~10 years from 2050).

• This implies that each year in the 4 year period from 2036 to 2040 has an average of almost 4% probability of TAI.

• ~60% probability by 2050 (a ~1.5x likelihood ratio vs 50%).
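The implied per-year rates in the quoted bullets can be checked with quick arithmetic (a sketch of my own, using only the probabilities quoted above):

```python
# Average per-year probability of TAI implied by the quoted cumulative
# forecasts: ~15% by 2030, ~35% by 2036, median (50%) at ~2040.
p_2030, p_2036, p_median = 0.15, 0.35, 0.50

per_year_2030_2036 = (p_2036 - p_2030) / (2036 - 2030)
per_year_2036_2040 = (p_median - p_2036) / (2040 - 2036)

print(round(per_year_2030_2036, 4))  # 0.0333 -> "over 3%" per year
print(round(per_year_2036_2040, 4))  # 0.0375 -> "almost 4%" per year
```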

• Joe Carlsmith’s report on power seeking AI says this (emphasis added):

I assign rough subjective credences to the premises in this argument, and I end up with an overall estimate of ~5% that an existential catastrophe of this kind will occur by 2070. (May 2022 update: since making this report public in April 2021, my estimate here has gone up, and is now at >10%.)

• [ ]
[deleted]
• Hi David! I apologize for the very slow response. A few points:
- Your analysis makes me upgrade how important I think diligent time tracking is on this project in future years, segmented by e.g., ‘managerial and tech time’ vs ‘volunteer/​student time’
- I don’t have a go-to answer for you on the time costs for EA GT 2021. We had 2 Ops Specialists (Aisha and Mac) each work ~200 paid hours; I worked about 350 paid hours (including hiring and training); Avi worked probably a few hundred volunteer hours (including hiring and training); Gina and a few others worked a small amount of volunteer hours.
- Can the project’s time costs decrease via “learn by doing?” I am somewhat optimistic about this. But it’s tricky because historically, new people have had to be trained on the systems and context every year. So processes can be improved, but a big thing is getting the same people to contribute to the project year after year. And this is tough, because it’s uncertain the project will run any given year, and it’s only seasonal. Ideally, the “institutional knowledge” would sit at an EA org (ideally, with the same people) over the long term.
- Thanks again for your BOTEC, I enjoyed reading it and I imagine it has helped folks in the community evaluate the project's value.

• [ ]
[deleted]
• I appreciate that—thanks! I have worked a lot on it. A lot of the credit goes to my great EA GT teammates, in present and past years.

• Thank you for all your amazing work in connecting people to make the world a better place. I have some feedback on the following:

“We expect applications for the other conferences to open approximately 2 months before the event”

After the pandemic, visa processing has slowed down quite significantly. The "Priority Service" option for UK visas wasn't available when I applied for a visa for EAGx Oxford in 2021. A friend of mine waited 6 weeks to receive his UK visa, which is not uncommon. Similarly, the earliest tourist visa appointment I can get from the US embassy is in 2024. If applicants from countries with "weaker" passports were allowed to apply earlier, that would help with accessibility and inclusivity.

• I think my polls create value, but not this much. Writing polls on the forum is absolutely overpowered in terms of gaining karma.

• If you type "#" followed by the title of a post and press enter, it will link that post.

Example:
Examples of Successful Selective Disclosure in the Life Sciences

This is wild

• OMG

• A hot take I heard recently that I like but am not yet sure I buy is that asking for more rather than less can make you feel more like an employee and less like a volunteer, and therefore more likely to actually take your work seriously.

• [ ]
[deleted]
• Interesting, I hadn't thought of the anchoring effect you mention. One way to test this might be to poll the same audience about other, more outlandish claims, something like the probability of x-risk from alien invasion, or CERN accidentally creating a black hole.

• This comment might seem somewhat tangential but my main point is that the problem you are trying to solve is unsolvable and we might be better off reframing the question/​solution.

My views

(1) anti realism is true

• Every train/​stop is equally crazy/​arbitrary.

(2) The EA community has a very nebulous relationship with meta-ethics

• obligatory the community is not a monolith

• I see lots of people who aren’t anti-realists, lots who are

• Community has no political system, so logic/​persuasion is often the only way to push many things forward (unless you control resources)

• if anti-realism is true there is no logical way to pick a worldview

• (if anti-realism is true) most of this discussion is just looping and/​or trying to persuade other EAs. I say that as someone who likes to loop about this stuff.

• most of the power is in the hands of the high status early joiners (we constantly reference Karnofsky or Macaskill as if they have some special insight on meta-ethics) and rich people who join the community (give money to whatever supports their worldview).

(3) I think the community should explicitly renounce its relationship with utilitarianism or any other ethical worldview.

• Let subgroups pop up that explicitly state their worldview, and the political system they will use to try to get there. e.g. utilitarianism + democracy, welfarism + dictatorship, etc.

• You can reflect upon your moral views and divvy up your funds/time to the groups accordingly.

• These groups will become the primary retro funders for impact markets, since their goals are clear, and they will have the institutions to decide how they measure impact.

• They would also just be the main funders, but I wanted to emphasize that this will have good synergy with higher-tech philanthropy.

• I feel that this is much more transparent, honest, and clear. This is personally important to me.

• We can stop arguing about this sort of stuff and let the money talk.

• EA as a “public forum”, not an agent with power

• I deleted my first comment to post a better one.

Some of my comment is a bit long and may not be 100% directed at you, sorry for that. I still thought it was relevant enough to mention.

I am generally in favour of honesty and openness about one’s views, including moral views. I’m also generally in favour of being open about power dynamics, such as the fact that two people (SBF and Moskowitz) control almost all EA funding. I also think moral antirealism is true in a strong sense with high probability.

However, some things I wanted to point out:

1. Moral antirealism doesn't disallow progress on political questions in general. (Like, which system of government is better, or whether you should fund Republicans or Democrats.) Sometimes disagreement on political questions is due to both persons correctly deducing which moral values lead to which political opinion. But sometimes it's people deducing incorrectly, and one can come to agree when one's chain of reasoning is corrected.

2. Even if you think moral anti-realism is true, good luck convincing everyone else in EA of that, except through even more debate. Often the best you’ll be able to convince people of is that the probabilities of them eventually converging to each other’s opinions is lower than they thought. (And that maybe EV calculus or some other moral calculus now implies they should devote less resources to debating other EAs than they currently do.)

3. One of the EA community's strengths is that money is not the only (and arguably not even the primary) source of power people wield within it. a) Many EAs have inside views and are willing to fight for what they think is right, even if this is at cross-purposes with what a donor is willing to pay them for. (I hope) People in EA will often go to the point of earning and funding themselves to do work based on their inside views, instead of working on agendas they don't believe in (or believe are suboptimal) for the sake of money. b) This applies not just to lone agents and circles within EA but also to circles outside EA that EA is trying to influence, which cannot be bought with money. For instance, nuclear strategists or future AGI labs are not going to listen to whichever EAs have more money; they're going to listen to whoever they think has better ideas or social status or networks and so on.

4. I think the previous point (money possibly not being the primary source of power in EA) is a good thing, and one should be careful before trying to increase the power that money wields in EA relative to other, non-money sources of power.

5. People with disagreements spending money on differing views is often suboptimal. Often this turns into a negative sum money-burning contest like funding democrats versus republicans. (Negative-sum because of harms such as worsened public epistemics—it may be better if both sides dumped their money in a volcano instead.) We can do even better though, such as moral trades. You can also make the case that any debate reduces the probability of a money-burning race to the bottom. Even if neither side is objectively correct, people converging to any side at all may often be better than burning resources (time, money, ideas, power) engaging in conflict against the other side.

• Belated comment to express my excitement about this after logging into the new fund dashboard

• David Nash and I are organising a lightning talk session where we hope to feature presentations of several new cause ideas from the contest. If you submitted an entry and happen to be in London on 25th October, we’d love you to come along to the event with a short presentation. If you’re interested, please get in touch. Here are more details about the event.

• The rhetoric-to-substantive-argument ratio of that review is incredibly high, though the idea that space expansion poses a catastrophic risk from people hurling weaponized asteroids around is very interesting, and it’s a pity it’s buried in something so irritating.

• Here is the section that ends with the idea of weaponized asteroids:

On Earth, speciation can occur when spatially expanding populations become geographically isolated from one another (the unique diversity of species on the Galapagos Islands is the paradigm example of this phenomenon). Because of the vast distances between planets it is likely that similar fragmentation would eventually occur in outer space. Along with the interplanetary spread of cyborgs and artificial intelligences, the result, centuries after a viable Mars colony is established, will probably be a plethora of intelligent species, all of which will have evolved to fit their distinctive ecological constraints—an archipelago of politically distinct worlds. The idea is common in sci-fi, from Vernor Vinge's 1993 A Fire Upon the Deep to Mark Fergus and Hawk Ostby's The Expanse, both Hugo Award winners.

Multi-world pluralism can look attractive if we assume that everyone will get along, regardless of the profound morphological, technological, and ideological differences that are bound to grow up around and between these groups. Here, however, the space expansionist invocation of natural selection bites back. Radiating and diversifying species notoriously compete with one another for available space and its resources or, in the case of intelligent species, just for glory and prestige. This is all familiar enough from the history of life on Earth, as is the mostly sorry result of the human interaction with other species as well as earlier human groups. We exterminated the Neanderthals (after breeding with them for a while) and are, according to the United Nations, currently in the process of eviscerating the non-human biosphere.

Doesn’t it seem likely that our deep-space descendants will inherit these destructive tendencies and turn them on each other? A space archipelago will be composed of mutually suspicious and competitive groups, millions of them eventually. But the bonds of sameness that can foster respectful recognition or mutual forbearance will surely diminish with increased interplanetary spatial dispersion and the ordinary workings of evolution. Not that we should expect a space-based Hobbesian war of all against all. There will doubtless be a good deal of room for interspecies and interworld diplomacy in this scenario. However, in the absence of a pacific trans-planetary government—and given our inability to create a single world government here, the chances of that seem slim—opportunities for plunder and general mayhem will likely abound. The temptation to cast the interplanetary Other as subhuman will be pronounced. Remember that the intelligent aliens in Starship Troopers are “bugs,” and in Battlestar Galactica they are “toasters.” Even in our fiction, it seems, we have a difficult time imagining what peaceful co-existence among wildly disparate beings might look like.

Because of this, all minimally viable colonies will have compelling reasons of state to stockpile awesome weapons of mass destruction: not only hydrogen bombs, but more importantly the ability to convert asteroids into planetoid bombs. Somehow this possibility—potentially genocidal or xenocidal wars of worlds—seems not to matter much to space expansionists, even though it’s standard fare in the well of sci-fi from which many of them have drunk so deeply.

• Methylphenidate has generally negative effects on me (makes it more difficult to think; I get tired easily; unmotivated).
Dextroamphetamine has negative effects on me, similar to methylphenidate.

Vyvanse or Adderall XR radically improves several aspects of my life; I become 2-3x more productive, a lot of friction to productivity is removed; I enjoy my work more; and I can also better prioritize what I should be working on. Beyond that, I also get way less anxiety and it removes my tendency to validation-seek almost entirely.

I take 20mg Vyvanse. If I take too much, I instead become over-focused on smaller tasks and miss the big picture of what's important. A friend described it this way with too much Modafinil:

"YES! narrowing of focus. Normally I set up a bunch of threads, and I keep them alive in my mind so I can zoom out and think of the big picture and then zoom back in again. But on modafinil it was hard to keep the threads alive."

I have not yet tried Wellbutrin or Guanfacine, but would be interested. For people in the UK, Wellbutrin can be obtained online via a teledoctor service, so it seems low-friction.

@Nathan I also think it would be useful to write a short document on the procedure for getting an ADHD diagnosis in various countries (US, UK, France), and how to accelerate it. E.g., in the UK, private psychiatry is significantly faster if funds are available. I will write this up at some point soon, and I think Offroad also wants to do work on executive functioning and guides to getting diagnosed.

• “Various theories of moral uncertainty exist, outlining how this aggregation works; but none of them actually escape the issue. The theories of moral uncertainty that Effective Altruists rely on are themselves frameworks for commensurating values and systematically ranking options, and (as such) they are also vulnerable to ‘value dictatorship’, where after some point the choices recommended by utilitarianism come to swamp the recommendations of other theories. In the literature, this phenomenon is well-known as ‘fanaticism’.[10]”

This seems too strong. Doesn’t this only apply to maximizing expected choiceworthiness with intertheoretic comparisons, among popular approaches? My impression is that none of the other popular approaches are fanatical. You mention moral parliament in the footnote as a non-fanatical exception, but there’s also MEC variance voting, formal bargain-theoretic approaches and my favourite theory.

Also, theories with value lexicality can swamp utilitarianism, although this also seems fanatical.

• Good post, thank you.

”or other such nonsense that advocates never taking on risks even when the benefits clearly dominate”

An important point to note here—the people who suffer the risks and the people who reap the benefits are very rarely the same group. Deciding to use an unsafe AI system (whether presently or in the far future) using a risks/​benefits analysis goes wrong so often because one man’s risk is another’s benefit.

Example: The risk of lung damage from traditional coal mining compared to the industrial value of the coal is a very different risk/​reward analysis for the miner and the mine owner. Same with AI.

• Tyler Cowen’s standard generic advice is “find excellent peers and mentors”. I agree with that as one of the most valuable things to do, whether teenager or otherwise.

• Was just about to post this

I also like Scott Adams’s list of generic skills that “make you luckier” if you’re good at most of them:

Public speaking
Psychology
Business Writing
Accounting
Design (the basics)
Conversation
Overcoming Shyness
Second language
Golf
Proper grammar
Persuasion
Technology ( hobby level)
Proper voice technique

(though some—golf stands out—are kind of idiosyncratic)

• Invitation-only link post is an interesting format.

A couple of things I can imagine being more likely to write if I’m permitted to do the same.

Suggestion: tell people what to say about themselves in the “Request access” box to help you decide whether to grant access.

• Thomas Hurka’s St Petersburg Paradox: Suppose you are offered a deal—you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence. If you want to maximise total expected utility, you ought to press the button—pressing the button has positive expected value. But the problem comes when you are asked whether you want to press the button again and again and again—at each point, the person trying to maximise expected utility ought to agree to press the button, but of course, eventually they will destroy everything.[2]

I have two gripes with this thought experiment. First, time is not modelled. Second, it's left implicit why we should feel uneasy about the thought experiment, and that doesn't work given how variable philosophical intuitions are. I honestly don't feel uneasy about the thought experiment at all (only slightly annoyed). But maybe I would have, had it been completely specified.

I can see two ways to add a time dimension to the problem. First, you could let all the presses be predetermined and happen in one go, where we get into Satan's apple territory. Second, you could have a 30-second pause between presses. But in that case, we would accumulate massive amounts of utility in a very short time—just the seconds in between presses would be invaluable! And who cares if the world ends in five minutes with some probability, when every second it survives is so sweet? :p
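To make the tension in the thought experiment concrete (a sketch of my own, not part of the original comment): each press is positive in expectation, yet the chance of surviving repeated presses shrinks geometrically.

```python
# Each press: 51% chance total utility doubles, 49% chance everything
# is destroyed (utility goes to zero permanently).
p_win = 0.51

# A single press is positive expected value: 0.51 * 2 = 1.02 > 1 ...
ev_multiplier = p_win * 2

# ... but surviving n consecutive presses has probability 0.51**n.
def survival_probability(n: int) -> float:
    return p_win ** n

print(ev_multiplier)             # 1.02
print(survival_probability(20))  # ~1.4e-06: near-certain destruction
```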

• 7 Oct 2022 14:21 UTC
1 point
0 ∶ 0

How do you operationalize who counts as a member of the EA community? Is it based on self-identity?

• I’m going to bite the bullet of absurdity, and say this already happened.

Imagine a noble or priest from 500-1000 years ago trying to understand our Western society; they would likely find it absurd as well. Some norms have survived primarily due to the human baseline not changing through genetic engineering, but overall it would be weird and worrying to them.

For example, the idea that people are relatively equal would be absurd to a medieval noble, let alone our tolerance for outgroups/​dissent.

The idea that religion isn’t an all-powerful panacea or even optional would be absurd to a priest.

The idea that positive-sum trades are the majority, rather than zero- or negative-sum trades, would again be absurd to the noble.

Science would be worrying to the noble.

And much more. In general I think people underestimate just how absurd things can get, so I’m not surprised.

• 7 Oct 2022 14:17 UTC
3 points
0 ∶ 0

More of a methodological question, but as I know you are doing quite rigorous analysis in idea selection I would be very curious to learn how you model the multiplier from policy advocacy vis-à-vis more direct forms of work.

• Want to listen to the original paper? You can find this, and 26 other academic papers by Nick Bostrom, at radiobostrom.com.

https://radiobostrom.com/2/sharing-the-world-with-digital-minds

• I think the Train to Crazytown is a result of mistaken utilitarian calculations, not an intrinsic flaw in utilitarianism. If we can’t help but make such mistakes, then perhaps utilitarianism would insist we take that risk into account when deciding whether or not to follow through on such calculations.

Take the St. Petersburg Paradox. A one-off button push has positive expected utility. But no rational gambler would take such a series of bets, even if they’re entirely motivated by money.

The Kelly criterion gives us the theoretically optimal bet size for an even money bet, which the St. Petersburg Paradox invokes (but for EV instead of money).

The Paradox proposes sizing the bet at 100% of bankroll. So to compute the proportion of the bet we'd have to win to make this an optimal bet, we plug it into the Kelly criterion and solve for B.

1 = .51 - .49/​B

This gives B = −1, less than zero, so the Kelly criterion recommends not taking the bet.
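A small sketch of the arithmetic (my own illustration; the standard Kelly formula for a bet paying b-to-1 with win probability p is f* = p - (1 - p)/b):

```python
# Kelly-optimal fraction of bankroll to stake on a bet that pays b-to-1
# and wins with probability p: f* = p - (1 - p) / b.
def kelly_fraction(p: float, b: float = 1.0) -> float:
    return p - (1 - p) / b

# The button is (roughly) an even-money bet at p = 0.51, so Kelly says
# to stake only 2% of bankroll, nowhere near the 100% the paradox demands.
print(round(kelly_fraction(0.51), 2))  # 0.02
```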

• The implicit utility function in Kelly (log of bankroll) amounts to rejecting additive aggregation/​utilitarianism. That would be saying that doubling goodness from 100 to 200 would be of the same decision value as doubling from 100 billion to 200 billion, even though in the latter case the benefit conferred is a billion times greater.

It also absurdly says that loss goes to infinity as you go to zero. So it will reject any finite benefit of any kind to prevent even an infinitesimal chance of going to zero. If you say that the world ending has infinite disutility then of course you won’t press a button with any chance of the end of the world, but you’ll also sacrifice everything else to increment that probability downward, e.g. taking away almost everything good about the world for the last tiny slice of probability.

• That’s a helpful correction/​clarification, thank you!

I suppose this is why it’s important to be cautious about overapplying a particular utilitarian calculation—you (or in this case, I) might be wrong in how you’re going about it, even though the right ultimate conclusion is justified on the basis of a correct utilitarian calculus.

• I don’t understand the relevance of the Kelly criterion. The wikipedia page for the Kelly criterion states that “[t]he Kelly bet size is found by maximizing the expected value of the logarithm of wealth,” but that’s not relevant here, is it?

• A very interesting read. I didn’t know much about BSD.

I was wondering if you had a spreadsheet or something equivalent where you put all the numbers you mention together? Prevalence, loss due to misdiagnoses, false positives, costs, etc. It could help with understanding the predicted cost-effectiveness of the different actions you mention. Apologies if I missed it in the core text.

• Thank you! I have some more detailed notes on the calculations I made, and the resulting numbers are in the post, but creating a proper spreadsheet with it is a good idea! I’ll do it on Monday and update the post.

• 7 Oct 2022 13:53 UTC
4 points
0 ∶ 0

Related: Risks of space colonization (Kovic, 2020).

• Thank you for mentioning the work of Animal Empathy Philippines and EA Philippines in the write-up! We definitely support this program, promote it locally, and nudge individuals to apply!

• Thank you for the post which I really liked! Just two short comments:

1. It is not clear to me why the problems of utilitarianism should inevitably lead to a form of fanaticism under promising frameworks for moral uncertainty. At least this seems not to follow on the account of moral uncertainty of MacAskill, Ord and Bykvist (2020), which is arguably the most popular one, for at least two reasons: a) Once the relevant credence distribution includes ethical theories which are not intertheoretically comparable or are merely ordinal-scale, then theories in which one has small credences (including totalist utilitarianism) won't always dictate how to act. b) Some other ethical theories, e.g. Kantian theories which unconditionally forbid killing, seem (similar to totalist utilitarianism) to place extremely high (dis)value on certain actions.

2. It would be interesting to think about how distinctions between different versions of utilitarianism would factor into your argument. In particular, you could be an objective utilitarian (who thinks that the de facto moral worth of an action is completely determined by its de facto consequences for total wellbeing) without believing (a) that expected value theory is the correct account of decision making under uncertainty or (b) that the best method in practice for maximizing wellbeing is to frequently explicitly calculate expected value. Version (b) would be so-called multi-level utilitarianism.

• 7 Oct 2022 12:52 UTC
3 points
0 ∶ 0

There is an old post quite similar to my question:

• [ ]
[deleted]
• In our case we’re planning an open graph architecture, with users able to fork/​spawn subnets.

• [ ]
[deleted]
• We’re planning to test these kind of voting in our governance system.

• [ ]
[deleted]
• It's funny, I came up with the same idea and implemented it in the social network I'm building: https://nmkbs-aaaaa-aaaam-aadfa-cai.ic0.app/post/740#main.

I’m also planning to make it EA aligned at the core.

• Hell yeah! Glad to see experiments with this kind of thing. I think one of the potential routes to impact here is (of course) becoming a kind of rationalist-twitter for EA discussion. Another path to impact worth keeping in mind is just that, by testing and proving out a bunch of innovations, your project could help influence actual Twitter (and other social media sites) to adopt similar features.

• Nice! As a tweak, maybe only hide a few numbers on a page, rather than all of them?

• “Calibr█”

• A problem for 80,000 Hours' jobs board and marketing departments (from my outside perspective): if 80K's marketing brings new people to the website, and if the jobs board doesn't distinguish career-capital roles from impact roles, then 80K funnels people with little exposure to EA ideas straight into career-capital roles (which those people mistake for impactful roles) and loses contact with them thereafter. The overall effect would be ~no direct impact, while possibly preventing a more-engaged person from accessing the career capital.

• Hey Peter, I really liked your post and have similar thoughts. I would love to have a chat and talk through some of your arguments more deeply. Some thoughts (I am in a hurry, thus rather short):

I think it is essential to consider the perception of EA (e.g. reviewers of WWOTF rejecting EA because of crazy town) and EA itself (e.g. we might be missing something in our reasoning, community building, mainstream adoption, …) separately. I would argue that the case for the former is a bit stronger (people think we are crazy, and thus don't interact with the community, which also has consequences for community building), but that the latter concern is far more important given its potential impact on the movement. I would love to see more thoughts on these issues.

• Potential way to feel better about fear of EA criticism:

Fear of criticism is indeed a huge source of this problem. One thing I’ve found recently that’s improved my ability to weather EA criticism online is to think of the comment section not as a comment section, but as the debate section.

Then it’s not that the post is being criticised, but that it’s being debated, which is something I enjoy and appreciate.

Maybe this framing will help others, but open for debate on it :)

• Admittedly I’m not a teen but I don’t seem to be able to access the doc.

• 7 Oct 2022 9:44 UTC
14 points
3 ∶ 1

I couldn’t agree more with this post. I’ve been referring to it in my circles as the “risks of inaction” and “leaving impact on the table”, if any of those terms resonate more with people.

Will MacAskill also once mentioned in a post the "bureaucrat's curse", which I love. It's the inverse of the unilateralist's curse: if just one person doesn't like an idea, it gets killed.

I see this everywhere, especially in longtermism. The fear of accidentally making things worse (which is a warranted fear!), overshadows the fear of accidentally moving too slowly.

If you're on a bus hurtling towards a cliff, instinctively acting in a panic can make things worse, but moving too slowly or not at all also has high downsides.

• Thank you!

I also really like the phrase bureaucrat’s curse. Here’s the relevant passage (in this post):

As well as the unilateralist's curse (where the most optimistic decision-maker determines what happens), there's a risk of falling into what we could call the bureaucrat's curse,[10] where everyone has a veto over the actions of others; in such a situation, if everyone follows their own best-guesses, then the most pessimistic decision-maker determines what happens. I've certainly seen something closer to the bureaucrat's curse in play: if you're getting feedback on your plans, and one person voices strong objections, it feels irresponsible to go ahead anyway, even in cases where you should. At its worst, I've seen the idea of unilateralism taken as a reason against competition within the EA ecosystem, as if all EA organisations should be monopolies.

(In a comment, Linch points out that this is a special case of the unilateralist’s curse.) I also really like the suggestions below the cited passage — on what we need to do or keep doing to manage risks properly:

• Stay in constant communication about our plans with others, inside and outside of the EA community, who have similar aims to do the most good they can

• Remember that, in the standard solution to the unilateralist’s dilemma, it’s the median view that’s right (rather than the most optimistic or most pessimistic view)

• Are highly willing to course-correct in response to feedback

(In writing, I think there’s something somewhat related to the bureaucrat’s curse, which is writing-by-committee, or what Stephen Clare called “death by feedback”.)

• 7 Oct 2022 9:22 UTC
1 point
0 ∶ 0

This is great content! Thank you for writing and sharing it. I added the tag “High Impact Psychology” because this is an example of how psychology can help to increase EA impact.

• I like the term AGI x-safety, to get across the fact that you are talking about safety from existential (x) catastrophe, and sophisticated AI. “AI Safety” can be conflated with more mundane risks from AI (e.g. isolated incidents with robots, self-driving car crashes etc). And “AI Alignment” is only part of the problem. Governance is also required to implement aligned AI and prevent unaligned AI.

• Perhaps ASI x-safety would be even better though (the SI being SuperIntelligent), if people are thinking “we win if we can build a useless-but-safe AGI”.

• I’d guess not. From my perspective, humanity’s bottleneck is almost entirely that we’re clueless about alignment. If a meme adds muddle and misunderstanding, then it will be harder to get a critical mass of researchers who are extremely reasonable about alignment, and therefore harder to solve the problem.

It’s hard for muddle and misinformation to spread in exactly the right way to offset those costs; and attempting to strategically sow misinformation will tend to erode our ability to think well and to trust each other.

• Following Eliezer, I think of an AGI as “safe” if deploying it carries no more than a 50% chance of killing more than a billion people

Is this 50% from the point of view of some hypothetical person who knows as much as is practical about this AGI’s consequences, or from your point of view, or something else?

Do you imagine that deploying two such AGIs in parallel universes with some minor random differences has only a 25% chance of them both killing more than a billion people?

• I forgot to add a very important feature. I don’t think income doubling is a helpful conversion in this case. In fact, I think it’s quite misleading.

Increasing growth by one percent for 71 years costs a lot more than doubling one’s income for a year. In the 71st year, the cost is the same, but there are the preceding transfers that were also necessary to increase subjective well-being. In year 70, the cost was nearly a doubling of income.

I estimate there are nearly 32 annual income doublings when you sum them across the 71 years. You get a portion of an income doubling every year, which can be calculated as (1.03/​1.02)^n − 1. In the first year (n=1) it costs you 0.01 of an income doubling. In year 70, it costs 0.98 of a doubling. Summing them from n = 1 to n=71 you get 31.9 doublings.

Your estimated impact should then be the coefficient * 71/32. If we use .002 as the coefficient on growth, that equals .004 - hardly worth it. The same is true for the larger estimated coefficients. Alternatively, you could look at my calculation to see when the costs add up to the first cumulative annual income doubling. I estimate this happens between year 13 and 14. Let’s say 13.5; then 13.5 years * .002 = .027. You get 0.027 subjective well-being points for a cumulative doubling of annual income in 13.5 years. Again, I think this is not worth it.
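The arithmetic above can be sanity-checked with a short script. This is my own sketch of the commenter's calculation, not code from the post:

```python
# Cost (in "income doublings") of 3% rather than 2% growth, in year n:
# the income ratio between the two paths is (1.03/1.02)**n, so the extra
# transfer needed that year is (1.03/1.02)**n - 1 of a doubling.
r = 1.03 / 1.02
costs = [r**n - 1 for n in range(1, 72)]  # years 1..71

total = sum(costs)  # cumulative doublings over the 71 years

# First year in which the cumulative cost exceeds one full doubling:
cum, first_full = 0.0, None
for year, c in enumerate(costs, start=1):
    cum += c
    if first_full is None and cum >= 1:
        first_full = year

print(round(total, 1), first_full)  # 31.9 14
```

This reproduces the ~31.9 cumulative doublings, with the one-full-doubling mark crossed between years 13 and 14.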

Essentially, you need to consider the time involved to double income, for the reasons mentioned in my earlier post, and because it simply costs more. I wouldn’t compare these results with those of interventions from cross-sectional studies.

• I think it’s worth mentioning that it isn’t obvious you should use linear EV maximisation when probabilities are very low (say, less than 1%).

When probabilities of events are high, EV maximisation makes sense because you only need a few events before the law of large numbers kicks in, and people not doing EV maximisation are much worse off than people doing it in nearly all worlds.

When probabilities of events are very low you need a much larger number of events before you see the same effect.
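As a rough illustration (my own sketch, not from the comment): for a bet that pays 1 with probability p and 0 otherwise, the relative standard error of the average payoff after n independent trials is sqrt((1-p)/(n*p)), so the number of trials needed before averages stabilize grows roughly like 1/p.

```python
def trials_needed(p, rel_err=0.1):
    """Trials needed for the sample mean of a Bernoulli(p) payoff to reach
    a relative standard error of rel_err: solve sqrt((1-p)/(n*p)) = rel_err."""
    return (1 - p) / (p * rel_err**2)

print(trials_needed(0.5))    # ~1e2 trials
print(trials_needed(0.01))   # ~1e4 trials
print(trials_needed(1e-4))   # ~1e6 trials
```

At p = 0.5 a hundred trials suffice; at one-in-ten-thousand odds you need on the order of a million, which is the sense in which the law of large numbers "kicks in" much later for low-probability events.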

• 7 Oct 2022 8:07 UTC
14 points
3 ∶ 0

With respect to Covid, I am pretty sure the EA community and related communities incurred more costs in avoiding the risk of dying of Covid than was warranted.

I think this is pretty true for the San Francisco-based and rationalist strands of the community. But in New York/London/Oxford, people were more balanced—being cautious for an initial several months, then reverting back to normal at a similar time to the rest of society.

• Thanks for sharing, and doing so in the most friendly manner, Madhav!
GREAT advice, relevant for everyone who enters a running organization.

• Echoing Nathan:

1. YIMBY land-use policy isn’t among the very top problems of the whole world—it isn’t as important as AI alignment, biorisk & nuclear war risk mitigation, global development, etc. That’s why it’s usually not considered a core EA cause area.

2. But, in many western first-world “superstar cities”, I believe that YIMBY/housing issues are indeed the #1 economic issue holding back growth in those cities. It’s not just young people complaining about first-time homebuying; it really is a massive economic distortion that causes many tragic problems and inefficiencies. So, when people living in cities like London apply an “EA mindset” to prioritize among local political issues instead of among all possible causes, they correctly realize that housing restrictions are a huge problem sabotaging their homeland.

I would guess (although I’m not sure) that in most LMICs, it would be rare for San Francisco-style land-use overregulation to be the #1 most important obstacle to development. Personally, I am both a big fan of EA and a big fan of YIMBYism. But if I lived in Turkey instead of California, I’d probably spend less time thinking about how to fix housing and more time thinking about how to fix inflation & bad monetary policy. Each nation is unique, so I think there is a huge amount of benefit from applying an “EA mindset” to the issues of your local politics (see for instance Zvi Mowshowitz’s vision of EA-inspired political analysis: https://thezvi.substack.com/p/announcing-balsa-research), even if those local issues aren’t as globally important as AI, nuclear war, climate change, etc.

(PS, on a separate note, I am totally enthusiastic about the potential for work-from-home to mitigate housing problems, by breaking the monopoly of the few top cities and promoting more “governance competition” as cities are forced to compete more to attract residents! Also excited about the potential for “virtual immigration” and people working remotely across national borders. I agree that more people should be excited about this; for more info you might enjoy my entry in a sci-fi worldbuilding contest where I talk more about these ideas: https://worldbuild.ai/w-0000000088/.)

• and I endorse researchers doing less capabilities work and publishing less, in the hope that this gives humanity enough time to figure out how to do alignment before it’s too late.

I strongly recommend against endorsing that. The lion’s share of the effect will most likely be Alignment-minded people surrendering their slots in the AI development genepool to a random person.

Furthermore, the more competence that is lost as a result, the more it will attract unwanted attention to the concept of alignment; particularly from powerful, dangerous people who were hired to watch out for potential threats to the stability and growth of their industry (no matter how strange or persuasive the threat is). These people definitely exist (based on the historical record for similar industries), and although the details about them are vague, they are definitely not very nice nor understanding.

• If an alignment-minded person is currently doing capabilities work under the assumption that they’d be replaced by an equally (or more) capable researcher less concerned about alignment, I think that’s badly mistaken. The number of people actually pushing the frontier forward is not all that large. Researchers at that level are not fungible; the differences between the first-best and second-best available candidates for roles like that are often quite large. The framing of an arms race is mistaken; the prize for “winning” is that you die sooner. Dying later is better. If you’re in a position like that I’d be happy to talk to you, or arrange for you to talk to another member of the Lightcone team.

I do not significantly credit the possibility that Google (or equivalent) will try to make life difficult for people who manage to successfully convince the marginal capabilities researcher to switch tracks, absent evidence. I agree that historical examples of vaguely similar things exist, but the ones I’m familiar with don’t seem analogous, and we do in fact have fairly strong evidence about the kinds of antics that various megacorps get up to, which seem to be strongly predicted by their internal culture.

• I’ve proposed a number of things to be tested, and would love to get feedback.
- Can we adapt Carl Rogers to real life? Inspire people to see ourselves in others (empathy), see the best in others (positive regard), and bring out the best in others.
- Can we use our complex buying behaviors to foster empathy and positive regard? We buy different things, but we often have similar coveting, budgeting, browsing, shopping, and hoarding experiences. Can knowledge and sophistication in buying widgets help people build empathy and positive regard for people who buy doodads?
- Can exposing people to different kinds of biases help them better understand different kinds of discrimination?
- Safe driving allows us to foster altruistic and cooperative behavior on a global scale. Can we expand it to general society? Teach everyone to listen (instead of yield), check their biases (instead of their blind spots), and reject Ideological Rage (instead of road rage).

More details here:
https://forum.effectivealtruism.org/posts/7srarHqktkHTBDYLq/bringing-out-the-best-in-humanity

• I enjoyed reading this—I spent 10 years active in Quakerism, including a college course on Quaker history and a year working at a Quaker retreat center. I also happened to write a post about EA and Quakerism recently.
I like the idea of drawing inspiration from historical Quakers. But I feel confused about what it would look like for EA to emulate peak Quakerism.

Some counter-points:
- Early (and peak) Quakers went down some weird ineffective paths. It’s cool that they were into nonviolence and class equality, but they were also really into renaming the days of the week and months of the year to avoid pagan names. Even at their peak, they were maybe most notable for dressing funny. Even one of the cofounders described the fastidiousness about clothing as “a silly poor gospel.”
- Quakerism doesn’t have heresy per se as a concept, but you could definitely get kicked out for doing it wrong. Perhaps most commonly in the past for marrying a non-Quaker, but also for extramarital sex or disagreeing about the nature of God.

Modern Quakerism isn’t very religious for a religion (at least in the UK and coastal US—there are evangelical branches elsewhere), but until the 20th century it really was. I don’t really understand what led this particular sect to hit on a bunch of social policies that look really good to modern views, but a ton of 17th century sects didn’t.

As for what made them successful, a couple of things come to mind
- inability to participate in the military or universities (because those required violence and swearing an oath of loyalty to the crown, respectively) meant that their talented people went into business. Similar to other religious minorities that sometimes do well after being cornered into one part of the economy.
- Businesses did well partly because they were so fastidious about honesty, so they had a reputation for fairness. I do think this is worth learning from—for a while one theory about what EA’s brand should be was “astonishingly rigorous”, and I think there’s something parallel here.

• I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts.

I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it’s helpful to have an image like this to come back to. I’m glad that guy made it. Hopefully we will too!

• 7 Oct 2022 1:51 UTC
8 points
1 ∶ 0

Random meta point: You can now crosspost posts to the EA Forum from LW and vice-versa, which automatically adds a link to the crosspost to the top, and adds a link to the comment section on the other side to the bottom of the comment section (together with a counter of the number of comments). Seems like this would have been a bit nicer for this case.

• 7 Oct 2022 0:56 UTC
3 points
0 ∶ 0

Better analytics for both authors and readers:

• Readers can highlight sections of an article. The forum might then show a “featured highlight” similarly to how this works in Medium.

• The forum can also measure how much screen time each paragraph in an article gets, and show this to users (a bit like a heatmap of where readers look). This could lead to improved writing, and incentivize shorter articles.

• An article’s engagement and read time can become factors used for ranking, as a complement to Karma.

• 7 Oct 2022 0:47 UTC
2 points
0 ∶ 0

In my previous job, we used the technique described below to prioritize feature requests and estimate their relative value. Feel free to skip this comment if you’re not interested in slightly related survey techniques.

• Show a random sample of five items to a survey participant

• Participant selects the most important and least important (leaving three items “somewhere in-between”)

• Repeat

Each iteration creates six links between items (A > B, A > C, A > D, B > E, C > E, D > E) plus, transitively, A > E. After enough iterations, a preference order can be established using something like the Schulze Method.

I’ve forgotten the name of this survey method, but found it quite neat. It is both easy to use for participants and yields rich information. I remember participants saying that it was “hard to cheat” in this type of survey, and so it might result in fewer inconsistencies than using the utility function extractor.
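For the curious, the link-expansion step is easy to sketch. This is my own illustration with hypothetical names, not code from the comment (the method resembles best-worst scaling, sometimes marketed as "MaxDiff"):

```python
def links_from_response(items, best, worst):
    """Expand one best/worst pick over a sampled item set into pairwise links.

    Returns (winner, loser) pairs: `best` beats everything else, everything
    else beats `worst`, and (best, worst) is covered directly.
    """
    links = set()
    for x in items:
        if x != best:
            links.add((best, x))   # best > x
        if x != worst:
            links.add((x, worst))  # x > worst
    return links

resp = links_from_response(["A", "B", "C", "D", "E"], best="A", worst="E")
print(sorted(resp))  # 7 links: A>B, A>C, A>D, A>E, B>E, C>E, D>E
```

After enough responses, these accumulated pairwise links form the preference matrix that a method like Schulze's can turn into an overall ordering.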

• Thank you for telling us about this! In economics, the discrete choice model is used to estimate a scale-free utility function in a similar way. It is used in health research for estimating QALYs, among other things; see e.g. this review paper.

But discrete choice / the Schulze method should probably not be used by themselves, as they cannot give us information about scale, only ordering. A possibility, which I find promising, is to combine the methods. Say that I have ten items I want you to rate. Then I can ask “Do you prefer [A] to [B]?” for some pairs and “How many times better is [A] than [B]?” for other pairs, hopefully in an optimal way. Then we would lessen the cognitive load of the study participants and make it easier to scale this kind of thing up.

(The cognitive load of using distributions is the main reason why I’m skeptical about having participants use them in place of point estimates when doing pairwise comparisons.)

• 7 Oct 2022 0:35 UTC
10 points
3 ∶ 5

While I agree with a lot in this post, I do want to push back on this reasoning:

About 500 more people who met the admissions bar would get to experience the conference. However much impact the default conference would produce — sparked collaborations, connections for years to come, inspirations for new projects, positive career changes — there would be about double that

I think an estimate of “double that” is pretty wrong. I think the first 500 people who would be admitted would of course be selected for getting value out of the conference, and I expect the value that different people gain to be heavy-tailed. It is hard to predict who exactly will get value out of a conference, but it wouldn’t surprise me if you get to a state where you capture 90% of the value by admitting the right 500 people.

On the other hand, I think a conference might produce value in the square of the number of participants, since people can self-sort, and meeting more people is more valuable than meeting fewer people.

I think in one line of reasoning you get something like “a conference twice the size would be maybe 10-20% more valuable” and in the other line of reasoning you get “a conference twice the size could be 4x as valuable”, but I don’t have any line of reasoning I endorse that outputs the 2x number.
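For what it's worth, the "square of the number of participants" intuition can be made concrete by counting possible one-on-one pairs (my own arithmetic, not from the comment):

```python
def possible_pairs(n):
    """Number of distinct one-on-one meetings among n attendees: n*(n-1)/2."""
    return n * (n - 1) // 2

print(possible_pairs(500))   # 124750
print(possible_pairs(1000))  # 499500, roughly 4x the 500-person figure
```

Of course, each attendee only takes a handful of these possible meetings, which is why the realized value depends so heavily on how well people self-sort.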

• Thanks for the pushback. I agree that a linear model will be importantly wrong, although if you approximate the impact from the conference using the number of connections people report and assume that stays roughly the same, it doesn’t seem wild as a first pass. (Please let me know if you disagree!)

[Half-formed thoughts below.]

On the other hand, I think 10-20% more valuable seems very off to me, especially in this case, given we were not “lowering the bar” for the second group of attendees. Setting this case aside, I can imagine a world in which someone is very confident in their ability to admit the people who will benefit the most from a conference (and the people who would be most useful for them to meet with), and in this world, you might be able to get 90% of the value with 50% of the size — but I don’t really think we’re in this world (especially in terms of identifying people who will benefit most from the event).

I’m not really sure how well people self-sort at conferences, which was a big uncertainty for me when I was thinking about these things more. I do think people will often identify (often with help) some of the people with whom it would be most useful to meet. If people are good at self-sorting (e.g. searching through swapcard and finding the most promising 10-15 meetings), and if those most-useful meetings over the whole conference aren’t somehow concentrated on meetings with a small number of nodes, then admitting double the people will likely lead to more than double the impact.[1] If people are not good at self-sorting, though, it seems more likely that we’d get closer to straightforward doubling, I think. (I’m fairly confident that people are better than random, though.)

1. ^

It does seem possible that there are some “nodes” in the network — at a very bad first pass, you could imagine that everyone’s most valuable meetings are with the speakers. The speakers each meet with lots of people (say, they have lots of time and don’t get tired) and would be at the conference in any world (doubling or not). Then the addition of 500 extra people doesn’t significantly improve the set of possible meetings for the 500 first attendees, although 500 extra people get to meet with the speakers (which is nearly all that matters in this model).

I’m really unsure about the extent to which the “nodes” thing is true (and if it’s true I don’t really think that “speakers” are the right group), but there’s something here that seems like it could be right given what we hear. There’s also the added nuance that some nodes are probably in the second group of 500, and also that the size and capacity for meetings of the “nodes” group would matter.

• Hmm, I do really think there is a very wrong intuition here. I think by-default in most situations the return to doubling a specific resource should be modeled as logarithmic (i.e. the first doubling is as valuable as the second doubling). I think in this model, it is very rare that doubling a thing along any specific dimension produces twice the value. I think the value of marginally more people in EA should likely also be modeled as having logarithmic returns (or I might argue worse than logarithmic returns, but I think logarithmic is the right prior).

I think you will get estimates wrong by many orders of magnitude if you do reasoning of the type “if I just double this resource that will double the whole value of the event”, unless you have a strong argument for network effects.

• I wonder how much of your intuition comes from thinking that the marginal (ex ante) impact of marginal EAG attendees is much lower than the existing average, vs. normal logarithmic-prior considerations, vs. diseases of scale (e.g. a higher population making things harder to coordinate, pressure towards conformity).

The first consideration is especially interesting to isolate, since:

I think the value of marginally more people in EA should likely also be modeled as having logarithmic returns (or I might argue worse than logarithmic returns, but I think logarithmic is the right prior)

If you think doubling the quality-adjusted people in EA overall has logarithmic returns, you still get ~linear effects from doubling the output of one event or outreach project, since differentiable functions are locally linear.

• I would say the square in the number of participants is too extreme, since the average attendee probably wouldn’t meet many more people than otherwise, except for those who wouldn’t have gotten to attend at all.

(EDIT, nvm this bit; I don’t know how to strike it out via mobile: Plus, because people are going to meet based on interests, if you were thinking about the number of possible meetings, I think it would be better to think about it like multiple cliques doubling in size than a single large clique doubling in size, or something more complicated.)

The first 500 (except those hiring?) probably wouldn’t get much more out of it, since it at most only slightly adds to who they might have met in terms of counterfactual value, and they might even get less, since they need to compete with the new 500 over meetings with the first 500. Then the next 500 at least get to meet each other, but also the first 500, and especially the (I assume) roughly fixed number of organizations that are hiring.

The first 500 is also plausibly made up of many people who are already largely in contact with one another because they work at EA-related orgs, either the same org, or orgs working in the same area who review each other’s work, strategize together or collaborate.

Around 2x seems plausible to me, but my best guess is less than 2x.

• Sorry, I think you could arrive at 2x for bayesian reasons (like weighing multiple models), but I just wanted to push back on the model that an event with twice as many attendees should be straightforwardly modeled as twice as valuable.

• I agree that it’s not straightforward that a linear model is approximately correct. I do think a linear model could still be approximately correct for straightforward linear reasons, like the value being roughly proportional to the number of one-on-ones, though, and not just because you weighed multiple models together and it happened to come out to about 2x.

• And—despite valiant effort!—we’ve been able to do approximately nothing.

This should update judgements on whether GOF research is as easy to influence as was thought in 2021.

Some resources I recommend on GOF research are the first two chapters of Mearsheimer’s Tragedy of Great Power Politics (2014) and the first two chapters of Schelling’s Arms and Influence (1966).

• 7 Oct 2022 0:27 UTC
−10 points
0 ∶ 0

People interested in AI risk and this post might be interested in applying to multiple research roles at Epoch, which is working to forecast when and how transformative AI will develop. These roles are all remote.

This is a test by the EA Forum Team to gauge interest in job ads relevant to posts - give us feedback here.

• Larissa from Leverage Research here. I think there might be an interesting discussion to be had about the relationship between feedback loops, external communication (engaging with your main external audiences), and public communication (trying to communicate ideas to the wider public).

For a lot of the history of scientific development, sharing research, let alone widely distributing it, was expensive and rare. Early discoveries in the history of electricity, for example, were nonetheless still made, often by researchers who shared little until they had a complete theory or a new instrument to display. Often the feedback loops were simply direct engagement with the phenomena themselves. Only in more recent history has it become cheap and easy enough to widely share research such that this has become the norm. Similarly, as a couple of people have mentioned in the comments, there are more recent examples of groups that have done great research while having little external engagement: Lockheed Martin and the Manhattan Project being two well-known examples.

This suggests that it is feasible to have feedback loops while doing little external communication of any kind. During Leverage 1.0[1] people relied more on feedback from their own experiences, interactions with teammates’ experiences and views, workshops and coaching.

That said, we do believe (for reasons independent of research feedback loops) that it was a mistake to not do more external communication in the past, which is why this is something Leverage Research has focused on since 2019. More recently, we have also come to think that it is also important to try to communicate to the wider public (in ways that can be broadly understood) as opposed to just your core audience or peer group. One reason for this is that if projects are only communicated about, and criticisms only accepted in, the language of the particular group that developed them, it’s easy for blindspots to remain until it is too late. (I recommend Glen Weyl’s “Why I’m Not A Technocrat” for a more detailed treatment of this topic.)

For anyone interested in some of our other reflections on public engagement, I recommend reading our 2019-2020 annual report or our Experiences Inquiry Report. The former is Leverage Research’s first annual report since the re-organization in 2019, and one topic we discuss is our new focus on external engagement. The latter shares findings from our inquiry last year into the experiences of former collaborators during Leverage 1.0. To see our engagement efforts today, I recommend checking out our website, subscribing to our newsletter, or following us on Twitter or Medium.

For those interested in the exploratory psychology research Jeff mentions, we recommend reading our write-up from earlier this year covering our 2017 − 2019 Intention Research and keeping an eye on our Exploratory Psychology Research Program page. We are currently working on two pieces: one on risks from introspection (we discuss this a bit on Twitter here), and one on Belief Reporting (an introspective tool developed during Leverage 1.0). We’re also thinking of sharing a few documents written pre-2019 that relate to introspection techniques. These would perhaps be less accessible for a wider audience unfamiliar with our introspective tools but may nonetheless be of interest to those who want to dive deeper on our introspective research. All of this will be added to our website when completed.

Finally, I just wanted to thank Jeff for engaging with us in a discussion of his post. Although we disagreed on some things and it ended up a lengthy discussion, I do feel like I came to understand a bit more of where the disagreement stemmed from, and the post was improved through the process. This seems valuable, so I would like to see that norm encouraged.

1. ^

As context, “Leverage 1.0” is the somewhat clumsy term I introduced as a shorthand for the decentralized research collaboration between a few organizations from 2011 to 2019 that’s commonly referred to as “Leverage,” so as to distinguish it from Leverage Research the organization since 2019 which looks very different.

• Interesting thoughts. But do you think AI will, on balance, help or hurt us in this quest?

• Interesting post Yuri, but I am very confused about your claim that Pavlov’s ideas were ignored: “this mechanism has been neglected by the mainstream of psychologists”. My understanding is that his ideas inspired the U.S. school of Behaviorism, where Watson and then Skinner pretty much ruled American psychology from 1920 to the mid-50s.
The Cognitive Revolution, spearheaded by, for example, Chomsky, showed that simple rules of learning were not sufficient to explain adult competence. The debate has been revived in a modern form by deep learning, of course.

• This was super informative, thank you.

• Anyone can summarise: anyone can write a summary of anyone else’s post, and the original author can choose whether to feature one of them at the top, visible by default.

The other summaries are accessible at the top of the post, but with a click required to view them. The summaries accumulate karma in the usual way; low-karma ones are hidden.

There’s also some mechanism by which the author is prompted to write their own summary, either inline on the post or in a separate box, to compete with other people’s summaries.

(Spitballing here, can easily imagine coming to think this is a pretty bad idea all things considered)

• Readers might be interested in the comments over here. I want to highlight Daniel K.’s comment:

The only viable counterargument I’ve heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It’s big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.

Or, to put it more succinctly: The COVID situation is just one example; it’s not overwhelmingly strong evidence.

• This post makes the case that warning shots won’t change the picture in policy much, but I could imagine a world where some warning shot makes the leading AI labs decide to focus more on safety, or agree to slow down their deployment, without any policy change occurring. Maybe this could buy a couple of years’ time for safety researchers?

This isn’t a well-developed thought, just something that came to mind while reading.

• I would be curious how much the pandemic preparedness stuff is actually a crux. E.g. if gain of function research is restricted within the next year, would that noticeably change your estimate of how helpful a warning shot will be?

I think this is kind of testing your argument (2) – EA advisors might possibly become more influential within the next year. (And also it just takes forever to do anything in policy.)

• In Scanlon’s case, we might think it’s better that they not have such a preference in the first place because it conflicts with another preference (eating properly), and we don’t want to encourage people to have such preferences.

• But the example assumes the person actually wants to build the monument more strongly than they want to eat. If we admit that some desires matter more than others, even if they are weaker, we seem to be giving up preference utilitarianism.

• I’m not saying their preference to build the monument matters less than their preference to eat, just that it would have been better for them if they didn’t have that preference in the first place. I’m thinking in antifrustrationist and “preference-affecting” terms. Having conflicting preferences seems bad.

• Then why is it better, according to preference utilitarianism, not to have a preference for monuments than not to have a preference for eating properly? (Not having one of them resolves the conflict after all.)

• Either would be better, but it’s hard to imagine someone not being worse off by their own lights in any way from not eating properly (felt unpleasantness, weakness making it harder to do things, risk of death, etc.). If that were not true about eating properly, then people might not prioritize it in the first place. If we could make it so that people didn’t need to eat to avoid being worse off in important ways, all else equal, that would be better.

• As someone more sympathetic to preference-based views than the alternatives, I don’t find any of these arguments persuasive, although Parfit’s is closest. The others all seem pretty paternalistic. If something matters to you, shouldn’t it matter to me in my concern for your interests?

In Parfit’s case, my response would just be that it doesn’t matter for their welfare because the preference is not held by them anymore, or if it is still held, why should it be the case that only things that affect our experiences matter? I think Parfit is just asserting this is implausible, with no further argument.

Another example might be people doing things to your body without your consent while you’re unconscious in a way you never find out. I think the best explanation for why this is wrong is simply that you prefer this not happen, whether or not you find out, and regardless of indirect effects. If someone feels violated after finding out, I think you’d have to claim that this is an irrational reaction and that informing someone they’ve been touched without their consent is what causes them harm, not the actual nonconsensual act. If someone would prefer to know about something they know they’d feel bad to find out, isn’t it still right to tell them?

Also, this gets into the experience machine thought experiment. See https://forum.effectivealtruism.org/posts/vWqMzv97ueX8iagg8/contact-with-reality

• Us not wanting people to do things with our body without our knowledge is indeed a different argument, one which seems to show that at least some preferences matter ethically. But preference utilitarianism is usually the view that only preferences matter, perhaps even all preferences.

Regarding Parfit’s case, is this not the same as me being unconscious while my body is manipulated? In both cases we do not seem to currently hold a preference. In one because he forgot about it, in the other because I’m unconscious.

But even suppose Parfit did not forget about the stranger. Why would it be good for Parfit that the stranger is cured, without his knowledge? To me it does not seem to be good for him. And wouldn’t such a view have the unfair consequence that it is much less important to cure a lonely person about whom no other people care than a popular person about whom lots of people care, even if those are not informed about the cured illness?

• I don’t think it’s necessarily true that you hold no preferences while you are unconscious (and not dreaming), which seems to be what you’re suggesting. The preferences are still probably encoded in your brain somewhere, either explicitly, or as a general response tendency.

“And wouldn’t such a view have the unfair consequence that it is much less important to cure a lonely person about whom no other people care than a popular person about whom lots of people care, even if those are not informed about the cured illness?”

Is it more unfair because they aren’t informed? I think it’s already unfair if they are informed. I think this only seems worse if you assume the conclusion that if you never find out, it shouldn’t matter.

To be clear, though, I think it’s very plausible preferences matter more if you’re informed about their extent of satisfaction, because the experience of satisfaction or frustration matters, too.

• Yeah, preferences may still be latent dispositions in case of unconsciousness, but the same seems plausible for Parfit’s forgotten stranger. If he is reminded of them, his preference may come back. So the two cases don’t seem very different.

Is it more unfair because they aren’t informed? I think it’s already unfair if they are informed. I think this only seems worse if you assume the conclusion that if you never find out, it shouldn’t matter.

Well, it is presumably less unfair if they are informed, because it would make them happy to learn that the person is cured, which matters, at least somewhat. And yes, my (and Parfit’s) intuition is that if they never find out that the person was cured, this would not be good for the carers. So curing the cared-about person would not be better than curing the person about whom no one else cares. That’s not a conclusion; it’s more a premise for those who share this intuition.

• Hi, Nir.

You make some great points.

Somehow EA folks seem to be good at establishing an emotional distance between climate change and existential or extinction risk. They believe that their attention to direct vs indirect risks somehow justifies denying that climate change is an existential risk. For example, they claim that it’s less important than pandemics (although it will contribute to pandemics) or the risk of nuclear war (although it will increase the risk).

I like to see solutions to problems as either individual, group, or systemic. An individual solution to a systemic problem is one that protects just the individual. I’m fairly sure that most people in developed countries adopt an individualist attitude toward the systemic problem of climate change.

As individuals, we have to play along with the bigger system if we can’t change it. I think EAs have a conflict of interest around climate change. It’s very hard for me not to have a conflict of interest around climate change somewhere in how I live, and I think that’s true for EAs as well.

Whatever I am invested in as an individual probably contributes to global warming in a significant way when everyone does it collectively. Whether it’s elements of my lifestyle, my political stance, or my vision of the future (for example, techno-utopianism), it evokes conflicts for me personally, politically, or professionally if I address the root causes of global warming.

Whether I:

• play along with technological determinism (by my definition, marketing that tech companies have a great vision of the future for consumers who use their products)

• sell myself on techno-utopianism (for example, that a nanotech future will be worth living in or that AI will solve our problems for us)

• pretend that old talking points are still relevant (for example, that climate change could rather than will have existential and extinction consequences)

• speak hopefully about actual efforts to solve climate change (for example, the recent climate change bill that made it into law in the US)

all I’m doing is trying to protect myself as an individual, or maybe some small group that I care about.

We do have a need for research into tipping points, for better hardware to run higher-fidelity (smaller mesh size) atmospheric and ocean models, for better risk modeling of systemic and cascading risks, etc. We might, for example, develop better models of how geo-engineering efforts could work or fail. So, yeah.

But what that research will also confirm is something like what Carl Sagan told Congress in the 1980s, what climatologists began worrying about publicly in the 1970s, and what scientists understood about heat-trapping gases in the 1960s. We will confirm, with ever greater certainty, that we should really do something about GHGs and anthropogenic climate change.

When EA folks say that climate change is not neglected, what they are not saying is that genuine climate change adaptation and mitigation efforts are limited or doomed to failure. What about BECCS? CCS? Planting trees? Migration assistance for climate refugees? All unfeasible at scale and in time.

Furthermore, the lack of climate change prevention is why a paper like the Climate Endgame paper would ever get published. Passing five tipping points in the short term? That is an utter failure of prevention efforts. Tipping points were not supposed to be passed at all. The assumption that tipping points are in the distant future has kept the discussion of “fighting climate change” a hopeful one. And now that assumption has to be given up.

Didn’t EA start with some understanding that a lot of money and energy is wasted in charitable efforts? Well, similar waste must be happening in the climate change arena. Governments are taking action based on silly models of risk or outdated models of causes and so their attention is misdirected and their money is wasted.

So I agree with you, yes, EA folks should take climate change seriously. It could help the situation for EAs to learn that climate change poses an existential and extinction threat this century. Beyond that, I don’t know what EAs could really positively accomplish anyway, unless they were willing to do something like fund migration for climate refugees, pay for cooling technologies for the poor, or reconstruct infrastructure in countries without an effective government.

• 6 Oct 2022 21:03 UTC
77 points
8 ∶ 0

These numbers seem pretty all-over-the-place. On nearly every question, the odds given by the 7 forecasters span at least 2 orders of magnitude, and often substantially more. And the majority of forecasters (4/​7) gave multiple answers which seem implausible (details below) in ways that suggest that their numbers aren’t coming from a coherent picture of the situation.

I have collected the numbers in a spreadsheet and highlighted (in red) the ones that seem implausible to me.

Odds span at least 2 orders of magnitude:

Another commenter noted that the answers to “What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH?” range from .001 to .27. In odds that is from 1:999 to 1:2.7, which is an odds ratio of 369. And this was one of the more tightly clustered questions; odds ratios between the largest and smallest answer on the other questions were 144, 42857, 66666, 332168, 65901, 1010101, and (with n=6) 12.

Other than the final (tactical nuke) question, these cover enough orders of magnitude for my reaction to be “something is going on here; let’s take a closer look” rather than “there are some different perspectives which we can combine by aggregating” or “looks like this is roughly the range of well-informed opinion.”
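The odds-ratio spread described above is easy to reproduce. This is my own illustrative sketch (the function names are mine, not from the original analysis):

```python
def to_odds(p):
    """Convert a probability to odds in favor (p against 1 - p)."""
    return p / (1 - p)

def odds_spread(probs):
    """Odds ratio between the largest and smallest estimate."""
    odds = [to_odds(p) for p in probs]
    return max(odds) / min(odds)

# The MONTH-horizon Ukraine-nuke question ranged from .001 to .27,
# i.e. from odds of 1:999 to odds of 1:2.7:
spread = odds_spread([0.001, 0.27])
print(round(spread))  # 369, matching the figure quoted above
```

Running the same calculation on each question's extremes reproduces the other spreads quoted (144, 42857, and so on).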

Individual extreme outlier answers:

Two forecasters gave an estimate on one of the component questions that was more than 2 orders of magnitude away from the next closest estimate (odds ratio over 100).

On the question “Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next YEAR after the initial nuclear weapon use?”, one forecaster gave the answer 10^-5. The next smallest answer was 0.0151, an odds ratio of 1533. On the MONTH version of this question, the ratio was 130. So the 10^-5 answer differs wildly from each of the other answers, and also (IMO) seems implausibly low.

On the question “Conditional on the nuclear conflict expanding to NATO, what is the chance that London would get hit, one MONTH after the first non-Ukraine nuclear bomb is used?”, the largest answer was .9985 and the 2nd largest was 0.5, an odds ratio of 666. The ratio was the same for the YEAR version of this question. This multiple-orders-of-magnitude outlier from all the other forecasts also seems implausibly high to me.

Implausible month-to-year ratios:

We can compare the answers to “Conditional on Russia using a nuclear weapon in Ukraine, what is the probability that nuclear conflict will scale beyond Ukraine in the next MONTH after the initial nuclear weapon use?” to the YEAR version of this question to see how likely each forecaster thought that the escalation would happen within a month, conditional on it happening within a year. From smallest to largest, these probabilities for p(escalation within a MONTH | escalation within a YEAR) are .067, .086, .5, .6, .75, .75, 1. Probabilities below 10% seem implausible here, both considering the question (nuclear escalation will very likely take more than a month if it happens?) and considering the other estimates, but 2 forecasters are in that range. (A probability of 1 would be implausibly high if forecasters were estimating it directly, but given that this is calculated from 2 probabilities and many answers only had 1 sigfig I guess it’s not a major issue.)

Similarly, the implied estimates for p(London hit within a MONTH of a non-Ukraine nuke | London hit within a YEAR of a non-Ukraine nuke) are, from smallest to largest, .17, .2, .5, .89, 1, 1, 1. Again, low probabilities (.2 or smaller) seem implausible.
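The month-to-year check is just a ratio of each forecaster's two answers. A minimal sketch, with hypothetical inputs chosen only to land on one of the low ratios flagged above:

```python
def month_given_year(p_month, p_year):
    """p(event within a MONTH | event within a YEAR), assuming the
    month window is contained in the year window."""
    return p_month / p_year

# Hypothetical example: 0.03 for the month and 0.45 for the year
# implies ~.067, one of the implausibly low ratios discussed above.
print(round(month_given_year(0.03, 0.45), 3))  # 0.067
```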

Conjunction vs. direct elicitation:

One sanity check in the original post is comparing the implied probability for a London nuke (based on p(London within a month | escalation), p(escalation within a month | Ukraine nuke), and p(Ukraine nuke within a month)) with the directly elicited p(London nuke in October). The implied probability covers a longer time period (since the monthlong window resets with each event), but the directly elicited probability covers all paths to London being nuked (not just the path via escalation from Russia nuking Ukraine), so it’s not obvious which should be larger. Still, I think they should be close (and Nuño thought the conjunction should be larger).

Looking at each forecaster, the ratio of p(London nuke in October) to the conjunction, from smallest to largest, is .57, .62, 1.04, 8, 20, 25, 48. Five of seven forecasters gave estimates which imply that the direct estimate (shorter timeframe, more pathways) is larger. Four of them gave estimates which imply a ratio of 8 or higher, which seems implausible.
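The comparison works by chaining the three monthly conditionals into an implied probability and dividing the direct estimate by it. A sketch with hypothetical numbers (not any individual forecaster's actual answers):

```python
def chained_probability(p_ukraine_nuke, p_escalation, p_london):
    """Implied p(London hit) as the conjunction of the three
    chained monthly estimates."""
    return p_ukraine_nuke * p_escalation * p_london

implied = chained_probability(0.05, 0.1, 0.2)  # hypothetical inputs
direct = 0.008                                 # hypothetical direct estimate
ratio = direct / implied
# implied ~0.001 and ratio ~8: a ratio this high is in the range
# flagged as implausible above.
print(implied, round(ratio, 2))
```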

And all four of those forecasters gave at least one of the other implausible forecasts mentioned above (an outlier individual estimate and/​or an implausible month:year ratio). The three forecasters who have plausible ratios here (.57, .62, 1.04) did not give any of the implausible answers according to my other two sanity checks.

Bottom line:

3 of the 7 forecasters passed all three of these sanity checks. The other 4 forecasters each failed at least 2 of these sanity checks.

Aggregation which treats all this as noise and tries to find the central tendency helps keep the final estimate in a plausible range (and generally within the range of the 3 forecasters who passed the sanity checks), but it still seems possible to do significantly better.

IMO the epistemic status here is not seven good generalist forecasters who have thought carefully enough about these questions to give well-considered estimates, aggregated with some math that helps combine their different perspectives. Instead, the math is mainly just helping to filter out the not-carefully-considered answers.

• Hey Dan, thanks for sanity-checking! I think you and feruell are correct to be suspicious of these estimates, we laid out reasoning and probabilities for people to adjust to their taste/​confidence.

• I agree outliers are concerning (and find some of them implausible), but I likewise have an experience of being at 10..20% when a crowd was at ~0% (for a national election resulting in a tie) and at 20..30% when a crowd was at ~0% (for a SCOTUS case) [likewise for me being ~1% while the crowd was much higher; I also on occasion was wrong updating x20 as a result, not sure if peers foresaw Biden-Putin summit but I was particularly wrong there].

• I think the risk is front-loaded, and low month-to-year ratios are suspicious, but I don’t find them that implausible (e.g., one might expect everyone to get on a negotiation table/​emergency calls after nukes are used and for the battlefield to be “frozen/​shocked” – so while there would be more uncertainty early on, there would be more effort and reasons not to escalate/​use more nukes at least for a short while — these two might roughly offset each other).

• Yeah, it was my prediction that conjunction vs. direct wouldn’t match for people (really hard to have a good “sense” of such low probabilities if you are not doing a decomposition). I think we should have checked these beforehand and discussed them with folks.

• Hey, thanks for the analysis, we might do something like that next time to improve consistency of our estimates, either as a team or as individuals. Note that some of the issues you point out are the cost of speed, of working a bit in the style of an emergency response team, rather than delaying a forecast for longer.

Still, I think that I’m more chill and less worried than you about these issues because, as you say, the aggregation method picked this up: it doesn’t take the geometric mean of the forecasts that you colored in red, given that it excludes the minimum and maximum.

I also appreciated the individual comparison between chained probabilities and directly elicited ones, and it makes me even more pessimistic about using the directly elicited ones, particularly for <1% probabilities.

• Why does the Forum have a ‘karma system’? Why was it called a ‘karma system’ rather than any other description? Is the karma system a truly accurate reflection of a person’s input into discussions on the Forum?

• 6 Oct 2022 20:43 UTC
2 points
0 ∶ 0

Just so I understand, are all four of these quotes arguing against preference utilitarianism?

• [ ]
[deleted]
• This may be nitpicking, but I think that education itself is not an intrinsic good. Understanding, knowledge, wisdom, etc. are all things that I hold intrinsically valuable, and are all attainable by education. Though, if these things are all what you mean by the word “education” then yes, I completely agree that there is intrinsic value in education.

Why do we build space telescopes? Why do we build particle accelerators? Why do we fund philosophers? Why do we fund theoretical mathematicians? Yes, there is a potential for these things to generate extrinsic value, but it is very low. I think the simpler reason to fund these things and people, is because many people find the knowledge and understanding they generate to be intrinsically good.

• Instead I would specifically look at its output and approach to external engagement: if they’re not publishing research I would take that as a strong negative signal for the project. Likewise, in participating in a research project I would want to ensure that we were writing publicly and opening our work to engaged and critical feedback.

I’m curious about why your conclusion is about the importance of public engagement instead of about the importance (and difficulty) of setting up good feedback loops for research.

It seems to me that it is possible to have good feedback loops without good public engagement (e.g., the Manhattan Project) and good public engagement without good feedback loops (e.g., many areas of academic research). But, whereas important research progress seems possible in the former case, it seems all but impossible in the latter case.

• I think feedback loops are the important thing, but public engagement is a powerful way to strengthen them which Leverage seemed to have suffered from deprioritizing.

In the example of the Manhattan Project, they were studying and engineering physical things, which makes it a lot harder to be wrong about whether you’re making progress. My understanding is also that they brought a shockingly high fraction of the experts in the field into the project, which might mean you could get some of what you’d normally get from public presentation internally?

• The degree to which public presentation is likely to strengthen your feedback loops seems to depend quite a lot on the state of the field that you are investigating. In highly functional fields like those found in modern physics, it seems quite likely to be helpful. In less functional fields or those with fewer relevant researchers, this seems less helpful.

To my mind, one strong consideration in favor of publicly presenting your research if you’re working in a less functional field is that even if you’re right, causing future researchers to build on your work is extremely difficult. Indeed, promising research avenues that are presented publicly die all the time (e.g., muscle reading or phlogiston c.f. Chang in Is Water H2O?). Presenting your research publicly is the best way to engage with other researchers and ensure that, if you do succeed, a research tradition can be built on top of your work.

• And—despite valiant effort!—we’ve been able to do approximately nothing.

Why not?

I apologize for an amateur question but: what all have we tried and why has it failed?

• It’s possible there’s a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden’s Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):

• I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn’t fight effectively for the pandemic preparedness funding because it was not something that representatives’ constituents were demanding.

• During his presentation at EA Global DC a few weeks ago, Gabe Bankman-Fried from Guarding Against Pandemics said that Democratic leaders in Congress had polled Senators and Representatives about their top three issues as Build Back Better was being negotiated in order to get a sense for what could be cut without incurring political backlash. Apparently few to no members named pandemic preparedness as one of their top three. (I’m paraphrasing from memory here, so may have gotten a detail or two wrong.)

The obvious takeaway here is that there wasn’t enough attention to motivating grassroots support for this funding, but to be clear I don’t think that is always the bottleneck—it just seems to have been in this particular case. I also think it’s true that if the administration had wanted to, it probably could have put a bigger thumb on the scale to pressure Congressional leaders to keep the funding. Which suggests that the pro-preparedness lobby was well-connected enough within the administration to get the funding on the agenda, but not powerful enough to protect it from competing interests.

• I think this reasoning applies to the initial funding for new, neglected areas.

It often appears like grantmakers are evaluating a project in absolute terms as to whether they think it is likely to succeed. But exploration and discovery costs are often a pittance compared to the potential impact of new ideas, and when the potential for exploitation of promising interventions is incorporated, we should definitely be more risk-seeking with the resources we deploy as a community. The greatest-EV fund distributor may very well be one with many duds, and perhaps we should be wary of incentivizing funds that have only good outcomes. You hear on 80k and many other sources that we should be risk-neutral regarding altruistic projects, but this neutrality depends on institutions that will enable new ideas.

• 6 Oct 2022 18:55 UTC
1 point
0 ∶ 0

A $200k prize for publishing any analysis which we consider the canonical critique of the current position highlighted above on any of the above questions

Question: In this formulation, what is meant by the “current position”? Just asking to be sure.

It could refer to the specific credences outlined above, but it would seem somewhat strange to say (e.g.) “here is what we regard as the canonical critique of ‘AGI will be developed by January 1, 2043 =/​= 20%’”. So I am inclined to believe that it probably means something else.

I would love to know, since I might consider writing a critique. In particular, I would love a list of specific points (or beliefs or pieces of writing) that you would like to see critiqued.

• Yes, agree. Two more points:

Not all population counts, but only those who can think about anthropics. A nuclear war will disproportionately destroy cities with universities, so the population of scientists could decline tenfold while the rest of the population is only halved.

Anthropic shadow means higher fragility: we underestimate how easy it is to trigger a nuclear war. Escalation is much easier. Accidents are more likely to be misinterpreted.

• Meta note: I think it encourages in-group/​out-group experiences on the Forum when known individuals are identified only by their first names and at a minimum would like to see e.g. Geoff, Larissa, and Catherine named in full at least once in this post.

• I’ve intentionally used only first names for everyone in the post, including for individuals who are not well known, to make this post less likely to show up when searching anyone’s name.

• I just wanted to say thank you for doing this Jeff. I sympathize with Rockwell Schwartz’s general point, but since Cathleen’s post asks that people not use her full name or name her former colleagues I appreciate you taking this seriously.

(For clarity, I don’t mind people using my full name. It’s my forum username and very easily found e.g. on Leverage’s website. But I currently work at Leverage Research and decided to work there knowing full well how some people in EA react when the topic of Leverage comes up. The same is not true of everyone, and I think individuals who have not chosen to be public figures should be allowed to live in peace should they wish to).

• That makes sense and I wasn’t familiar with Cathleen’s request or the general aims of quasi-anonymity here. I think it is useful to specify that you are intentionally not using full names because otherwise the assumption is likely that these are people one should know and contributes to my above concern.

• I’ve taken ADHD meds for roughly a third of my life, and methylphenidate really helped me deal with some of my worst symptoms of ADHD. It also helped me realize many of my academic issues weren’t my fault (it wasn’t just me being lazy).

In parallel, I started developing better ADHD management strategies, so I’ll classify the experience as pretty good overall. I did have problems with appetite, which is why I’m currently trying a different class of ADHD meds (atomoxetine). I would go back if the new meds didn’t work, though.

• Partly unrelated: at first, I thought the title meant that we should research deprioritizing external communication. It took me a while to understand it meant that research is/​was deprioritizing external communication.

• Sorry, stupid question, but just to clarify, questions should be posted in this thread, or in the general “questions” section on the forum?

• For the purpose of trying this thread, it would be nice to post questions as “Answers” to this post.[1] Although you’re welcome to post a question on the Forum if you think that’s better: you can see a selection of those here.

Not a stupid question!

1. ^

The post is formatted as a “Question” post, which might have been a mistake on my part, as it means that I’m asking people to post questions in the form of “Answers” to the Question-post, and the terminology is super confusing as a result.

• I’ve asked this question on the forum before to no reply, but do the people doing grant evaluations consult experts in their choices? Like do global development grant-makers consult economists before giving grants? Or are these grant-makers just supposed to have up-to-date knowledge of research in the field?

I’m confused about the relationship between traditional topic expertise (usually attributed to academics) and EA cause evaluation.

• # Should I/​we use the acronym HQALYs (Human QALYs)?

Assuming the language we use is not only descriptive but also performative, when I talk to others:

… I use “non-human animals” as a reminder that we are also animals and all of us deserve moral consideration

… I use “non-human primates” and “non-human simians” as a reminder that we belong to the same order/​family

Following the same rationale, I also use “Human QALYs” to be explicit that I’m comparing the impact of different interventions in terms of human lives. When we talk about “lives with value to be measured and saved” referring to human lives only (without being explicit about that and assuming that everyone will understand we are referring to human lives only), we may unconsciously link the concepts “valuable lives” and “only human lives” in our minds.

On the other hand, I’ve never seen HQALY written anywhere else, and QALY/​DALY is such a widespread term that I’m afraid we don’t need an additional acronym.

What do you think? Shall I/​we use QALY or HQALY?

• My experience is mostly that it sucks, but I guess you knew that already. On a separate note, it seems to me that the prevalence of ADHD (symptoms) is higher among EAs than in the rest of the population. Has anyone ever done a survey on this?

This also leads me to think that EAG(x)s should have a session for neurodivergent people.

• I am so happy and excited that EAG London is scheduled around the Ascension weekend, this will make it significantly easier to attend (if I get accepted of course).

• Oh, and thanks for the transparency. I appreciate being able to follow the calculations. Great practice.

• First off, very interesting. This is my first exposure to the EA community. My friends /​ colleagues have rightly encouraged me to learn more about your work.

Essentially, my argument is this: you cannot believe that the relations we estimate are consistent with cross-sectional results or experimental results because it takes 71 years for income to double when increasing growth. I further explain this below.

I think we agree that GDP is not a good measure of wellbeing. I also strongly believe it is not a good policy target. We should target wellbeing directly.

For alternative policies that similarly cover a long period of time, see recent work by me and Easterlin, “Explaining happiness trends in Europe” (https://doi.org/10.1073/pnas.2210639119). We show the best predictor of long-run changes in life satisfaction is the generosity of the social safety net – more generous, greater happiness – in ten European countries. At the same time, we argue economic growth does not have a meaningful influence on life satisfaction in the long run.

For my more substantive comment: increasing growth from two to three percent takes 71 years to double income. This is a very long time in my view. I’m not coming from the EA framework. Perhaps the EA community disagrees with me. It matters a great deal, however, for both conceptual and empirical reasons.
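The 71-year figure can be checked directly: raising annual growth from 2% to 3% doubles income relative to the 2% path after about 71 years.

```python
import math

g_low, g_high = 0.02, 0.03
# Years until the 3% path is twice the 2% path:
# (1.03 / 1.02) ** t = 2  =>  t = ln(2) / ln(1.03 / 1.02)
years = math.log(2) / math.log((1 + g_high) / (1 + g_low))
print(round(years))  # 71
```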

Fundamentally, you cannot compare doubling one’s income at a point in time (e.g., due to lottery winnings, investment returns, or cash transfers) to doubling one’s income over 71 years. 71 years is greater than life expectancy in numerous countries. Empirically, the growth–happiness relation depends upon the time horizon; it gets smaller as the duration increases. We discuss this in the paper conceptually and in reference to the two data sets we use. The longer period in the WVS/EVS data results in lower growth–subjective-well-being relations. For further support, see Bartolini, S., Sarracino, F. (2014) Happy for how long? How social capital and economic growth relate to happiness over time. Ecol Econ 108:242–256. https://doi.org/10.1016/j.ecolecon.2014.10.004.

Your replication /​ robustness tests are not so surprising. As you point out, our results include larger coefficient estimates using different specifications, yet we still argue they are not economically significant, implying we would argue your alternative results are still too small to prioritize growth. Here’s the quote: “Based on the largest magnitude across all estimations [larger than what you estimate using 2020 or excluding India], it would still take 100 years for a one percentage point increase in the growth rate to raise happiness by one point.”

To your point, however, even a small increase in subjective well-being for a large number of people is meaningful, but we're talking about a long time to achieve even these small changes. I'm reasonably confident you can find much more effective policies for short-run gains. See Table 1 of P. Frijters, A. E. Clark, C. Krekel, R. Layard, A happy choice: Wellbeing as the goal of government. Behav. Public Policy, 1–40 (2020).

This table inspired a similar one used by the U.K. government in the Green Book. See:

MacLennan, S., Stead, I., 2021a. Wellbeing Guidance for Appraisal: Supplementary Green Book Guidance, His Majesty’s Treasury: Social Impact Task Force.

MacLennan, S., Stead, I., 2021b. Wellbeing discussion paper: monetisation of life satisfaction effect sizes: A review of approaches and proposed approach, His Majesty’s Treasury: Social Impacts Task Force.

Your robustness test results do not overturn ours; they fall within the range we estimate and apply only to one data set, indeed the one based on the shorter period, which is less preferred for the reasons explained in the text and implied by the Bartolini and Sarracino paper referenced above.

We need more research on wellbeing. Increasing consumption does not necessarily increase wellbeing, especially in highly developed countries.

Perhaps you can explain to me how the GiveWell team determined the "Value assigned to increasing ln(consumption) by one unit for one person for one year" and why this is used in determining the value of subjective well-being benefits (cf. the value: https://docs.google.com/spreadsheets/d/1lTX-qNY1cSo-L3yZCNzMbzIM1kqWC1vSEhbyFAYr6E0/edit#gid=1362437801, which is used in this calculation: https://docs.google.com/spreadsheets/d/1aDUPvizGsgT6rLtIf8RkT8LNTmZyXjlXa7Kddc-UeWM/edit#gid=135302151)?

See instead the MacLennan references above for a derivation of the monetary value of a life satisfaction point per year.

Unfortunately, I have not had time to go through the comments, and will be slow to respond due to family concerns. I’ll do my best to respond and keep up with future posts. Thanks for the lively discussion. I wish we could do it in person.

Lastly, you all probably know the Easterlin Paradox has come under fire for many years and across different fields. See his article Easterlin RA (2017) Paradox lost? Rev Behav Econ 4:311–339. https://doi.org/10.1561/105.00000068. You can also find the working paper version for free on Google Scholar.

• This would be a good fit for a prize competition! And then you can stick it on Superlinear, which aggregates EA competitions.

• Sorry for a disorganised comment, but here's a recent summary I wrote about why most people, including academics, are wrong about polarisation (ping me for references and details):
There’s been a lot of high quality research on polarisation since at least the 80s, spearheaded by Keith Poole and Howard Rosenthal. Around the time of Trump’s election, the subject became extremely popular and a lot of people (including me) decided to research this issue. Unfortunately, many of those scholars (including political scientists unfamiliar with polarisation research) believe things that seem to make sense intuitively but that contradict the large body of pre-existing research on polarisation.
Findings in the area have established that many seemingly likely causes of polarisation are not in fact causes: polarisation of the electorate, party leadership, polarisation over social issues, and gerrymandering. Most people (including academics) seem to get away with claiming these were the main causes; it's a very common belief (I held it myself until I fact-checked it), but it's well established that it's just not so. The only mildly well-established likely cause is economic inequality. It is the only trend that follows the pattern of rising polarisation, though it remains an open question whether there's an actual causal link from inequality to polarisation. Another plausible contender is polarisation of partisans (people heavily involved in politics), but experts seem split over this issue. My impression is that the latest research favours the view that this was not a relevant cause and that party sorting better explains the data used in favour of partisan polarisation.
NOMINATE-type measures are the most frequently cited basic data supporting the view that Republicans are now more extremist than they ever were in the last century. But the data does not really support this view directly on its own. NOMINATE-type measures are strictly relational: they measure how far things are from each other, not how far they are from an absolute point such as the centre. Therefore, the data is just as compatible with Republicans moving away from the centre as it is with both parties moving towards conservatism as part of a long-term macro-trend that changed the US party system. Nolan McCarty (Poole's most frequent recent co-author) agreed by email that this is the case, but said he believes Republicans drove the trend for contextual reasons (e.g. Gingrichism, I assume).
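To illustrate the relational point with hypothetical numbers (these are not real NOMINATE scores): shifting both parties by the same amount each period leaves every between-party distance unchanged, so the distances alone cannot distinguish one-sided extremism from a common drift.

```python
# Hypothetical party positions over three periods (not real NOMINATE data).
dem = [-0.30, -0.32, -0.35]
rep = [0.30, 0.50, 0.70]

gaps = [round(r - d, 2) for d, r in zip(dem, rep)]

# Scenario: both parties drift rightward by the same amount each period.
drift = [0.0, 0.2, 0.4]
dem_drifted = [d + s for d, s in zip(dem, drift)]
rep_drifted = [r + s for r, s in zip(rep, drift)]
gaps_drifted = [round(r - d, 2) for d, r in zip(dem_drifted, rep_drifted)]

# The between-party gaps are identical, so a purely relational
# measure cannot tell the two stories apart.
print(gaps == gaps_drifted)  # True
```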

• I'm so happy to see the EA community trying to grow la comunidad de EA para mi gente en Latinoamérica!

However, a common criticism of EAs and the EA community seems to be showing up again. In the New Yorker piece on William MacAskill and EA, one of the criticisms was that EAs, like Sam Bankman-Fried, will live in luxurious areas (the Bahamas) and houses.

I recall a fellow EA telling me that when he temporarily lived in Peru (the country my parents are from, and where I hold citizenship and share a strong culture), he lived in Miraflores, an upper-class district of the capital where the wealthy and most fortunate reside.

I hope EAs in the fellowship steer away from living in the aristocratic communities and instead try to live among the common working-class (or middle-class) citizens of Mexico, especially if you plan on addressing issues in Mexico that affect the people.

Best of luck on this fellowship! ¡Mucha suerte en los proyectos, hermanos y hermanas!

• DC, New York, and San Francisco are among the highest-likelihood-of-being-hit-in-a-full-nuclear-exchange cities in the US… For San Franciscans, going to Santa Rosa is probably nearly as good as going to the middle of the mountains, or going to Eugene, and it's much less costly.

Can someone share their credence for why Russia would use only a few warheads against America, attacking only a few cities? If Russia uses many warheads, most US cities that most of us would feel at ease in are a no-go. And if we expect Russia to use many warheads if they use any, it is even less worth leaving (unless you want to exit America and NATO regions entirely).

Major question:
What is America's public-facing policy on retaliation? Do they fire a number of warheads proportional to how many Russia fires at us/our allies? If so, that incentivizes Russia to use fewer missiles. But I have the vibe that we have told them we would use everything at our disposal, so they don't have a reason to target only a handful of cities.

Image courtesy of Rob Bensinger from fb, don’t know where he got it

• The black dots assume Russia has 2000 functional missiles that they successfully launch against the US and that successfully detonate, and that the US is unable to shoot many of them down or destroy missile launch sites before launch. My understanding is, concretely, that even if all Russian missiles currently reported ready for launch are launched, there are 1500 of them, not 2000, and one would expect many to be used against non-US targets (in Ukraine and Europe). The 500-missile scenario (purple triangles) seems likelier to me for how many targets Russia would try to hit.

Further, my impression of the competence of the Russian military, the readiness of their forces, the state of upkeep on their nukes and missiles, the willingness of individual commanders ordered to launch to do so, etc., is quite low. In many cases they have had an embarrassingly low success rate at firing missiles at Ukraine, which is an easier task than launching on short notice in a nuclear war. They seem to be using un-upgraded Soviet technology that is often degrading and failing, and the theft of parts for sale on the black market isn't uncommon.

For each nuclear missile, lots of things need to go right: the missile needs to be in good shape and ready to launch, the people ordered to launch need to do it, the missile needs to be launched before anyone destroys the launch site, the missile needs to not be shot down, the missile needs to be successfully aimed at the target (this isn't even very hard, but there've been notable failures in Ukraine), and the missile needs to actually detonate at the right time. US capabilities to shoot down ICBMs, if such capabilities exist, would be extremely secret (we have no such public capabilities), but it seems like we almost definitely cannot shoot down or prevent the launch of submarine-launched missiles (of which there'd be perhaps a dozen). My personal median expectation is that submarine-launched missiles will likely hit and detonate, while a relatively small share of non-submarine-launched missiles will hit and detonate. If Russia is also worried about this, they'll probably concentrate missiles further on critical targets.
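As a toy illustration of how these failure points compound (every probability below is a made-up placeholder, not an estimate of any actual capability):

```python
# Each stage must succeed for a warhead to detonate on target.
# All numbers are illustrative placeholders, not real estimates.
stages = {
    "missile in working order": 0.80,
    "crew carries out launch order": 0.90,
    "launched before site destroyed": 0.90,
    "not intercepted": 0.95,
    "correctly aimed": 0.95,
    "warhead detonates": 0.90,
}

p_success = 1.0
for p in stages.values():
    p_success *= p

# Even with fairly high per-stage odds, the chain multiplies down.
print(f"{p_success:.0%}")  # 53%
```

The point of the sketch is just that six independent ~80–95% stages leave barely half the missiles succeeding, which is why the per-stage reliability concerns above matter so much.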

This is decision-relevant in a couple of respects, the most important being that the fewer missiles hit and detonate, the less likely it is that a nuclear exchange results in a collapse of civilization/post-apocalyptic wasteland. Note, though, that even if you assume all the purple triangles hit, you don't have to go very far to be safe, and if we evacuate we'll evacuate to somewhere outside any of the purple triangles. People in major coastal cities should be more worried, as they're likelier to be targeted by submarine-launched missiles, which I think almost definitely 1) work, 2) would be launched if ordered, 3) could not be prevented from launching, and 4) cannot be shot down; and people near US military bases should assume a lot of missiles would be launched at those targets to make sure at least some get through.

People elsewhere in purple triangles are at, in my assessment, 5x to 20x less risk from a combination of more uncertainty about whether their city will be targeted and much higher likelihood an attempt wouldn’t work.

• I upvoted this comment and post.

But I’m unsure these threads are the best way to get people to chill out.

• See also:

• Are you really sure it’s appropriate to compare launch of strategic ICBMs to rockets in Ukraine? Wouldn’t those ICBMs be aimed in advance, and wouldn’t their operation and upkeep be done by entirely different people using much more careful protocols, laid out over a longer time period?

• Edit: The post has excellent nuance, and I make no claim to support or defend Leverage specifically (idk them). My comment is intended more generally, and my disagreement concerns two points:

1. “The core problem was that Leverage 1.0 quickly became much too internally focused.”

2. “If they’re not publishing research I would take that as a strong negative signal for the project.”

You make several points, but I just want to respond to my impression that you’re trying to anchor wayward researchers or research groups to the “main paradigm” to decrease the chance that they’ll be wrong. I’m pretty strongly against this.

In a common-payoff game (like EA research), we all share the fruits of major discoveries regardless of who makes the discovery. So we should heavily prioritise sensitivity over specificity. It doesn't matter how many research groups are wildly wrong, as long as at least one research group figures out how to build the AI that satisfies our values with friendship and ponies. So when you try to rein in researchers instead of letting them go off and explore highly variable crazy stuff, you're putting all your eggs in one basket (the most respectable paradigm). Researchers are already heavily incentivised to research what other people are researching (the better to have a lively conversation!), so we do not need additional incentives against exploration.

The value distribution of research fruits is fat-tailed (citation needed). Strategies that are optimal for sampling normal distributions are unlikely to be optimal for fat tails. Sampling for outliers means you should rely more on theoretical arguments, variability, and exploration, because you can't get good data on the outliers—the only data that matters. If you insist on being legible and scientific, and so optimise your strategy based on the empirical data you can collect, you're being fooled into mediocristan again.

Lemme cite a paper in network epistemology so I can fake looking like I know what I’m talking about,

“However, pure populations of mavericks, who try to avoid research approaches that have already been taken, vastly outperform the other strategies. Finally, we show that, in mixed populations, mavericks stimulate followers to greater levels of epistemic production, making polymorphic populations of mavericks and followers ideal in many research domains.”[1]
-- Epistemic landscapes and the division of cognitive labor

That said, I also advocate against explorers being allowed to say

But I’m virtuously doing high-variance exploration, so I don’t need to worry about your rigorous schmigorous epistemology!

Explorers need to be way more epistemologically vigilant than staple researchers pursuing the safety of existing paradigms. If you leave your harbour to sail out into the open waters, that’s not a good time to forget your sextant, or pretend you’ll be a better navigator without studying the maps that do exist.

1. ^

FWIW, I think conclusions from network-epistemological computer simulations are extremely weak evidence about what we as an irl research community should do, and I mainly benefit from them because they occasionally reveal patterns that help with analysing real-life phenomena. The field exists at all—despite its obviously irrelevant "experiments"—because it makes theoretical speculation seem more technical, impressive, and professional.

• It doesn't matter how many research groups are wildly wrong, as long as at least one research group figures out how to build the AI that satisfies our values with friendship and ponies.

Sort of? In your hypothetical there are two ways your research project could go once you believe you’ve succeeded:

1. You go and implement it, or

2. You figure out how to communicate your results to the rest of the industry.

If you go with (1) then it’s really important that you get things right, and if you’ve disconnected yourself from external evaluation I think there’s a large chance you haven’t. I’d much prefer to see (2), except now you do need to communicate your results in detail so the rest of the world can evaluate and so you didn’t gain that much by putting off the communication until the end.

I’ll also make a stronger claim, which is that communication improves your research and chances of success: figuring out how to communicate things to people who don’t have your shared context makes it a lot clearer which things you actually don’t understand yet.

trying to rein in researchers instead of letting them go off and explore highly variable crazy stuff

I’m not sure why you think I’m advocating avoiding high-variability lines of research? I’m saying research groups should make public updates on their progress to stay grounded, not that they should only take low-risk bets.

• I edited my original comment to point out my specific disagreements. I’m now going to say a selection of plausibly false-but-interesting things, and there’s much more nuance here that I won’t explicitly cover because that’d take too long. It’s definitely going to seem very wrong at first glance without the nuance that communicates the intended domain.

I feel like I’m in a somewhat similar situation to Leverage, only in the sense that I feel like having to frequently publish would hinder my effectiveness. It would make it easier for others to see the value of my work, but in my own estimation that trades off against maximising actual value.

This isn’t generally the case for most research, and I might be delusional (ime 10%) to think it’s the case for my own, but I should be following the gradient of what I expect will be the most usefwl. It would be selfish of me to do the legible thing motivated just by my wish for people to respect me.

The thing I’m arguing for is not that people like me shouldn’t publish at all, it’s that we should be very reluctant to punish gambling sailors for a shortage of signals. They’ll get our attention once they can demonstrate their product.

The thing about having to frequently communicate your results is that it incentivises you to adopt research strategies that let you publish frequently. This usually means forward-chaining to incremental progress without much strategic guidance. Plus, if you get into the habit of spending your intrinsic motivation on distilling your progress for the community, your brain shifts to searching for ideas that fit into the community, instead of aiming your search at the highest-priority confusion points in your own head.

To be an effective explorer, you have to get to the point where you can start to iterate on top of your own ideas. If you timidly “check in” with the community every time you think you have a novel thought, before you let yourself stand on it in order to explore further down the branch, then 1) you’re wasting their time, and 2) no one’s ever gonna stray far from home.

When you go from—

A) “huh, I wonder how this thing works, and how it fits into other things I have models of.”
to
B) "hmm, the community seems to behave as if X is true, but I have a suspicion that ¬X,
so I should research it and provide them with information they find valuable.”

—then a pattern for generating thoughts will mostly be rewarded based on your prediction about whether the community is likely to be persuaded by those thoughts. This makes it hard to have intrinsic motivation to explore anything that doesn’t immediately seem relevant to the community.

And while B is still reasonably aligned with producing value as long as the community is roughly as good at evaluating the claims as you are, it breaks down for researchers who are much better than their expected audience at what they specialise in. If the most competent researchers have brains that optimise for communal persuasiveness, they're wasting their potential when they could be searching for ideas that persuade themselves—a much harder criterion to meet given that they're more competent.

I think it’s unhealthy to–within your own brain–constantly try to “advance the communal frontier”. Sure, that could ultimately be the goal, but if you’re greedily and myopically only able to optimise for specifically that at every step, then that is like a chess player who’s compulsively only able to look for checkmate patterns–unable to see forks that merely win material or positional advantage.

How frequently do you have to make your progress legible to measurable or consensus criteria? How lenient is your legibility loop?

I’m not saying it’s easy to even start trying to feel intrinsic motivation for building models in your own mind based on your own criteria for success, but being stuck in a short legibility loop certainly doesn’t help.

If you've learned to play an instrument, or studied painting under a mentor, you may have heard the advice "you need to learn to trust your own sense of aesthetics." Think of the kid who, while learning the piano, expectantly looks to their parent after every key they press. They're not learning to listen. Sort of like a GAN with a discriminator consulted so infrequently that it never learns anything. Training to both generate and discriminate within yourself, using your own observations, will be pretty embarrassing at first, but you're running a much shorter feedback loop.

• Maybe we’re talking about different timescales here? I definitely think researchers need to be able to make progress without checking in with the community at every step, and most people won’t do well to try and publish their progress to a broad group, say, weekly. For a typical researcher in an area with poor natural feedback loops I’d guess the right frequency is something like:

1. Weekly: high-context peers (internal colleagues / advisor / manager)

2. Quarterly: medium-context peers (distant internal colleagues / close external colleagues)

3. Yearly: low-context peers and the general world

(I think there are a lot of advantages to writing for these, including being able to go back later, though there are also big advantages to verbal interaction and discussion.)

I think Leverage was primarily short on (3); from the outside I don’t know how much of (2) they were doing and I have the impression they were investing heavily in (1).

• Roughly agreed. Although I’d want to distinguish between feedback and legibility-requirement loops. One is optimised for making research progress, the other is optimised for being paid and respected.

When you’re talking to your weekly colleagues, you have enough shared context and trust that you can ramble about your incomplete intuitions and say “oops, hang on” multiple times in an exposition. And medium-context peers are essential for sanity-checking. This is more about actually usefwl feedback than about paying a tax on speed to keep yourself legible to low-context funders.

Thank you for chatting with me! ^^

• Edit: I mostly retract this comment. I skimmed and didn’t read the post carefully (something one should never do before leaving a negative comment) and interpreted it as “Leverage wasn’t perfect, but it is worth trying to make Leverage 2.0 work or have similar projects with small changes”. On rereading, I see that Jeff’s emphasis is more on analyzing and quantifying the failure modes than on salvaging the idea.

That said, I just want to point out that (at least as far as I understand it) there is a significant collection of people within and around EA who think that Leverage is a uniquely awful organization that suffered a multilevel failure extremely reminiscent of your run-of-the-mill cult (not just for those who left it, but also for many people who are still in it), which soft-core threatens members to avoid negative publicity and exerts psychological control on members in ways that seem scary and evil. This is context that I think some people reading the sterilized publicity around Leverage will lack.

There are many directions from which people could approach Leverage 1.0, but the one that I’m most interested in is lessons for people considering attempting similar things in the future.

I think there’s a really clear lesson here: don’t.

I’ll elaborate: Leverage was a multilevel failure. A fundamentally dishonest and charismatic leader. A group of people very convinced that their particular chain of flimsy inferences led them to some higher truth that gave them advantages over everyone else. A frenzied sense of secrecy and importance. Ultimately, psychological harm and abuse.

It is very clearly a negative example, and if someone is genuinely trying to gain some positive insight into a project from “things they did right” (or noticeably imitate techniques from that project), that would make me significantly less likely to think of them as being on the right track.

There are examples of better "secret projects"—the Manhattan project as well as other high-security government organizations, various secret revolutionary groups like the early US revolutionaries, the abolitionist movement and the underground railroad, even various pro-social masonic orders. Having as one's go-to example to emulate an organization that significantly crossed the line into cult territory (or at least into Aleister Crowley-level grandiosity around a bad actor) would indicate to me a potentially enlarged sense of self-importance, an emphasis on deference and exclusivity ("being on our team") instead of competence and accountability, and a lack of emphasis on appropriate levels of humility and self-regulation.

To be clear, I believe in decoupling and don’t think it’s wrong to learn from bad actors. But with such a deeply rotten track record, and so many decent organizations that are better than it along all parameters, Leverage is perhaps the clearest example of a situation where people should just “say oops” and stop looking for ways to gain any value from it (other than as a cautionary tale) that I have heard of in the EA/​LW community.

• That said, I just want to point out that (at least as far as I understand it) there is a significant collection of people within and around EA who think that Leverage is a uniquely awful organization that suffered a multilevel failure extremely reminiscent of your run-of-the-mill cult (not just for those who left it, but also for many people who are still in it), which soft-core threatens members to avoid negative publicity and exerts psychological control on members in ways that seem scary and evil. This is context that I think some people reading the sterilized publicity around Leverage will lack.

I can’t comment on whether rumors like this still persist in the EA community, but to the degree that they do, I think there is now a substantial amount of available information that allows for a more nuanced picture of the organization and the people involved.

Two of the best, in my view, are Cathleen’s post and our Inquiry Report. Both posts are quite lengthy, but as you seem passionate about this topic, they may nevertheless be worth reading.

I think it’s fair to say that the majority of people involved in Leverage would strongly disagree with your characterization of the organization. As someone who works at Leverage and was friends with many of the people involved previously, I can say that your characterization strongly mismatches my experience.

• worth trying to make Leverage 2.0 work

Note that Leverage 2.0 is a thing, and seems to be taking a very different approach towards the history of science, with regular public write-ups: https://www.leverageresearch.org/history-of-science

• It seems like you're misreading Jeff's post, perhaps deliberately. I would prefer it if people on this forum did this less.

• attempting similar things in the future

I intended this a bit more broadly than you seem to have interpreted it; I’m trying to include exploratory research groups in general.

gain any value from it (other than as a cautionary tale)

That is essentially what this post is: looking in detail at one specific way I think things went wrong, and thinking about how to avoid this in the future.

I expect tradeoffs around how much you should prioritize external communication will continue to be a major issue for research groups!

• Fair enough. I admit that I skimmed the post quickly, for which I apologize, and part of this was certainly a knee-jerk reaction to even considering Leverage as a serious intellectual project rather than a total failure as such, which is not entirely fair. But I think maybe a version of this post I would significantly prefer would first explain your interest in Leverage specifically: that while they are a particularly egregious failure of the closed-research genre, it’s interesting to understand exactly how they failed and how the idea of a fast, less-than-fully transparent think tank can be salvaged. It does bother me that you don’t try to look for other examples of organizations that do some part of this more effectively, and I have trouble believing that they don’t exist. It reads a bit like an analysis of nation-building that focuses specifically on the mistakes and complexities of North Korea without trying to compare it to other less awful entities.

• How many users have you got?

• I would like the ability to control what I see, and to have it show me things I liked again on my main timeline at some future time. Kind of Twitter meets Anki.

• That sounds easy to implement. We're definitely going to give users many choices of feed algorithms, and they'll be able to plug in any custom one.

• Remove the ability to strong-upvote/agree with your own comments/posts, and remove all existing such votes. (I think you can on posts—I haven't written any, so can't check.)

• You automatically strong-upvote your posts, and automatically upvote your comments. I think it should stay this way. But I have status quo bias.

• I disagree. I like being able to sometimes say I think something is important. And the whole point is that my large amounts of karma give me this power.

I wish that some people had a lot more karma than they currently do though. I wish I could transfer some of my karma to other people.

Maybe we could just report how much people have upvoted their own comments.

I agree that personal upvoting shouldn’t affect one’s overall karma.

• Oh interesting. I was imagining that some people are just strong upvoting all of their own stuff, so it rewards dishonest behaviour. But maybe I’m just cynical :P

• No, I have some of that sense that the current system is very gameable.

(A better system might be that your strong upvoting is spread across everything you upvote every week, so you can either give a huge boost to a few things or a small boost to many.)
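A minimal sketch of that hypothetical scheme (the function name and numbers are mine, not an actual forum feature): a fixed weekly strong-vote budget is split evenly across everything the user upvoted that week.

```python
def allocate_strong_votes(weekly_budget: float, upvoted_items: list) -> dict:
    """Split a fixed weekly strong-vote budget evenly across the week's
    upvotes: few upvotes mean a big boost each, many mean a small boost."""
    if not upvoted_items:
        return {}
    boost = weekly_budget / len(upvoted_items)
    return {item: boost for item in upvoted_items}

# One upvote this week: the whole budget lands on it.
print(allocate_strong_votes(16, ["post_a"]))  # {'post_a': 16.0}
# Eight upvotes: each gets a small boost.
print(allocate_strong_votes(16, [f"p{i}" for i in range(8)])["p0"])  # 2.0
```

This removes the incentive to strong-upvote everything, since boosting one item necessarily dilutes the boost on the rest.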

• I agree that personal upvoting shouldn’t affect one’s overall karma

I don’t think it does?

• Habits of thought I’m working on

• Trying to be more gearsy, less heuristics-y. What's actually good or bad about this, what do they actually think, not just what general direction they're pulling the rope, etc.

• Noticing when we’re arguing about the wrong thing, when we e.g. should be arguing about the breakdown of what percent one thing versus another

• Noticing when we’re skating over a real object level disagreement

• Noticing whether I feel able to think thoughts

• Noticing when I’m only consuming /​ receiving ideas but not actually thinking

• Listing all the things that could be true about something

• More predictions / forecasts / models

• More often looking things up / tracking down a fact rather than sweeping it by or deciding I don't know

• Paraphrasing a lot and asking if I’ve got things right

• “Is that a lot?”—putting numbers in context

• If there’s a weird fact from a study, you can question the study as well as the fact

• Say why you think things, including “I saw a headline about this”

Habits of thought I might work on someday

• Reversal tests: reversing every statement to see if the opposite also seems true

• Can't we produce really good text-to-image educational videos? E.g. Eliezer's fictional writings are really fun, and have introduced many of us to this topic. Bonus points if these videos accurately predict the future, gaining us some sort of reputation.

• Hi friends, I am Olin, a humanitarian and development professional currently based in Phnom Penh, Cambodia. I'm interested in GCBRs and governance. If anyone is a course instructor, curriculum designer, or subject-matter expert, it would be nice to connect/chat. Thanks and cheers!

• As a woman who's been involved in EA for a long time, I agree with parts of this post, but I also find that parts of it make frustrating assumptions. For example, I know several men in the EA community taking time off work, or planning to, in order to raise their children. If your hypothesis is correct and EA is focused on the male experience, why wouldn't it include them?

I think a more likely explanation is that these topics are hard to think and write about and no one's managed to do it yet. Up until a couple of years ago, there were only a couple thousand EAs globally—so when I see an article that should exist but doesn't, my first reaction is that there's a gap because no one's had time to do the topic justice, not that the community doesn't care.

• A lot of this post reads as an intro to HLI: what they do, and why wellbeing matters. This is important and, I agree, neglected.

At the same time, you write that this post is about why HLI should receive a grant for a specific proposal of theirs: https://docs.google.com/document/d/1zANITg1HuKAn5uEe7nzepTZXxyMDy44vowsdVcMFiHo/edit

And it seems to me you do not really address the value or specifics of this proposal? Your post reads to me more as ‘we should fund HLI’s research’, but the proposal asks for funding for a grants specialist and seed money. It’s strange to me that you mostly recommend funding them based on prior work (which, again, I also see as work of quality and importance) rather than also evaluating the proposal at hand.

For instance, HLI are requesting $100,000 as a seed fund to e.g. ‘make some early-stage grants’. This would effectively be a regrant of a regrant. People in the comments have expressed skepticism about this (e.g. Nuño’s comment: “FTX which chooses regrantors which give money to Clearthinking which gives money to HLI which gives money to their projects. It’s possible I’m adding or forgetting a level, but it seems like too many levels of recursion, regardless of whether the grant is good or bad.”) There’s a lot of dilution, and I wonder what you think of this?

Other people on Manifold (John and Rina) have pointed out how non-specific this proposal is, how lacking in a plan it appears as currently written, and that there might be harm risks that aren’t considered at all. I understand there might have been word limits, but other proposals are much more concrete. It would be great if Clearer Thinking published more information on how they evaluated all of these final proposals.

• Thanks for highlighting these concerns! Here is what I think about these topics:

1. I focused on doing an overview of HLI and the problem area because, compared to other teams, it seemed like one of the most established and highest-quality orgs within the Clearer Thinking regranting round. I thought this may be missed by some and is a good predictor of the outcomes.

2. I focused on the big-picture lens because the project they are looking for funding for is pretty open-ended.

So far, we’ve looked quite narrowly at GiveWell-style ‘micro-interventions’ in low-income countries to see how taking a happiness approach changes the priorities. This sort of analysis is quite straightforward—it’s standard quantitative economic cost-effectiveness—but we’re not convinced that these sorts of interventions are going to be the best way to improve global wellbeing.
We’ve hired Lily to expand our analysis more broadly: Are there systemic changes that would move the world in the right direction, not just benefit one group? What should be done to improve wellbeing in high-income countries? A world without poverty isn’t a world of maximum wellbeing, so how could moving towards a more flourishing society today impact the long term? These are harder, more qualitative analyses, but no one has tried to tackle them before and we think this could be extremely valuable.

I think the prior performance and the quality of the methodology they are using are good predictors of the expected value of this grant.

3. I didn’t get the impression that the application lacks specific examples, though perhaps it could be improved. They listed three specific projects they want to investigate the impact of:

For example, the World Happiness Report has only been running for ten years but its annual league table of national wellbeing is now well known and sparks discussion amongst policymakers. Further funding to promote the report could substantially raise the profile of wellbeing. Other examples include the World Wellbeing Movement, which aims to incorporate employee wellbeing into ESG investing scores, and Action for Happiness, which promotes societal change in attitudes towards happiness.

That said, I wish they had listed a couple more organizations/projects/policies they would like to investigate, or otherwise communicated something along the lines of: “We don’t have more specifics this time, as the nature of this project is to task Dr Lily Yu with identifying potential interventions worth funding. We therefore focus more on describing methodology, direction, and our relevant experience.”

4. I am not sure how much support HLI gets from the whole EA ecosystem. It may be low. In their EA Forum profile, it appears low: “As of July 2022, HLI has received $55,000 in funding from Effective Altruism Funds”.
Because of that, I thought discussing this topic on a higher level may be helpful.

5. I also think the SWB framework aspect wasn’t highlighted enough in the application. I focused on this because I see a very high expected value in supporting this grant application, as it will help HLI stress-test the SWB methodology further.

6. As for Nuño’s comment: I don’t see a problem with money being passed on through a number of orgs. I sympathize with this fragment of Austin’s comment (please read the whole comment, as this fragment is a little misleading about what Austin meant there):

I’m wondering, why doesn’t this logic apply for regular capitalism? It seems like money when you buy eg a pencil goes through many more layers than here, but that seems to be generally good in getting firms to specialize and create competitive products. The world is very complex, each individual/​firm can only hold so much know-how, so each abstraction layer allows for much more complex and better production.

Initially, FTX decided on the regrant dynamic – perhaps to distribute the intelligence and responsibility to more actors. What if adding more steps actually adds quality to the grants? I think the main question here is whether this particular step adds value.

• 6 Oct 2022 13:52 UTC
1 point
0 ∶ 0

Some extra questions to think about or just have fun:

1 - Should the feed promote EA-aligned posts over others?

2 - Should we optimise for active users? Number of posts? Happiness?

3 - Should it run on a centralised server (like the EA Forum?) or in a decentralised way?

4 - What should be the role of crypto inside the social network, if any?

5 - Should users control the network by coin voting?

6 - Should users be able to fork the network and quickly create a spin-off?

7 - How should it create revenue and redirect it to altruistic causes?

I hope this is enough to kick-start some conversation, as I believe having a better social network aligned with EA values is vital.
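A toy sketch of what question 5 is getting at, i.e. how coin-weighted voting can diverge from one-person-one-vote; the choice names and coin balances below are invented purely for illustration:

```python
def tally(votes, weighted):
    """votes: list of (choice, coin_balance) pairs. Returns the winning choice."""
    totals = {}
    for choice, coins in votes:
        # Each vote counts its coin balance when weighted, otherwise 1.
        totals[choice] = totals.get(choice, 0) + (coins if weighted else 1)
    return max(totals, key=totals.get)

# Three small holders prefer A; one large holder ("whale") prefers B.
votes = [("A", 1), ("A", 1), ("A", 1), ("B", 10)]
print(tally(votes, weighted=False))  # one person, one vote: "A" wins
print(tally(votes, weighted=True))   # coin-weighted: the whale's "B" wins
```

This divergence is why pure coin voting tends to concentrate control in large holders; capped or quadratic weights are common middle-ground designs.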

• I have some sympathy for the second view, although I’m skeptical that sane advisors have significant real impact. I’d love a way to test it as decisively as we’ve tested the “government (in its current form) responds appropriately to warning shots” hypotheses.

On my own models, the “don’t worry, people will wake up as the cliff-edge comes more clearly into view” hypothesis has quite a lot of work to do. In particular, I don’t think it’s a very defensible position in isolation anymore: if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me that these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form “surely my political party will become popular, claim power, and implement policies I like”.)

I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/​advocacy/​improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.

Viewed through that lens, my opinion, and one that I think you will find is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing it as a community. By “under-resourcing it” I don’t just mean money, because as the Flynn campaign showed us, it’s easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, in a broad diversity of approaches that complement one another and collectively increase the chances of success, and in the patience to see those approaches through.

In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen—just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or recent legislation to allow Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.

Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not confuse this for the claim that we’ve been able to do “approximately nothing.” The overall trend for EA and longtermist ideas being taken seriously at increasingly senior levels over the past couple of years is strongly positive.
Some of the diverse factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will’s book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural emergence of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy’s hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time.

All of it is, by itself, not enough to make a difference, and judged on its own terms will look like a failure. But all of it combined is what creates the possibility that more can be accomplished the next time around, and all of the time in between.

• “I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings.”

This is a problem I’ve often spoken about, and one I’m currently writing an essay on for this forum, based on some research I co-authored. People wildly underestimate how hard it is not only to pass governance, but to make sure it is abided by, and to balance the various stakeholders that are required. The AI governance field has a massive sociological, socio-legal, and even ops-experience gap, which means a lot of very good policy and governance ideas die in their infancy because no one who wrote them has any idea how to enact them feasibly.
My PhD is on the governance end of this and I do a bunch of work within government AI policy, and I see a lot of very good governance pitches go splat against the complex, ever-shifting beast that is the human organisation, purely because the researchers never thought to consult a sociologist or incorporate any socio-legal research methods.

• Hi Lawrence—this was an interesting read. Thanks especially for including your personal note at the end; that was a touching testimony to the emotional value of community.

• How tractable are animal welfare problems compared to global health and development problems? I’m asking because I think animal welfare is a more neglected issue, but I still donate to global health and development because I think it’s more tractable.

• I believe they are largely tractable: there’s a variety of intervention types (Policy, Direct work, Meta, Research), cause areas (Alt Proteins, Farmed Animals, Wild animal suffering, Insects), organisations, and geographies in which to pursue them. Of particular note may be potentially highly tractable and impactful work in LMICs (Africa, Asia, the Middle East, Eastern Europe).

I will say animal welfare is a newer and less explored area than global health, but that may mean your donation can be more impactful and make more of a difference, as there could be a snowball effect from funding new high-potential interventions or research. If you are quite concerned about tractability, perhaps you could consider donating to organisations that are doing more research or meta-work to discover more tractable interventions.

Either way, it’s not entirely clear, and it highly depends on your philosophy, risk tolerance, knowledge, and funding counterfactuals.

• What’s the best way to talk about EA in brief, casual contexts?
Recently I’ve been doing EA-related writing and copyediting, which means that I’ve had to talk about EA a lot more to strangers and acquaintances, because ‘what do you do for work?’ is a very common ice-breaker question. I always feel kind of awkward, like I’m not doing the worldview justice or explaining it well.

I think the heart of the awkwardness is that ‘it’s a movement that wants to do the most good possible/do good effectively’ seems tautologous (does anyone want to do less good than possible?); and because EA is kind of a mixture of philosophy and career choice and charity evaluating and [misc], I basically find it hard to find legible concepts to hang it on. For context, I used to be doing a PhD in Greek and Roman philosophy—not exactly the most “normal” job—and I found that way easier to explain XD

• Related questions:

- what’s the best way to talk about EA on your personal social media?

- what’s the best way to talk about it if you go viral on Twitter? (this happened to me today)

- what’s the best way to talk about it to your parents and older family members? etc.

I think ‘templates’ for how to approach these situations risk seeming manipulative and being cringey, as ‘scripts’ always are if you don’t make them your own, but I’d really enjoy reading a post collecting advice from EA community builders, communicators, marketers, content writers, etc. on their experiences trying to talk about EA in various contexts, and what tends to make those conversations go well or badly. (If you’d be interested to produce something like this, I’d be happy to collaborate with you on it)

• I think a collection like the one you’re proposing would be an incredibly valuable resource for growing the EA community.

• It took me a lot of time to consider seriously if I could get an ADHD diagnosis.

• It is scary to take drugs that affect how my brain functions.
• I find ADHD meds significantly increased my productivity (2-5x)

• Conversational moves in EA / Rationality that I like for epistemics:

• “So you are saying that”

• “But I’d change my mind if”

• “But I’m open to push back here”

• “I’m curious for your take here”

• “My model says”

• “My current understanding is…”

• “...I think this because…”

• “...but I’m uncertain about…”

• “What could we bet on?”

• “Can you lay out your model for me?”

• “This is a butterfly idea”

• “Let’s do a babble”

• “I want to gesture at something / I think this gestures at something true”

• I am unsure whether ADHD is “real”, even while having a diagnosis. But I struggle with tasks I see my friends find easy, and taking medication has made me much more productive.

• Ambitious Altruism

When I was doing a bunch of explaining of EA and my potential jobs during my most recent job search to friends, family, and anyone else, one framing I landed on and found helpful was “ambitious altruism.” It let me explain why just helping one person didn’t feel like enough without coming off as a jerk (i.e. “I want to be more ambitious than that” rather than “that’s not effective”). It doesn’t have the maximizing quality, but it doesn’t not have it either, since if there’s something more you can do with the same resources, there’s room to be more ambitious.

• It doesn’t seem hard to allow this kind of feature to be natively supported by the forum, either by having wiki articles which aren’t tags or by allowing a mode where anyone can edit a post.

• 6 Oct 2022 10:29 UTC
5 points
0 ∶ 0

What are the most ambitious EA projects that failed? If we’re encouraged to be more ambitious, it would be nice to have a very rough idea of how cost-effective ambition itself is.
Essentially, I’d love to find or arrive at an intuitive/quantitative estimate of the following variables:

• [total # of particularly ‘ambitious’ past EA projects[1]]

• [total # (or value) of successful projects in the same reference class]

In other words, is the reason why we don’t see more big wins in EA that people aren’t ambitious enough, or are big wins just really unlikely? Are we bottlenecked by ambition? For this reason, I think it could be personally[2] valuable to see a list,[3] one that tries hard to be comprehensive, of failed, successful, and abandoned projects. Failing that, I’d love to just hear anecdotes.

1. ^ Carrick Flynn’s political campaign is a prototypical example. Others include CFAR, Arbital, RAISE. Other ideas include published EA-inspired books that went under the radar, papers that intended to persuade academics but failed, or even just earning-to-give-motivated failed entrepreneurs, etc.

2. ^ I currently seem to have a disproportionately high prior on the “hit rate” for really high ambition, just because I know some success stories (e.g. Sam Bankman-Fried), and this is despite the fact that I don’t see much extreme ambition in the water generally.

3. ^ Such a list could also be useful for publicly celebrating failure and communicating that we’re appreciative of people who risked trying. : )

• Related to this, I’m really curious about our biggest mistakes / practical examples of how we were unproductive. My biggest mistakes (2 YoE at a ~6-developer company):

• Spent months going forward with a misguided rewrite, in part because I didn’t want to throw away weeks of someone else’s work.

• Tried to intermediate between coworker X, who wanted to go for a quick/MVP solution, and coworker Y, who wanted a more complete solution, instead of having them talk to each other as soon as possible.

• Underinvested in developer tooling (e.g. our build system and testing setup), thinking I would stay at the company for just a few months.
Ended up staying there for two years, but even after 4 months the time investment would have paid for itself.

• I’d be excited for features that promote good epistemics, including:

• Interactivity on the site

• Being able to enter prediction markets without leaving the page you’re on

• Better polls than Twitter’s

• Promoting social / gamified regular challenges

• Forecasting

• BOTECs / Fermis

• Good tweet/thread-length explanations of tricky ideas

• Hypothesis generation

• What should Putin do / why is he doing this?

• How should sex ed get taught?

• What funding models are possible besides the ones we have? For science, for EA projects, other?

• How much math should be taught in schools?

• What different models of mental health are there?

• Encouraging a sprint of learning a new thing

• Using Squiggle for the first time

• Writing your first short form

• Making your first TikTok

• etc.

• Promoting regular reflections

• What am I not currently taking seriously?

• What am I confused about?

• Examples of propagation of beliefs to other beliefs / actions

• Promoting independent thinking

• How much do you currently buy longtermism? What are some cruxes?

• Which of the new cause areas are you most excited about?

• etc.

• I agree with Chana, and would like an app that draws you through this process.

• Agree, we’re very much looking to create a social network that improves thought processes. We chose Twitter as a starting point because it has a nice structure (trees/forests) and we plan to modify it to improve critical thinking.

• I would love it if you could expand a bit on the “interactivity” items, since that’s the one we can handle most immediately.
Thanks

• So: being able to enter your prediction on prediction markets without leaving the page you’re on; being able to put an answer to a Fermi or other kind of question, but behind a spoiler tag; being able to collect a bunch of answers to a question (like this question post does); being able to comment on or collaborate on threads; maybe being able to fork threads in the direction you would have gone. Does that gesture at the kinds of things I want?

• Nice post! Like others have said in the comments, it’s hard to come up with concrete takeaways. Personally, I’m going to spend more time with the very few Quakers I know, just to learn more about their general vibe.

• 6 Oct 2022 10:10 UTC
3 points
2 ∶ 0

Some kind of default categorisation filter that makes the frontpage more relevant to you—look to existing forum software for inspiration (currently this feels more like a zillion-author blog)

• 6 Oct 2022 10:06 UTC
5 points
1 ∶ 0

Embed timestamped YouTube videos

• Thank you for raising this issue. You are in your 30s, I am in my 50s, and I am part way through the Intro to EA program. If you can feel like an outsider at 30-something, imagine how it might be for a 50-something. These are, briefly, my thoughts:

1. There is such a predominance of youth that there is a sense that much of this has not been thought about before, and therefore that my lived experience has not much merit. Yet I have lived the life of an EA, even if it had no name.

2. There is a certain complacency in the idea that EA is using science for decision making (I noted Toby Ord’s reference to that in a talk) without perhaps remembering that scientists are simply biased humans too. Galton was a much-lauded academic statistician but perfected eugenics.

3. I have a bias here as someone whose neurodiversity means I have significant issues with mathematical concepts, but who nevertheless managed to understand the excess risk being taken in the City in 2006.
I left my legal role as I was exhausted defending the spread of the much-praised skills of hedge funders etc. I remain convinced that there is a substantial failure to admit that pure human behaviours are very strong over-rulers. Dominant men had new toys and they would be used—something I felt comes through strongly in that excellent Forum post on the race for the nuclear bomb, and which had already begun to come through to me around AI (I had created a short-cut explanation in my head: ‘oh, it’s the usual race thing, and some overpowerful man will just one day set something going because he can’).

4. It is hard to find answers to what feel like some very basic questions, such as the choice of charities on GiveWell. It seems to me that some hard questions don’t even get asked: e.g., why should charitable donations make good what a Nigerian government is failing to do in its own program to distribute Vitamin A? I have searched for criteria that might address the choice of charity but cannot find them. I do not understand why there is no prioritising of vaccines for malaria over reducing the risk of catching it. This is of particular interest to me as I was the founding trustee of a charity in the UK that has its parent in the US. My scout mindset has still found no reason to doubt my support of it. I wonder about the potential for GiveWell acting as a funnel that might adversely affect other charities—creating its own neglectedness criteria. I raise these in the Intro discussions but there is no traction or explanation.

I hope that somehow I will find my place within the EA world—maybe I can set up “EA for Oldies: your contribution is relevant too”? I understand that I have only been looking at the Forum for a month or so; if someone can point to any area that does consider how those of us towards the end of our careers can contribute, I would be very grateful.

• Agreevote and disagreevote on posts. Sometimes I’d like to upvote a post without implying I agree with it.
• I mean, I opposed the original idea, but if we have it for comments we might as well have it for posts.

• I’d guess the reason this was done for comments first is that posts are much longer and more complicated, such that it’s often not clear what “agreeing” with the post even means. I think it’s plausibly a good feature for posts, but I think it makes a lot more sense for comments.

• A human “epistemic spot checker” who tries to find flaws in verifiable claims made in EA Forum posts

• One worry is that if there’s negative feedback for making clearly falsifiable claims, people will stop making clear claims. Another worry is that the service is inaccurate, as sometimes happened with Facebook fact-checkers.

• This would be awesome! I could imagine some people not liking this, as it might make the forum a more intimidating place to post to. I imagine that the kind of person who says this would have less of an issue with:

* people opting in by making the post a non-personal post

* people opting in by adding a “check-me” tag

Another mechanism could be for the forum team to pay out $25 bounties when people falsify claims (as a way to incentivise this kind of checking), and maybe take some of the author’s karma.

• Sequences show up in search results

• Would you want to see the result as “Sequence A, post X”, or the opposite ordering, or just “sequence A”?

• Probably just “sequence A”, though anything would be fine; the point is to remove this gap in the search feature. I’ve searched the name of a sequence multiple times because that’s just what I remember; it doesn’t show up, and I have to Google it.

• Does Peter Singer still consider himself aligned with the Effective Altruism movement? And/or do you forecast that he will in five years’ time?

• If “EA is a question,” and that question is how to do the most good, I think Peter Singer will always consider himself an effective altruist.

However, he seems to disagree about whether the answer to that question entails a predominant focus on common longtermist topics. I suspect, while he will always see himself as an EA, it will be as an EA that has important differences in cause area prioritization. For more info, he discusses his views about longtermism here, perhaps captured best by the following quote:

When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

• For what it’s worth, I think my suggested voting pattern was dumb. I should have said: agreevote if you agree with the reasoning.

• Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs?

• Looking at your profile I think you have a good idea of answers already, but for the benefit of everyone else who upvoted this question looking for an answer, here’s my take:

Are there AI risk scenarios which involve narrow AIs?

Yes; a notable one is military AI, i.e. autonomous weapons (there are plenty of related posts on the EA Forum). There are also multipolar failure modes: risks from multiple AI-enabled superpowers instead of a single superintelligent AGI.

Why does most AI risk research and writing focus on artificial general intelligence?

A misaligned AGI is a very direct pathway to x-risk: an AGI that pursues some goal in an extremely powerful way, without any notion of human values, could easily lead to human extinction. The question is how to make an AI that’s more powerful than us do what we want it to do. Many other failure modes, like bad actors using tool (narrow) AIs, seem less likely to lead directly to x-risk, and are also more of a coordination problem than a technical one.

• 6 Oct 2022 9:06 UTC
1 point
0 ∶ 0

This is great—thanks for writing this.

My addition to this would be that you can increase your empathy for the suffering of others by connecting with your own suffering. Experiences of pain and fear in my own life, definitely make it easier to connect with those feelings in others.

(And as well as helping empathy, connecting with the motivational usefulness of negative experiences can make the experiences themselves feel a little more meaningful (so a little less bad).)

• For what it’s worth, I’m on the list and closely connected to EA :)

• “I think it’s still possible to have some influence in systems where minor parties are unlikely to get elected.”

That’s good news, Michael Dello. If I may ask, what would your strategy be if you were running a campaign in a ‘blue-ribbon’ National Party electorate? E.g. the New England region in NSW :)

Also, how best could a small number of AJP volunteers be used effectively?

• Link is broken and excluded from the Wayback Machine.

• Wait, it’s a small thing, but I think I have a different understanding of decoupling (even though my understanding is ultimately drawn from the Nerst post that’s linked to in your definitional link); consequently, I’m not 100% sure what you mean when you say a common critique was ‘stop decoupling everything’.

You define the antonym of decoupling as the truism that ‘all causes are connected’. This implies that a common critique was that, too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress.

I can imagine this would be a common critique. However, my definition of the antonym is quite different.

I would describe the antonym of decoupling to be a lack of separating an idea from its possible implications.

For example, a low-decoupler is someone who is weirded out by someone who says, ‘I don’t think we should kill healthy people and harvest their organs, but it is plausible that a survival lottery, where random people are killed and their organs redistributed, could effectively promote longevity and well-being’. A low-decoupler would be like, ‘Whoa mate, I don’t care how much you say you don’t endorse the implications of your logic, the fact you think this way suggests an unhealthy lack of an empathy and I don’t think I can trust you’.

Are you saying that lots of critiques came from that angle? Or are you saying that lots of critiques were of the flavour, ‘Too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress’?

Like I said, it’s a minor thing, but I just wanted to get it clear in my head :)

Thanks for the post!

• Your read makes sense! I meant the lumping together of causes, but there was also a good amount of related things about EA being too weird and not reading the room.

• Thanks for the clarification!

• Welcome to the forum!

My guess is that you won’t get a lot of engagement under this post, since the topic has been hashed out here in prior years and most people have already said everything they’ll have to say until there’s new information—there’s a lot of good conversation if you type “climate change” into the search bar to see how the general consensus was built and what all went into the consensus view.

• Upvoted.

This has a lot of good thinking and good research, and more people should see it.

I almost didn’t read this because of the title; I imagine that twelve people bounced off right there for every person who clicked the link, and if asked for advice I’d recommend changing the title to something more indicative of the post itself such as: “Common Advice For EAs Doesn’t Account For or Analyze the Costs Incurred by Women in Childrearing” (or something pithier to the same general effect).

In particular I’d avoid the word cater, it comes across as value-laden and negative. I initially expected this post to be written by an angry man.

It would also greatly benefit from a concise Key Takeaways at the top, as One-time pad and Guy Raveh have already said. That would also signal to the reader what they’re going to read, which is important because people need to make a snap judgement about whether it’s worth their time to continue.

(I’m always awkward around giving advice; if any of this seems brusque or rude please know that it wasn’t intended that way and I appreciated the article.)

This opening

### According to a flagship Effective Altruism (EA) organisation, you have 80,000 hours in your career over a lifetime: 40 hours per week, 50 weeks per year, for 40 years. But does this hold true for women? And if not, what are the implications of this (and related assumptions) for the EA research community and the practical EA community?

is almost perfectly designed to not be read. When I first read the post, my eyes skimmed through this without my brain processing it. I would have read it, if it had been written in normal font and if it were not above what looks like a stock photo.

Upon rereading it, I agree with my brain’s initial learned heuristic of not parsing things written in heading text above what look like stock photos. Don’t waste people’s time beating around the bush or burying the lede, people want to know what they’re reading as soon as possible so that they know if they should read it through and pick over the particulars.

If I’d written it, the opening would read:

A key assumption in 80,000 hours’ advice is that you have 80,000 hours in your career over a lifetime: 40 hours a week, 50 weeks a year, for 40 years. But this doesn’t hold for women, who swallow the larger half of the opportunity cost inherent to raising children.

Take whatever from that you find helpful, and disregard whatever sounds clunky to you.

Moving on:

This post looks to explore the following 3 questions:

a) Does EA cater to women, as both a research field and a practical community?

b) If not, what (if any) female-specific considerations ought to be taken into

account by EA?

c) What (if any) practical steps can be taken to ensure that EA is more inclusive

of women, and considerate of women’s lives?

This is another point where, as a reader, I might have jumped off. You aren’t being paid by how long we stay on this webpage before clicking away! There’s no reason to imitate the style of websites which are paid by how long people have to stay on the page before they can tell what they’re reading. This is safe to cut.

In seeking to answer these questions, this article does the following:

I’d make this clearer and more concise, for example:

Table of Contents:

1. What is effective altruism?

2. Demographics of effective altruism

3. My personal experience with effective altruism

4. Taking women into account (includes analysis of the Giving What We Can Pledge)

5. Preliminary recommendations

I really like this whole part:

How much of the article should you read?

And in fact I like the rest of it as well. Once the article hits its stride, it wears its use to the reader on its sleeve.

Thanks for writing it!

• Thank you for spending the time writing such detailed feedback, Lumpyproletariat! This is the first ever blog-style post I’d written (and first contribution to the EA Forum), so it’s incredibly helpful that you’ve pointed out specific examples and provided alternatives—it’s helped me to see exactly what I can adjust in future. I can see, for example, how much more digestible the ‘Table of Contents’ is the way that you’ve drafted it (and how much more cognitive load it requires to read the ‘In seeking to answer these questions [...]’ part as it is currently). Appreciate it a lot!

• (If I had more time, I’d look into the claim about delaying children leading to more successful careers to see if it’s something I need to quibble about; it feels like the sort of claim that might have correlation and causation confused. But I do not have the time to get into the weeds on this or any other point and for all I know it could be a perfectly good study. I hope more people see this article so that non-busy people can look into that!)

• FYI: I wrote a post about the statistics used in pairwise comparison experiments of the sort used in this post.

• 6 Oct 2022 5:34 UTC
1 point
0 ∶ 0

Great article! A practical and feasible plan to better humanity. This is exactly what we need right now.

• I liked the rigor in your post and learned a lot from it, thank you for writing it.

• 6 Oct 2022 4:52 UTC
18 points
1 ∶ 0

Why hasn’t there been a consensus/​debate between people with contradicting views on the AGI timelines/​safety topic?

I know almost nothing about ML/AI and I don’t think I can form an opinion on my own, so I try to base my opinion on the opinions of more knowledgeable people that I trust and respect. However, what I find problematic is that those opinions vary dramatically, while it is not clear why those people hold their beliefs. I also don’t think I have enough knowledge in the area to be able to extract that information from people myself; e.g. if I talk to a knowledgeable ‘AGI soon and bad’ person they would very likely convince me of their view, and the same would happen if I talked to a knowledgeable ‘AGI not soon and good’ person. Wouldn’t it be a good idea to have debates between people with those contradicting views, figure out what the cruxes are, and write them down? I understand that some people have vested interests in one side of the question; for example, a CEO of an AI company may not gain much from such a debate and thus refuse to participate, but I think there are many reasonable people who would be willing to share their opinion and hear other people’s arguments. Forgive me if this has already been done and I have missed it (but I would appreciate it if you can point me to it).

• Not exactly what you’re describing, but MIRI and other safety researchers did the MIRI conversations and also sort of debated at events. They were helpful and I would be excited about having more, but I think there are at least three obstacles to identifying cruxes:

• Yudkowsky just has the pessimism dial set way higher than anyone else (it’s not clear that this is wrong, but this makes it hard to debate whether a plan will work)

• Often two research agendas are built in different ontologies, and this causes a lot of friction especially when researcher A’s ontology is unnatural to researcher B. See the comments to this for a long discussion on what counts as inner vs outer alignment.

• Much of the disagreement comes down to research taste; see my comment here for an example of differences in opinion driven by taste.

That said, I’d be excited about debates between people with totally different views, e.g. Yudkowsky and Yann LeCun, if it could happen...

• There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/​rationalist communities’ origin story, link here: https://​​wiki.lesswrong.com/​​index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate

Prediction is hard, and reading the debate from the vantage point of 14 years in the future it’s clear that in many ways the science and the argument have moved on, but it’s also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try and learn as much of his worldview as possible so I can analyze other arguments through that frame.

• How irrational was it to be concerned about long covid pre-vaccination? That was my mistake. I presume I should’ve done something differently. But I don’t have much medical knowledge or a strong intuition for how to analyze a study.

• I think people are kinda not concerned enough right now. I just saw an article the other day saying people who caught COVID twice have a much much higher risk of death going forward. It’s not long COVID, but it’s in the same category I think.

• You’ve mentioned upthread that you’re uncertain what exactly EAs should do to be “more like peak Quakerism”, but can you take a stab at some concrete suggestions that illustrate what you mean? I’m just wondering what would change. Paraphrasing examples from other comments:

• emphasizing silence in EA meetups to improve the quality of debate and ideas?

• whatever other aspects of Quaker-style meetings that make them distinctive? (like what?)

• abstaining from alcohol?

• being more co-dependent on others?

• more rituals?

• more children? (probably not?)

• various forms of meditation like loving-kindness and insight, reflection, etc?

• emphasizing other personal behaviors the community should strive to include and praise because they push the group towards long-term success?

• While I would love to see a more detailed investigation on this issue, my first impressions are that:

• Current EA material (80k, OpenPhil) seem adequate at explaining why climate change is usually not a big priority area inside the EA community, while being sufficiently didactic and approachable for most people.

• The material might not be sufficient for a specific group of people: people with experience working on climate change research, activism or public policy.

I’m particularly worried about that last point because I believe there’s a lot of amazing talent currently working on climate change who would have a greater fit working on other causes.

In the same way, reaching activists or influencers working on climate change might be a highly effective way to reach similarly aligned groups of people.

Anecdotally, I’ve had climate activists ask me for introductory materials to EA after receiving conflicting information on it, and I would have loved to point out a specific resource better tailored to them.

Edit: Another point might be that we put too much emphasis on x-risk when talking about climate change. I feel like this does a disservice to many readers, especially considering that neglectedness seems like a more general counterargument to working on climate change.

• Are we including 80k’s problem profile on Climate Change here? This is the explanation that is included in the handbook (and in the intro fellowship) seemingly, precisely for this reason.

• My general sense of the 80k handbook is that it is very careful to emphasise uncertainty and leaves room for people to project existing beliefs without updating.

For example:

Working on this issue seems to be among the best ways of improving the long-term future we know of, but all else equal, we think it’s less pressing than our highest priority areas.

I value the integrity that 80k has here, but I think something shorter, with more direct comparisons to other cause areas, might be more effective.

• While I agree in general, the problem is “something shorter, with more direct comparisons to other cause areas” might have the opposite effect. That is the kind of argument that could induce emotional rejection on people that have already spent significant resources (or have modeled their identities) on fighting climate change. For that specific group of people, you probably need something with significantly more nuance.

• Thanks for sharing this!

• Hi Everyone, I’m a reproductive endocrinologist and Professor of OB/​GYN at Oregon Health & Science University in Portland, OR. My research focuses on innovative assisted reproductive technologies for the treatment of age-related infertility/​ovarian aging and prevention of heritable genetic diseases (germline gene therapy). I am most interested in reproductive ethics. Looking forward to learning from you all. Thanks, Elika!

• Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact!

• FYI I’m also interested in this.
I do think it’s consistent to be pro-choice and place a high value on future lives (both because people might be able to create more future lives by, e.g., working on longtermist causes than by having kids themselves, and because you can place a high value on lives but say that it is outweighed by the harm done by forcing someone to give birth). But I think that pro-natalist and non-person-affecting views do have implications for reproductive rights and the ethics of reproduction that are seldom noticed or made explicit.

• Richard Chappell wrote this piece, though IMHO it doesn’t really get to the heart of the tension.

• Looking forward to developing the Look-Up Timeline further!

• What are the methods (meditation, self-analysis) and tools (podcasts, books, support groups) you use to keep yourself motivated and inspired in Effective Altruism specifically, and in making a difference generally?

• Ability to mute posts/​comments from certain users.

• A way of sharing and upvoting external content along the lines of HackerNews.

There are linkposts already but people don’t post linkposts in the way people do on HN.

• “Listen to this post” button which links to an automatically generated AI narration, and a human narration if available.

• I use this a lot. Google assistant does this for me, and the Nonlinear Library podcast has AI narrations of all posts above some karma threshold.

• More powerful “customise the homepage” options.

In particular: follow a list of topics then see a list that contains only new posts (and classic posts) on those topics.

• Everyone is anon mode: browse Forum without seeing any usernames in posts, comments, etc.

• Nothing counts mode: browse the forum without seeing karma score and comment count.

Anecdotally, a lot of busy people report visiting the Forum many times per week. Some report habitually over-consuming it. One of the biggest costs of the EA Forum might be that it reduces focussed work hours of the most talented people in the community.

So, I propose “EA Forum When Ready”.

Features would be something like:

1. Block schedule: block forum from loading during a period you schedule.

2. Hide the post list on homepage by default. You have to press “Show posts” to see them. (You can still search or browse subpages, just not see the homepage list).

3. Time limit: you can spend X mins per day on Forum, after that it is blocked.

HackerNews famously has a similar feature called “no procrast(inate)” mode.

• Get more wiki activity happening:

Wiki additions can get upvoted and earn karma. Update: this is already the case.

New and upvoted wiki entries show up in feeds. Update: this can happen, to some extent.

Revising my comment: consider ways of making the wiki more prominent and better integrating it into the forum (I realize this is now kind of vague).

• As in Stack Exchange: as you are posting, it suggests posts that have similar content (based on ML, I guess) … suggesting you make your post a comment on their post instead. (Or an edit/​addition to their post, if this feature is also enabled.)

• Thanks for this post. As someone hesitant to post on the forum, I would add that I am sometimes unsure if a large number of forum readers would actually be interested in a specific idea or topic I would want to write about. It might be nice if there were sub-forums for people interested in researching and learning about specific areas—that way, it would be virtually guaranteed that everyone reading the post would be interested in the content. On the other hand, it might make the space less interdisciplinary.

• Very inspiring, thanks!

• Within the field of AI safety, what does “alignment” mean?

• The “alignment problem for advanced agents” or “AI alignment” is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good outcomes in the real world.

Both ‘advanced agent’ and ‘good’ should be understood as metasyntactic placeholders for complicated ideas still under debate. The term ‘alignment’ is intended to convey the idea of pointing an AI in a direction—just like, once you build a rocket, it has to be pointed in a particular direction.

“AI alignment theory” is meant as an overarching term to cover the whole research field associated with this problem, including, e.g., the much-debated attempt to estimate how rapidly an AI might gain in capability once it goes over various particular thresholds.

Other terms that have been used to describe this research problem include “robust and beneficial AI” and “Friendly AI”. The term “value alignment problem” was coined by Stuart Russell to refer to the primary subproblem of aligning AI preferences with (potentially idealized) human preferences.

• 6 Oct 2022 2:05 UTC
1 point
1 ∶ 0

Material accounting controls on chemical weapons and precursors under the chemical weapons convention might fulfil some of your criteria.

• Wow, Lawrence, great post. I have been beating a similar drum for some years. I am a humanist/​Universalist, but previously I was a Christian pastor, and I have no negative thoughts toward religion; in fact, as a Universalist I admire all religions, and as a humanist, no religion. But like you, I understand there is so much good, usable stuff to be mined from religion. I think Anabaptists are also similar in many ways to Quakers and I’ve learned so much from them, though never a member myself.

As a community of altruists wanting to reach peak effectiveness, it seems so obvious to me that we should do what you’re saying. The greatest altruist communities on earth today are religious communities, and it’s always been that way… universities as a thing, hospitals as a thing, were directly created by religious groups. Research and science all come from religious origins. Of course, religion in Europe got so toxic just at the point the scientific revolution took off that science was a meaningful refuge away from religion… you could have a life that made sense and do good things in the world for the first time away from the toxic religious situation of that time, so educated people began fleeing religion. That set up the current unfortunate situation of secular humanists looking down their noses at religious people, culture, etc. I feel it’s a pendulum thing where they had to swing away, to escape the toxicity of hundreds of years of European religious wars following the Reformation in the 1500s, but now we can swing back and realize how much good was in religion, as you’ve shown.

Religion has these swings itself, where it goes good for some years and then usually gets a taste of political power and swings towards doing bad. I experienced this swing when the religious right in the 80’s discovered they could elect Presidents and Legislators and grabbed the power, leading to the current poor state. I was in a big movement in the 90’s and 00’s where basically a bunch of us ejected ourselves out of the horrible directions things were going and started an alternative movement much inspired by Quakers, Anabaptists and others, called the Emerging Church Movement. It was a bunch of young leaders wanting to go in a way different direction, and it led to a lot of great things. Unfortunately it was too white and male and we decided to lay it down and let others not white and male lead us forward… a very good death.

One example of a useful thing for EA is cross-cultural communication. For millennia, religious people have crossed borders to communicate their messages… motivated by pure pragmatism, they slowly but surely got good at it. My Bachelor’s degree is 50% exactly that… we got into it at a deep level. It has helped me so much in so many areas of life over the years. I think EA needs it in this way: to cross the cultural divide between elite/​STEM culture and creative/​business and working-class culture. For EA to grow and prosper, these folks need to come in, and there is a lot of work to be done to be able to communicate with them in ways they feel culturally comfortable with. That’s the goal of cross-cultural communication: for the other to hear you in their own language and culture, the place they feel comfortable.

Here’s where I think the trajectory of your post can be very fruitful… it’s not so much setting up a bunch of EA groups to study Quakerism… I doubt that would happen. Rather, it’s just to culturally swing the pendulum to a place where elite academic STEM people feel comfortable with and are more accepting of religious people and ideas, and lose their penchant to dismiss them, instead having a new curiosity and openness.

This becomes pragmatically hugely important if you found a new charity in Africa or South America or Asia and find yourself needing to work with local stakeholders who are frequently going to be religious. Then you are forced to get more religion-friendly… why not start now and be ready?

• 6 Oct 2022 1:58 UTC
2 points
0 ∶ 0

One thing that would be really useful in terms of personal planning, and maybe would be a good idea to have a top level post on, is something like:

What is P(I survive | I am in location X when a nuclear war breaks out)

for different values of X such as:

(A) a big NATO city like NYC

(B) a small town in the USA away from any nuclear targets

(C) somewhere outside the US/​NATO but still in the northern hemisphere, like Mexico. (I chose Mexico because that’s probably the easiest non-NATO country for Americans to get to)

(D) somewhere like Argentina or Australia, the places listed as being most likely to survive in a nuclear winter by the article here https://​​www.nature.com/​​articles/​​s43016-022-00573-0

(E) New Zealand, pretty much where everyone says is the best place to go?

Probably E > D > C > B > A, but by how much?

As others have said, even (B) (with a suitcase full of food and water and a basement to hole up in) is probably enough to avoid getting blown up initially, the real question is what happens later. It could be that all the infrastructure just gets destroyed, there’s no more food, and everyone starves to death.

Of course another thing to take into account is that if I just decide to go somewhere temporarily and there’s a war, I’ll be stuck somewhere that’s unfamiliar, where I may not speak the local language, and where I am not a citizen. Whether that is likely to affect my future prospects is unclear.

If it turns out that we’ll be fine as long as we can survive the bombs and the fallout, that’s one thing. But if we’ll just end up starving to death unless we’re in the Southern Hemisphere, then that is another thing.

(Does the possibility of nuclear EMP (electromagnetic pulse) attacks need to be factored in? I’ve heard claims like ‘one nuke detonated in the middle of the USA at the right altitude would destroy almost all electronics in the USA’, and maybe nearby countries would also be in the radius. If true, likely it would happen in a nuclear war. And of course that would also have drastic implications for survivability afterward. I don’t know how reliable this is, though.)

Another important question is “how much warning will we have?” Even a day or two’s worth of warning is enough to hop on the next flight south, but certainly there are some scenarios where we won’t even have that much.
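If it helps make the comparison concrete, the E > D > C > B > A intuition can be written as a toy model where surviving means living through both the initial strikes and the aftermath. Every probability below is a made-up placeholder for illustration, not an estimate from any source:

```python
# Toy framing: P(survive | location X) = P(survive strikes) * P(survive aftermath),
# treating the two phases as independent. All numbers are hypothetical placeholders.

LOCATIONS = {
    # location: (p_survive_strikes, p_survive_aftermath)
    "A: big NATO city":       (0.20, 0.30),
    "B: small US town":       (0.90, 0.40),
    "C: Mexico":              (0.95, 0.50),
    "D: Argentina/Australia": (0.99, 0.80),
    "E: New Zealand":         (0.99, 0.90),
}

def p_survive(p_strikes: float, p_aftermath: float) -> float:
    """Survival requires living through both phases."""
    return p_strikes * p_aftermath

# Rank locations from safest to most dangerous under these placeholder numbers.
ranked = sorted(LOCATIONS.items(), key=lambda kv: p_survive(*kv[1]), reverse=True)
for name, probs in ranked:
    print(f"{name}: P(survive) = {p_survive(*probs):.2f}")
```

The point of the decomposition is that the "by how much" question splits into two very different sub-questions: blast survival (which favors B over A enormously) and aftermath survival (which is what separates D and E from everything else).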

• A bunch of us here at the Prague Fall Season would like to know this for F) a medium NATO capital outside the US/​UK.

• Those are good questions on survival in different locations, and I haven’t seen estimates of those (lots of uncertainty in response). I think the EMP from a single detonation is not quite that bad, but I would expect many EMPs in a full-scale exchange. With two days warning, most flights will already be full, and the flight capacity over a few days is much smaller than the population, so I would not count on that. But driving is more feasible if you own a car (ride share would be more problematic).

• Of course that depends on whether everyone else is also evacuating. For instance do we expect that if a tactical nuke is used in Ukraine a significant amount of the US population will be trying to evacuate? As has been mentioned before there was not a significant percentage of the US population trying to evacuate even during the Cuban Missile Crisis, and that was probably a much higher risk and more salient situation than we face now.

• Was the Cuban Missile Crisis higher risk than actual nukes going off? Actual nukes seem to me to be more salient.

• Is your altruism more effective now than it was four years ago? (Instead of the election question, “Are you better off now than you were four years ago?”)

• Who is Phil, and why does everyone talk about how open he is?

• 6 Oct 2022 0:57 UTC
2 points
0 ∶ 0

What are the strongest arguments for believing that conscious experiences can have negative value?

• 6 Oct 2022 0:50 UTC
1 point
0 ∶ 0

How do we tag the post?

The instructions say to tag the post with “Future Fund worldview prize”, but it does not seem possible to do this. Only existing tags can be used for tagging as far as I can tell, and this tag is not in the list of options.

• 6 Oct 2022 0:49 UTC
4 points
0 ∶ 0

How funny. I find myself intimidated just writing this comment. That said, this is an excellent post that quite accurately conveys the internal struggles and challenges of a newbie poster like me.

Though I’ve years and years of experience with brands, comms and strategy and have written, presented and engaged with boards, CEOs and rooms full of incredibly bright people, I’ve found that posting my thoughts on the EA forum is weirdly terrifying.

I’ve been trying to understand why for a few weeks now. As a member of a few other communities, in which I feel very comfortable, I thought it might be helpful to detail my insecurities here:

• As a newcomer I get the sense that I accidentally wandered into a super cool group of super clever people discussing super clever things and that I wasn’t supposed to get the invite. Of course being me, I then went on to misread the invite and showed up in a clown costume honking a horn

• I like to write. But I write in a more personal and expressive style (much like this comment) and, like others, I find the writing style here is very complex, formal and intimidating. It’s not like my style, so I think maybe I shouldn’t post

• Voluntarily offering my thoughts and ideas to be judged by very smart people isn’t my favorite thing to do so I’ve found I get stuck writing and editing in circles in order to sound clever instead of clear

• My expertise is in the highly subjective areas of strategies, brands, messages, creative thinking and customer experiences which feels massively superficial compared to the big stuff people talk about in here

• I suspect like many others, I’ve got an oversized fear of trolls, or of tripping on the outrage tripwire. I’m not saying trolls are here, just that outrage seems to be everywhere these days and a lot of us have learned to be hyper-cautious

• I came into the EA shop looking for clever, rational people and communities with interests in social, communal, behavioral, public policy, creative design and systems based EA and found AI, BioRisk and Animal Welfare communities—which are all great, but again, I think I misread the invite

• In my other groups I tend to respond really well to questions. I’ve got lots of thoughts and ideas of my own, but to answer a question feels a lot more helpful

• Though I really, really, love quant, my comfort zone is qualitative, so I’m more comfortable in posting simple questions that spark discussions (ie “Anyone read that Salon article?”) rather than proposing ideas and theories for robust testing and interrogation

To be fair, this is a really awesome forum that I might have found purely by mistake. I’m still not sure if I should be here, but I have been made to feel welcome, valued, and encouraged. I’ve got loads to say on the EA brand and its communications challenges and opportunities, but every draft post I’ve written (four so far) I’ve talked myself out of. I’ve no doubt my insecurities have gotten the better of me but, in my line of work, first impressions really do matter and I can’t seem to get past the fact that my first impression here matters more than in any other forum.

(Gulp!)

• 6 Oct 2022 0:34 UTC
2 points
1 ∶ 0

I think the answer to this question is simply a big “Yes”.

The skeptical people I’ve met often seem very open to being convinced, so a well-written article would clear up a lot (and maybe bring in a bunch of criticism from outside, which I assume is the reason why it hasn’t been done yet).

• Speaking for myself, I found Open Philanthropy’s investigation of Climate Change pretty convincing. Maybe we should publicize it more and see which part people find unconvincing?

To me the big problem with the Open Phil document is that it’s from 2013, which was a long time ago both in terms of the evolution of EA and in terms of climate policy. Given the volume of public interest in the topic, it’s probably worth investing in an up-to-date treatment (and one that is kept up to date) that serves as a primer on neglectedness, true existential risk, and other key considerations without coming across as totally oblivious.

• 6 Oct 2022 0:03 UTC
22 points
3 ∶ 0

Search among a specific user’s posts/​comments

• 1) What level of funding or attention (or other metrics) would longtermism or AI safety need to receive for it to no longer be considered “neglected”?

2) Does OpenPhil or other EA funders still fund OpenAI? If so, how much of this goes towards capabilities research? How is this justified if we think AI safety is a major risk for humanity? How much EA money is going into capabilities research generally?

(This seems like something that would have been discussed a fair amount, but I would love a distillation of the major cruxes/​considerations, as well as what would need to change for OpenAI to be no longer worth funding in future).

• What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as “important”?

• When I read Critiques of EA that I want to read, one very concerning section seemed to be “People are pretty justified in their fears of critiquing EA leadership/​community norms.

1) How seriously is this concern taken by those that are considered EA leadership, major/​public facing organizations, or those working on community health? (say, CEA, OpenPhil, GiveWell, 80000 hours, Forethought, GWWC, FHI, FTX)

2a) What plans and actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/​why not?

3) Are there publicly available compilations of times where EA leadership or major/​public facing organizations have made meaningful changes as a result of public or private feedback?

(Additional note: there were a lot of publicly supportive comments [1] on the Democratising Risk—or how EA deals with critics post, yet it seems that despite these public comments, the author was disappointed by what came out of it. It’s unclear whether the recent Criticism/Red-teaming contest was a result of these events, but it would be useful to know which organizations considered or adopted any of the suggestions listed[2] or alternate strategies to mitigate the concerns raised, and the process behind this consideration. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered “EA Leaders”.)

1. ^

1, 2, 3, 4

2. ^

“EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders’ forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes.”

• Is anybody trying to model/​think about what actions we can do that are differentially leveraged during/​in case of nuclear war, or the threat of nuclear war?

In the early days of covid, most of us were worried, many of us had reasonable forecasts, and many of us did stuff like buy hand sanitizer and warn our friends, but very few of us shorted airline stocks or lobbied for border closures or did other things that could’ve gotten us differential influence or impact from covid.

I hope we don’t repeat this mistake.

• Very bad societal breakdown scenarios

There’s a spectrum here, but in the worse scenarios the aftermath could be really chaotic: safety and crime wouldn’t just be issues; something like “warlords” might exist. You might have some sort of nominal law/government, but in practice, violence and coercion would be normal [1].

I think the breakdown after Hurricane Katrina is a good example, though this would be more severe. The chaos in Libya and other nations offers further examples.

This seems cartoonish, but in this situation, the most impactful thing for the average EA to do is to involve themselves and gain position in some level of “government”, probably in a nontraditional sense[2].

Strong generalist skills, work ethic, communication, charisma, leadership, interpersonal judgement, as well as physical endurance would be important. These would be valuable because I think success, even local, might give access to larger political power.

If this seems silly, the alternatives seem sillier: EAs trying to provision services (farming) or write papers/lobby without knowledge of the new context doesn’t seem useful.

1. ^

Because I think some EAs are sensitive, I am trying to not show this, but I think movies like Threads or The Day After communicate the scenarios I’m thinking about.

https://​​en.wikipedia.org/​​wiki/​​Threads_(1984_film)

https://​​en.wikipedia.org/​​wiki/​​The_Day_After

2. ^

I think some EAs with a lot of access/​connections might get positions in the post-event central government, but this seems limited to those very lucky/​senior (most government employees will be laid off).

• This is wonderful. Thank you for writing this.

• # WWOTF: what did the publisher cut? [answer: nothing]

Contextual note: this post is essentially a null result. It seemed inappropriate both as a top-level post and as an abandoned Google Doc, so I’ve decided to put out the key bits (i.e., everything below) as Shortform. Feel free to comment/​message me if you think that was the wrong call!

## Actual post

On his recent appearance on the 80,000 Hours Podcast, Will MacAskill noted that Doing Good Better was significantly influenced by the book’s publisher:[1]

Rob Wiblin: …But in 2014 you wrote Doing Good Better, and that somewhat soft pedals longtermism when you’re introducing effective altruism. So it seems like it was quite a long time before you got fully bought in.

Will MacAskill: Yeah. I should say for 2014, writing Doing Good Better, in some sense, the most accurate book that was fully representing my and colleagues’ EA thought would’ve been broader than the particular focus. And especially for my first book, there was a lot of equivalent of trade — like agreement with the publishers about what gets included. I also wanted to include a lot more on animal issues, but the publishers really didn’t like that, actually. Their thought was you just don’t want to make it too weird.

Rob Wiblin: I see, OK. They want to sell books and they were like, “Keep it fairly mainstream.”

Will MacAskill: Exactly...

I thought it was important to know whether the same was true with respect to What We Owe the Future, so I reached out to Will’s team and received the following response from one of his colleagues [emphasis mine]:

Hi Aaron, thanks for sending these questions and considering to make this info publicly available.

However, in contrast to what one might perhaps reasonably expect given what Will said about Doing Good Better, I think there is actually very little of interest that can be said on this topic regarding WWOTF. In particular:

I’m not aware of any material that was cut, or any other significant changes to the content of the book that were made significantly because of the publisher’s input. (At least since I joined Forethought in mid-2021; it’s possible there was some of this at earlier stages of the project, though I doubt it.) To be clear: The UK publisher’s editor read multiple drafts of the book and provided helpful comments, but Will generally changed things in response to these comments if and only if he was actually convinced by them.

(There are things other than the book’s content where the publisher exerted more influence – for instance, the publishers asked us for input on the book’s cover but made clear that the cover is ultimately their decision. Similarly, the publisher set the price of the book, and this is not something we were involved in at all.)

As Will talks about in more detail here, the book’s content would have been different in some ways if it had been written for a different audience – e.g., people already engaged in the EA community as opposed to the general public. But this was done by Will’s own choice/​design rather than because of publisher intervention. And to be clear, I think this influenced the content in mundane and standard ways that are present in ~all communication efforts – understanding what your audience is, aiming to meet them where they are and delivering your messages in way that is accessible to them (rather than e.g. using overly technical language the audience might not be familiar with).

1. ^

Quote starts at 39:47

• (I don’t think this is done already but downvote or comment if so)

Some optional/additional way to weight karma as a percentage of total users (active users? readers?) at the time of posting, so that sorting all posts by karma doesn’t put only newer posts at the top, with older popular posts buried among newer, less popular ones.

• Adding to the causal evidence, there’s a 2019 paper that uses wind direction as an instrumental variable for PM2.5. They find that IV > OLS, implying that observational studies are biased downwards:

Comparing the OLS estimates to the IV estimates in Tables 2 and 3 provides strong evidence that observational studies of the relationship between air pollution and health outcomes suffer from significant bias: virtually all our OLS estimates are smaller than the corresponding IV estimates. If the only source of bias were classical measurement error, which causes attenuation, we would not expect to see significantly negative OLS estimates. Thus, other biases, such as changes in economic activity that are correlated with both hospitalization patterns and pollution, appear to be a concern even when working with high-frequency data.

They also compare their results to the epidemiology literature:

To facilitate comparison to two studies from the epidemiological literature with settings similar to ours, we have also estimated the effect of PM 2.5 on one-day mortality and hospitalizations [...] Using data from 27 large US cities from 1997 to 2002, Franklin, Zeka, and Schwartz (2007) reports that a 10 μg/​m3 increase in daily PM 2.5 exposure increases all-cause mortality for those aged 75 and above by 1.66 percent. Our one-day IV estimate for 75+ year-olds [...] is an increase of 2.97 percent [...]

On the hospitalization side, Dominici et al. (2006) uses Medicare claims data from US urban counties from 1999 to 2002 and finds an increase in elderly hospitalization rates associated with a 10 μg/​m3 increase in daily PM 2.5 exposure ranging from 0.44 percent (for ischemic heart disease hospitalizations) to 1.28 percent (for heart failure hospitalizations). We estimate that a 10 μg/​m3 increase in daily PM 2.5 increases one-day all-cause hospitalizations by 2.22 percent [...], which is 70 percent larger than the heart failure estimate and over five times larger than the ischemic heart disease estimate. Overall, these comparisons suggest that observational studies may systematically underestimate the health effects of acute pollution exposure.
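To illustrate the mechanism the authors describe, here is a minimal simulated sketch (all variable names and magnitudes are invented; this is not the paper's actual specification) of why classical measurement error attenuates OLS toward zero while an instrument like wind direction recovers the true effect:

```python
import numpy as np

# Toy simulation: measurement error biases OLS toward zero,
# while two-stage least squares with a valid instrument does not.
rng = np.random.default_rng(0)
n = 200_000
beta = 2.0  # true effect of exposure on the outcome (made up)

wind = rng.normal(size=n)                  # instrument: shifts pollution but is
                                           # unrelated to other health determinants
pm_true = 1.5 * wind + rng.normal(size=n)  # true PM2.5 exposure
outcome = beta * pm_true + rng.normal(size=n)
pm_obs = pm_true + rng.normal(scale=2.0, size=n)  # noisy monitor readings

def fit(x, y):
    """Return (intercept, slope) of an OLS regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = fit(pm_obs, outcome)[1]   # attenuated toward zero (well below beta)

# Two-stage least squares: predict exposure from the instrument,
# then regress the outcome on the predicted exposure.
a0, a1 = fit(wind, pm_obs)
pm_hat = a0 + a1 * wind
b_iv = fit(pm_hat, outcome)[1]    # close to the true beta
```

The measurement error only enters the observed regressor, so the first stage filters it out, which is the sense in which IV > OLS under attenuation bias.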

• 5 Oct 2022 22:16 UTC
3 points
3 ∶ 0

RSS feeds for tags (I’d be surprised if anyone else wants this, but maybe?)

• A set of related questions RE: longtermism/​neartermism and community building.

1a) What is the ideal theory of change for Effective Altruism as a movement in the next 5-10 years? What exactly does EA look like, in the scenarios that community builders or groups doing EA outreach are aiming for? This may have implications for outreach strategies as well as cause prioritization.[1]

1b) What are the views of various community builders and community building funders in the space on the above? Do funders communicate and collaborate on a shared theory of change, or are there competing views? If so, which organizations best characterize these differences, what are the main cruxes/​where are the main sources of tension?

2a) A commonly discussed tension on this forum relates to neartermism versus longtermism, or AI safety versus more publicly friendly cause areas like global health. How much of neartermist work’s value comes from it being an inherently valuable cause area, and how much from it being intended as an onramp to longtermism/AI safety?

2b) What are the views of folks doing outreach and funders of community builders in EA on the above? If there are different approaches, which organizations best characterize these differences, what are the main cruxes/​where are the main sources of tension? I would be particularly interested in responses from people who know what CEA’s views are on this, given they explicitly state they are not doing cause-area specific work or research. [2]

3) Are there equivalents [3] of Longview Philanthropy who are EA aligned but do not focus on longtermism? For example, what EA-aligned organization do I contact if I’m a very wealthy individual donor who isn’t interested in AI safety/​longtermism but is interested in animal welfare and global health? Have there been donors (individual or organizational) who fit this category, and if so, who have they been referred to/​how have they been managed?

1. ^

“Big tent” effective altruism is very important (particularly right now) is one example of a proposed model, but if folks think AI timelines are <10 years away and p(doom) is very high, then they might argue EA should just aggressively recruit for AI safety folks in elite unis.

2. ^

Under Where we are not focusing: “Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)”

3. ^

“designs and executes bespoke giving strategies for major donors”

• Excited about EA Global? The Centre for Effective Altruism’s Events Team is hiring for multiple roles, including an EA Global Events Associate. The deadline for applying is next Tuesday, October 11!

This is a test by the EA Forum Team to gauge interest in job ads relevant to posts - give us feedback here.

• will add events like this to the EA opportunity board!

• Allow commenting in “tag” pages, as if they were normal posts:

And have this be the default place for an AMA, or for discussing whether an AI company is net-good or net-bad, or whatever, instead of having those conversations scattered (getting lost, repeating themselves) across different posts about the org

• There’s something here about collating all the relevant discussion around an org or tag, but I feel there’s a lot of possible options—not sure what the best solution is.

• You can actually comment on tag pages, by clicking on “Discussion” under the tag name. I don’t think that feature is really a good fit for your particular use case though—for that, I would suggest creating a post to start the discussion and tagging it with the organization name.

• YouTube and TikTok.

The literal boss of CEA online suggested a video format and there’s basically an EA TikTok account with 50K followers.

Let’s go!

• The literal boss of CEA online suggested a video format and there’s basically an EA TikTok account with 50K followers.

should be:

The literal boss of CEA online suggested a video format and made an EA TikTok account with 50K followers.

• I’ll also add that I think we should take a stance against TikTok in particular because (I assume) it makes our data accessible to the Chinese government, who I think are probably the most dangerous actor on the world stage.

• The vibe around TikTok is bad, but the concrete risks and causal links to harm seem unclear. This article seems balanced:

https://​​www.wired.com/​​story/​​tiktok-nationa-security-threat-why/​​

• I read most of the article and skimmed the rest. I think it’s short-sighted and misses a few points.

1. It’s very US-centric, e.g. the quote “In a series of responses this summer both to lawmakers and the public, TikTok has staunchly maintained that it does not and would never share US user data with the Chinese government.”, which refers only to US data. So the Chinese authorities can still easily mine non-US data. Which is… a lot.

2. It believes ByteDance’s statements too much. They say they would never share US data with the Chinese authorities, but do you really buy it, when they could just share it and compel TikTok Inc. to hide it somehow?

3. It mentions that China is notorious for stealing data, but I think an app like TikTok is a whole new level. Imagine you were Robin Hood, and you discovered that instead of putting all this energy into assaulting rich guys in the forest, you could somehow open a bank and convince all the rich people’s kids to open accounts and pay fees. You’d take that deal, for sure.

4. This is actually my main point—it underestimates the risks of data mining, which I think mostly come from advanced AI. You could, for example, use vast amounts of data to create models of American society to plan strategic moves, or to find outliers and recruit spies, or to use existing AI to find people who criticise the CCP, etc.

• Hmm, yes that post contains a video, you’re right.

I can’t seem to replicate this, from limited effort. I (very quickly) tried drafting a post—it wasn’t obvious how to embed a YouTube video.

Also, I’m thinking about comments and I can’t find an easy way to do this in comments.

It’s possible that this embedding can be achieved by a different editing mode (selected by the EA Forum settings). If you can show this, that would be great.

You’re right, YouTube links turn into videos (my links weren’t working; maybe they were too long).

• Emoji replies, like Slack

(Nathan, you can use this for polls!)

• Nudge people to write shorter posts:

For example: allow users to sort their feed by an algorithm that puts shorter posts higher up (not ignoring karma, but also taking length into account)
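One way such a feed could work, as a toy sketch (the specific weighting and the read-time field are invented for illustration):

```python
import math
from dataclasses import dataclass

# Toy sketch: rank posts by karma discounted logarithmically by read time,
# so a short post can outrank a longer one with somewhat higher karma.
@dataclass
class Post:
    title: str
    karma: int
    minutes: int  # estimated read time

def score(post: Post, length_weight: float = 0.5) -> float:
    # Karma still dominates, but longer posts are discounted.
    return post.karma / (1 + length_weight * math.log1p(post.minutes))

posts = [Post("long deep dive", 120, 45), Post("short note", 90, 3)]
ranked = sorted(posts, key=score, reverse=True)
print([p.title for p in ranked])  # the short post outranks the higher-karma long one
```

A logarithmic discount is one way to avoid over-penalizing long posts; the `length_weight` knob controls how strong the nudge is.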

• It might be tough to implement this in a way that doesn’t boost linkposts (which I think would be counter to your purpose).

1. I agree

2. No sorting algorithm is perfect. The relevant question, I think, is whether this would be better than the current algorithm. (Would you prefer using it even though linkposts would rank too high?)

3. With some extra effort, one could solve most of the link post problem. Specifically, I think the forum currently supports built-in link posts. Or one could search for “linkpost” or “link post” in the first line. But in practice I would just leave this problem as-is and see if anyone still uses this feed

• Nudge people to write shorter posts or at least to summarize them.

For example: Have the default text of a new post be “TL;DR:”

• A better feed

• A draft post about my main opinion on how to become a more productive developer, and many examples of ways I think people get it wrong:

https://​​docs.google.com/​​document/​​d/​​1LiS31GxerfaBHYcw3nBrw11rMesjh2Lr_VemjQtUBO0/​​edit#

I’m especially interested in feedback from:

1. Developers who want to be more productive

2. People who manage developers

[Is this subforum a good place to post early drafts and get comments?]

• Is this subforum a good place to post early drafts and get comments?

Yes!

• I mainly wonder if I’ll get comments on it

• ‘If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?’

or, another way to put this:

‘I worry that if I learn more about animal welfare, global poverty, and existential risks, then all of my previous meat-eating, consumerist status-seeking, and political virtue-signaling will make me feel like a bad person’

(This is a common ‘pain point’ among students when I teach my ‘Psychology of Effective Altruism’ class)

• What has helped me most is this quote from Seneca:

Even this, the fact that it [the mind] perceives the failings it was unaware of in itself before, is evidence for a change for the better in one’s character.

That helped me feel a lot better about finding unnoticed flaws and problems in myself, which always felt like a step backwards before.

I also sometimes tell myself a slightly shortened Litany of Gendlin:

What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
People can stand what is true,
for they are already enduring it.

If X and Y make me a bad person, then...I’m already being a bad person. Owning up to it doesn’t make it worse, and ignoring it doesn’t make it disappear. Owning up to it is, in fact, a sign of moral progress—as bad as it feels, it means I’m actually a better person than I was previously.

• My personal approach:

• I no longer think of myself as “a good person” or “a bad person”, which may have something to do with my leaning towards moral anti-realism. I recognize that I did bad things in the past and even now, but refuse to label myself “morally bad” because of them; similarly, I refuse to label myself “morally good” because of my good deeds.

• It doesn’t mean it’s okay to do bad things. I ask myself to do good things and not to do bad things, not because this makes me a better person, but because the things themselves are good or bad.

• The past doesn’t matter (except in teaching you lessons), because it is already set in stone. The past is a constant term in your (metaphorical) objective function; go optimize for the rest (i.e. the present and the future).

• I think it’s okay to feel guilty, shame, remorse, rage, or even hopeless about our past “mistakes”. These are normal emotions, and we can’t or rather shouldn’t purposely avoid or even bury them. It’s analogous to someone being dumped by a beloved partner and feeling like the whole world is crumbling. No matter how much we try to comfort such a person, he/​or she will feel heartbroken.

In fact, feeling bad about our past is a great sign of personal development, because it means we realize our mistakes! We can’t improve ourselves if we don’t even know what we did wrong in the first place. Hence, we should burn these memories into our minds and apologize to ourselves for making such mistakes. Then we should promise ourselves (or even better, make concrete plans) to prevent repeating the same mistakes or to repair the damage (e.g., eat less or no meat, be more prudent in spending or donate more to EA causes, etc.)

Nobody is born a saint, so keep learning and growing into a better person :)

• I might be missing the part of my brain that makes these concerns make sense, but this would roughly be my answer: Imagine that you and everyone in your household consume water with lead in it every day. You have the chance to learn whether there is lead in the water. If you learn that there is, you’ll feel very bad, but you’ll also be able to change your source of water going forward. If you learn that there is not, you’ll no longer have this nagging doubt about the water quality. I think learning about EA is kind of like this. It will be right or wrong to eat animals regardless of whether you think about it, but only if you learn about it can you change for the better. The only truly shameful stance, at least to me, is to intentionally put your head in the sand.

My secondary approach would be to say that you can’t change your past but you can change your future. There is no use feeling guilt and shame about past mistakes if you’ve already fixed them going forward. Focus your time and attention on what you can control.

• Meta:

1. Seems like a more complicated question than [I could] solve with a comment

2. Seems like something I’d try doing one on one, talking with (and/​or about) a real person with a specific worry, before trying to solve it “at scale” for an entire class

3. I assume my understanding of the problem from these few lines will be wrong and my advice (which I still will write) will be misguided

4. Maybe record a lesson for us and we can watch it?

Tools I like, from the CFAR handbook, which I’d consider using for this situation:

1. IDC (Internal Double Crux; maybe listen to the part of you that is afraid you’ll think of yourself as a bad person, since it may be trying to protect you from something that matters. I wouldn’t just push that feeling away)

2. homunculus (imagine you’re waking up in your own body for the first time, in a brain that just discovered this EA stuff, and you check the memories of this body and discover it has been what you consider “bad” for all its life. You get to decide what to do from here, it’s just the starting position for your “game”, instead of starting from birth)

• Yanatan—I like your homunculus-waking-up thought experiment. It might not resonate with all students, but everybody’s seen The Matrix, so it’ll probably resonate with many.

• ‘Why don’t EA’s main cause areas overlap at all with the issues that dominate current political debates and news media?’

(This could be an occasion to explain that politically controversial topics tend not to be (politically) tractable or neglected (in terms of media coverage), and are often limited in scope (i.e. focused on domestic political squabbles and symbolic virtue-signaling).)

1. As you said, they’re (almost by definition) not neglected

2. The media picks topics based on some algorithm which is simply different from the EA algorithm. If that weren’t true, I guess we wouldn’t really need EA.

• What do you mean by Peter Singer isn’t formally linked with EA?

I also see other philosophers who identify with EA or are otherwise involved with EA, e.g. have worked at or with Rethink Priorities. But maybe you’re thinking more prominent EA philosophers like MacAskill, Ord, Greaves and Bostrom?

• I answered your first question in a private message.

I didn’t realize there were other philosophers at EA orgs (maybe I don’t even know the names). I am glad that there are!

And yes I am thinking of these major names. (Would love to see their names there)

(btw Bostrom probably won’t admit he is an EA)

• Thanks for your entry!

• 5 Oct 2022 19:38 UTC
29 points
0 ∶ 0

There is a real-money market on Russia offensively using a nuclear weapon in 2022 on Polymarket with good liquidity. Currently it is trading at 0.06

• Are there examples of EA causes that had EA credence and financial support but then lost both, and how did discussion of them change before and after? Also vice-versa, are there examples of causes that had neither EA credence nor support but then gained both?

• Holden Karnofsky wrote Three Key Issues I’ve Changed My Mind About on the Open Philanthropy blog in 2016.

On AI safety, for example:

I initially guessed that relevant experts had strong reasons for being unconcerned, and were simply not bothering to engage with people who argued for the importance of the risks in question. I believed that the tool-agent distinction was a strong candidate for such a reason. But as I got to know the AI and machine learning communities better, saw how Superintelligence was received, heard reports from the Future of Life Institute’s safety conference in Puerto Rico, and updated on a variety of other fronts, I changed my view.

• I think “it’s easy to overreact on a personal level” is an important lesson from covid, but much more important is “it’s easy to underreact on a policy level”. I.e. given the level of foresight that EAs had about covid, I think we had a disappointingly small influence on mitigating it, in part because people focused too much on making sure they didn’t get it themselves.

In this case, I’ve seen a bunch of people posting about how they’re likely to leave major cities soon, and basically zero discussion of whether there are things people can do to make nuclear war overall less likely and/​or systematically help a lot of other people. I don’t think it’s bad to be trying to ensure your personal survival as a key priority, and I don’t want to discourage people from seriously analysing the risks from that perspective, but I do want to note that the overall effect is a bit odd, and may indicate some kind of community-level blind spot.

• I strongly agree with your comment, but I want to point out in defense of this trend that nuclear weapons policy seems to be unusually insulated from public input and unusually likely to be highly sensitive/​not good to discuss in public.

• unusually likely to be highly sensitive/​not good to discuss in public.

I’m not sure I agree with this and would like a second opinion.

• I think EAs are broadly too quick to class things as infohazards instead of reasoning them through, but natsec seems like a pretty well-defined area where the reasons things are confidential are pretty concrete.

Some examples of information that is pretty relevant to nuclear risk and would not be discussed on this forum, even if known to some participants:

How well-placed are US spies in the Russian government and in Putin’s inner circle?

How about Russian spies in the US government? Do the Russians know what the US response would be in the event of various Russian actions?

Does the US know where Russia’s nuclear submarines are? Can we track their movements? Do we think we could take them out if we had to? This would require substantial undisclosed tech. If we did know this, it would be a tightly held secret; degrading Russia’s second-strike capabilities (which is one effect of knowing where their subs are) might push them towards a first strike.

Relatedly, are we at all worried Russia knows where our submarines are?

In a similar genre, does the US know how to shoot down ICBMs? With 10% accuracy? 50%? 80%? Accuracy would have to be very good to be a game changer in a full exchange with Russia. (High accuracy would require substantial undisclosed technology, and be undisclosed for some of the same reasons plus to avoid encouraging other countries to innovate on weapon delivery.)

Does either side have other potentially game-changing secret tech (maybe something cyberwarfare-based?)

People making decisions on nuclear war planning have access to the answers to all of these questions, and those answers might importantly inform their decisionmaking.

• Also, even if the secret information that decision makers have isn’t decisive there will still be a tendency for people with secret information to discount the opinions of people without access to that information.

• Thank you for taking time to write this. This makes a lot of sense.

• 5 Oct 2022 18:54 UTC
4 points
0 ∶ 0

My guess is it will be hard to get someone to commit to keep replying to you if they get tired of interacting. The most likely cause for this working would be if you had some very novel and interesting take on a topic and someone else really wanted to get to the bottom of it, but I think one of the lessons from Ought’s research is that actually getting to the bottom of every branch of a debate can be very expensive.

• 5 Oct 2022 17:42 UTC
1 point
0 ∶ 0

Other than releasing anti-malaria (or similar diseases) gene drives, is there any other “physical” action that can be taken for less than a million dollars and has a chance greater than 5% of saving an enormous number of people?

• Note that unilaterally using gene drives in this way is usually considered a really bad idea because of poisoning the well against further use. Not just by usual conservative bioethics types, but by the scientist who first proposed using CRISPR to affect wild populations like mosquitos:
“Esvelt, whose work helped pave the way for Target Malaria’s efforts, is terrified, simply terrified, of a backlash between now and then that could derail it. This is hardly a theoretical concern. In 2002, anti-GMO hysteria led the government of Zambia to reject 35,000 tons of food aid in the middle of a famine out of fear it could be genetically modified. Esvelt knows that the CRISPR gene drive is a tool of overwhelming power. If used well, it could save millions of lives, help rescue endangered species, even make life better for farm animals.

If used poorly, gene drives could cause social harms that are difficult to reverse. What if gene drives get a bad rap? What if an irresponsible scientist moves too fast and prompts a strong political countermovement, like that which has stymied other genetically modified organisms since the 1990s? What if an irresponsible journalist — let’s call him Dylan Matthews — writes a bad article that misconstrues the issue and sends the project off the rails?

“To the extent that you or I say something or publish something that reduces the chance that African nations will choose to work with Target Malaria by 1 percent, thereby causing a 1 percent chance that project will be delayed by a decade, the expected cost of our action is 25,000 children dead of malaria,” Esvelt tells me. “That’s a lot of kids.””

• I know that might be a problem, but I asked for other ideas that have at least a 5% chance of saving a lot of people, even if they are bad in expectation. The hope is that they can somehow be modified into good ones, and I still don’t know whether that’s the case for gene drives. When I get enough free time, I’ll try to ask the researchers.

• 5 Oct 2022 17:25 UTC
28 points
1 ∶ 0

Why is scope insensitivity considered a bias instead of just the way human values work?

• Various social aggregation theorems (e.g. Harsanyi’s) show that “rational” people must aggregate welfare additively.

(I think this is a technical version of Thomas Kwa’s comment.)
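For reference (my gloss, not part of the parent comment): Harsanyi’s 1955 aggregation theorem says that if each individual’s preferences and the social preference all satisfy the expected-utility axioms, and society is indifferent whenever every individual is (Pareto indifference), then social welfare must take an additive form:

```latex
W(x) = \sum_{i=1}^{n} w_i \, u_i(x) \quad \text{for some weights } w_i
```

Under an additive form, helping twice as many individuals contributes twice as many welfare terms, which is why "rational" aggregation is scope-sensitive rather than plateauing.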

• Quoting Kelsey Piper:

If I tell you “I’m torturing an animal in my apartment,” do you go “well, if there are no other animals being tortured anywhere in the world, then that’s really terrible! But there are some, so it’s probably not as terrible. Let me go check how many animals are being tortured.”

(a minute later)

“Oh, like ten billion. In that case you’re not doing anything morally bad, carry on.”

I can’t see why a person’s suffering would be less morally significant depending on how many other people are suffering. And as a general principle, arbitrarily bounding variables because you’re distressed by their behavior at the limits seems risky.

• I think scope insensitivity could be a form of risk aversion over the difference you make in the world (“difference-making”), or at least is closely related to it. I explain here why I think that risk aversion over the difference you make is irrational even though risk aversion over states of the world is not.

• Hm, I think that most of the people who participated in this experiment:

three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88. This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

would agree, after the results were shown to them, that they were doing something irrational that they wouldn’t endorse if aware of it. (Example taken from here: https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity ) There’s also an essay from 2008 about the intuitions behind utilitarianism that you might find helpful for understanding why someone could consider scope insensitivity a bias instead of just the way human values work:

• Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you’re willing to pay $1 to save 100 birds from oil; if you would make the same trade again at a roughly similar rate (assuming you don’t run out of money), your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you think this represents their true values, people were willing to pay $0.04 per bird for the first 2000 birds but only $0.00004 per bird for the next 198,000 birds. This is a factor of 1000 difference; most of the time when people have this variance in price they are either being irrational, or there are huge diminishing returns and they really value something else that we can identify. For example if someone values the first 2 movie tickets at $1000 each but further movie tickets at only $1, maybe they really enjoy the experience of going with a companion, and the feeling of happiness is not increased by a third ticket. So in the birds example it seems plausible that most people value the feeling of having saved some birds.
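Spelling out the per-bird arithmetic above, using the willingness-to-pay figures from the cited study:

```python
# Willingness-to-pay figures from the original bird study.
wtp_2000 = 80.0   # $80 for the first 2,000 birds
wtp_200k = 88.0   # $88 for 200,000 birds

per_bird_first = wtp_2000 / 2_000                     # $0.04 per bird
per_bird_marginal = (wtp_200k - wtp_2000) / 198_000   # ~$0.00004 per bird

print(per_bird_first, per_bird_marginal)
print(per_bird_first / per_bird_marginal)  # roughly a factor of 1000
```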

Why should you be consistent? One reason is the triage framing, which is given in Replacing Guilt. Another reason is the money-pump: if you value birds at $1 per 100 and $2 per 1000, and are willing to make trades in either direction, there is a series of trades that causes you to lose both money and birds. All of this relies on you caring about consequences somewhat. If your morality is entirely duty-based or has some other foundation, there are other arguments but they probably aren’t as strong and I don’t know them.

• I think the money-pump argument is wrong. You are practically assuming the conclusion. A scope insensitive person would negatively value the total number of bird deaths, or maybe positively value the number of birds alive. So that each death is less bad if other birds also die. In this case it doesn’t make sense to talk about $1 per 100 avoided deaths in isolation.
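To make the money-pump in the thread above concrete, here is a minimal sketch using its stated prices ($1 per 100 birds, $2 per 1000); the specific trade sequence is my own construction, not from the original comment:

```python
# Hypothetical scope-insensitive agent (prices from the parent comment):
# indifferent between $1 and saving 100 birds, and between $2 and
# saving 1000 birds, willing to trade in either direction.
def run_money_pump():
    money, birds = 0, 1000  # start with 1000 birds saved, $0
    # Step 1: agent accepts $2 to give up the whole block of 1000 birds
    # (acceptable, since it values the block at $2).
    money += 2
    birds -= 1000
    # Step 2: agent pays $1 per block of 100 to buy back 9 blocks
    # (acceptable, since it values each 100-bird block at $1).
    for _ in range(9):
        money -= 1
        birds += 100
    return money, birds

money, birds = run_money_pump()
print(money, birds)  # ends $7 poorer and 100 birds short of where it began
```

Each individual trade is acceptable at the agent's own stated prices, yet the sequence leaves it strictly worse off on both dimensions.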

• A scope insensitive person would negatively value the total number of bird deaths, or maybe positively value the number of birds alive. So that each death is less bad if other birds also die.

This doesn’t follow for me. I agree that you can construct some set of preferences or utility function such that being scope-insensitive is rational, but you can do that for any policy.

• I think it is basically not a bias in the way confirmation bias is, and anyone claiming otherwise is already pre-supposing linear aggregation of welfare. From a thing I wrote recently:

Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic systems that underlie alternatives to utility theory that I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase on the table (less being too sparse, and more being too busy). The pebble-sorters of course go the extra mile.

Calling scope neglect a bias pre-supposes that we ought to value certain things linearly (or at least monotonically). This does not follow from any mathematics I know of. Instead it tries to sneak in utilitarian assumptions by calling their violation “biased”.

• There’s a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and reproductive fitness—but that doesn’t mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.

• 6 Oct 2022 5:24 UTC
2 points

Then what does scope sensitivity follow from?

• Scope sensitivity, I guess, is the triumph of ‘rational compassion’ (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns.

But this is an empirical question in human psychology, and I don’t think there’s much research on it yet. (I hope to do some in the next couple of years though).

• That explanation is a bit vague; I don’t understand what you mean. By “quantitative thinking” do you mean something like having a textual length simplicity prior over moralities? By triumph of moral imagination do you mean somehow changing the mental representation of the world you are evaluating so that it better represents the state of the world? Why do you call it a triumph (implying it’s good) over small-scope concerns? Why do you say this is an empirical question? What do you plan on testing?

• My credentials: I am an investor by profession and have experience negotiating governance structures. I have been a director of 3 private companies and a trustee of 4 non-profits.

Governance is often not ideal. That’s because it is a weird confluence of fitting within the law (often modern laws layered over common laws that don’t make much sense today), relationships and negotiation. For example, you pose the question about whether unusual governance structures have been tried by companies. In general, they haven’t because they aren’t legally possible.

In terms of structuring like a democracy, I don’t think democracies deal well with technical and minor issues. I say “minor” issue because if everything is going well, people will probably not consider it important. It’s also impossible for future people to participate in today’s democracy by definition.

The final point I’d make is that time horizons are important here. Many organisations struggle to manage both the immediate term and the long-term in the same framework. Within a corporate, it is good organisational practice to divide those responsibilities to a certain extent.

On social choice theory, I think it’s important to distinguish between decisions that have to be made (typically handled by the executive, e.g. there needs to be a new Chair of the Federal Reserve) and decisions about changes (typically handled by the legislature, e.g. we could improve the law on bank regulation). Budgets typically require approval of the legislature, but are really something that has to happen (the status quo of the government having no money used to be a reasonable option but is not in the modern day).

Some minor comments on the piece:

• I know you’ve tried to simplify things, but governance of for-profit corporations is a lot more complex than you make out. Board members are not as accountable to shareholders as you would expect, e.g. AGM votes often being non-binding, adoption of poison pills. There are also normally minority protections, e.g. takeover rules for public companies, investor vetoes in private companies. CEOs typically serve on the Board (which is different to non-profits), are sometimes also the Chair and can be the controlling shareholder, which adds a lot of additional dynamics. I think it’s also very important to consider not just the legal governance but the practical governance, e.g. the Chairman has significantly more influence than other board members even though they all have 1 vote each. Soft power is very important.

• With non-profits, I have observed a significant difference between UK and US boards. UK boards are typically filled based on expertise, whereas many US boards are filled with donors and fundraisers. This is not a legal difference, but does affect the dynamic a lot.

• Non-profits can also have members that act a bit like shareholders. This is most common for membership organisations, e.g. sports clubs, mutual interest societies, but it’s also possible for non-profits to have another organisation as its sole member, i.e. a bit like a subsidiary.

• This comment is really insightful. It is short but has a huge amount of content. It draws from experience, expertise and reality, which is maybe why it can be concise and still accurate. Thanks a lot.

• 5 Oct 2022 17:03 UTC
−1 points

From politics: there is an absolutely pivotal issue which EA and STEM types tend to be oblivious to. This is the role of values and ideology in defining what is possible.

For example, it’s obvious in a vacuum that the miracles of automation could allow all humans to live free of poverty, and even free of the need to work. …until conservative ideology enters the picture, that is.

When my conservative mother talks about AI, she doesn’t express excitement that machine-generated wealth could rapidly end poverty, disease, and such. She expresses fear that AI could leave everyone to starve without jobs.

Why? Because granting everyone rights to the machine-generated wealth would be anti-capitalist. Because solving the problem would, by definition, be anti-capitalist. It would deny capitalists the returns on their investments which conservatives regard as the source of all prosperity.

To a conservative, redistribution is anathema because prosperity comes from those who own wealth, and the wealthiest people have proven that they should be trusted to control that wealth because they’ve demonstrated the competence necessary to hoard so much wealth so effectively.

It’s circular logic: the rich deserve wealth because they own wealth. This isn’t a conclusion reached through logic—it’s a conclusion reached through a combination of 1) rationalized greed and 2) bombardment with conservative media. To a hardened conservative, this is the core belief which all other beliefs are formed in service of. All those other beliefs retcon reality into perceived alignment with this central delusion.

For example, this is also why conservatives think so highly of charity (as opposed to mandatory redistribution): charity redistributes wealth only at the voluntary discretion of the wealthy, and grants the power to allocate wealth in proportion to how much wealth a person owns. To a conservative, this is obviously the best possible outcome, because wealth will be allocated according to the sharp business sense of the individuals who have proven most worthy of the responsibility.

Of course, in practice, power serves itself, and the powerful routinely exploit their wealth to manufacture mass cultural delusions in service of their greed. See climate denial, crypto hype, trickle-down economics, the marketing of fossil gas as a “clean” “transition fuel”, the tobacco industry’s war on truth, the promotion of electric cars over public transport that could actually reduce energy consumption, the myth of conservative “fiscal responsibility” after the debt was blown up by both Reagan and Trump (and Mulroney here in Canada), the framing of conservative policy as “pro-growth” as if postwar high-tax policy didn’t bring about rapid economic expansion and public prosperity, the Great Barrington Declaration and other anti-science pandemic propaganda efforts, and the endless stream of money poured into “free-market” “think tank” corporate propaganda outlets. Of course, there are countless other examples, but I’ll stop there.

If we solve alignment but leave conservatives with the power to command AI to do whatever they want, then AI won’t be used for the benefit of all—it will be exploited by those who own the legal rights to the tech’s output. And all our alignment work will be for nothing, or next to nothing.

Obviously, then, we must redesign our political systems to value human (or sapient beings’) rights over property rights—a project as inherently progressive and anti-conservative as EA itself. The alternative is corporate totalitarianism.

• How can we “make altruism great again”? (Not trying to be political. I just wish to ask how we can inspire people to be altruistic, including using people’s longing for the good old days and other methods.)

• Do you think working to reduce s-risks instead of extinction risks is compatible with the arguments they make? That would still count as longtermist.

• I recommend you put some examples of the other things. That will make the question more fun.

• Can individuals make a difference?

• A favorite piece on this: Keeping Absolutes in Mind

• I like thinking in terms of “there are some battles that almost nobody is fighting, so I can be one of the only people in the world advancing those areas”, as opposed to, for example, trying to beat the stock market—which many many smart people are already trying to do, all competing with each other (and with me)

• Can individual drivers who practice safe driving make a difference? Of course they can! It’s individual drivers who yield, who drive carefully and cautiously (as opposed to recklessly and aggressively). Collectively, enough safe drivers can create a cooperative atmosphere, and increase overall traffic efficiency. And driving attitudes and behaviors can be changed through education and incentives.

• Like 5 people have commented on this, so why has no one upvoted it?

• Link articles in text by typing [[, which brings up a search box of all forum articles.

Turns out this already exists.

• Ability to see search results in chronological order

• This is not quite what you’re asking for, but see the related feature in this timely PR!

• I like linking pull requests to things! I think the community could be much more aware of how this works and try and support it.

• [This might be totally wrong and please remember I have much less of an intuition about UX than you, but]

This seems to be focusing on visual components like “tabs” instead of the thing which would be intuitive to me, which I’d call “search capabilities”. I’d personally be happy to have all the searchable content of the forum pushed into some DB that works well with search, and use some open source search UI that someone else built for that DB. This PR seems like rebuilding a UI like that, which probably already exists, no?

[remember my disclaimer!]

[I can find a specific UI that seems nice to me if that would help]

• Man, I am excited to finally see a decent search page. It’s been sad for so long.