COVID-19 pandemic

Last edit: 15 Jul 2022 9:37 UTC

The COVID-19 pandemic tag can cover posts about any aspect of the pandemic (biological, social, economic, etc.).

Cost of the pandemic

The Institute for Progress estimates that the COVID-19 pandemic caused between $10 trillion and $22 trillion in economic damage and monetized health and life loss in the United States alone (a country that accounts for about 20% of gross world product and about 4% of world population).[1]
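To put that range in per-capita terms, here is a rough back-of-the-envelope sketch. The ~332 million US population figure is an illustrative assumption, not a number from the cited report:

```python
# Rough per-capita framing of the Institute for Progress estimate.
# The US population figure (~332 million) is an illustrative
# assumption, not taken from the cited report.
low_estimate = 10e12    # lower bound of estimated US damage, dollars
high_estimate = 22e12   # upper bound of estimated US damage, dollars
us_population = 332e6

per_person_low = low_estimate / us_population    # roughly $30,000
per_person_high = high_estimate / us_population  # roughly $66,000
print(f"${per_person_low:,.0f} to ${per_person_high:,.0f} per US resident")
# prints: $30,120 to $66,265 per US resident
```

Even at the low end, that is on the order of tens of thousands of dollars per US resident.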

Further reading

Bruns, Richard & Nikki Teran (2022) Weighing the cost of the pandemic, Institute for Progress, April 21.

Chivers, Tom (2021) How many lives has bioethics cost?, UnHerd, December 23.

[1] Bruns, Richard & Nikki Teran (2022) Weighing the cost of the pandemic, Institute for Progress, April 21.

COVID: How did we do? How can we know?

30 Jun 2021 12:29 UTC
174 points

COVID-19 in developing countries

22 Apr 2020 22:13 UTC
45 points

How we failed

23 Mar 2022 10:08 UTC
120 points

Experimental longtermism: theory needs data

15 Mar 2022 10:05 UTC
182 points

Concerning the Recent 2019-Novel Coronavirus Outbreak

27 Jan 2020 5:47 UTC
132 points

What we tried

21 Mar 2022 15:26 UTC
71 points

COVID-19 brief for friends and family

28 Feb 2020 22:43 UTC
150 points

[Question] Are there historical examples of excess panic during pandemics killing a lot of people?

27 May 2020 17:00 UTC
28 points

Coronavirus Research Ideas for EAs

27 Mar 2020 21:01 UTC
118 points

[Question] How has biosecurity/pandemic preparedness philanthropy helped with coronavirus, and how might it help with similar future situations?

13 Mar 2020 18:50 UTC
103 points

The best places to donate for COVID-19

20 Mar 2020 10:47 UTC
22 points

Prioritizing COVID-19 interventions & individual donations

6 May 2020 21:29 UTC
85 points

EA Cameroon—COVID-19 Awareness and Prevention in the Santa Division of Cameroon Project Proposal

25 Jul 2020 20:19 UTC
62 points

Three charitable recommendations for COVID-19 in India

5 May 2021 8:23 UTC
71 points

Quantifying lives saved by individual actions against COVID-19

6 Mar 2020 19:42 UTC
27 points

[Question] Should recent events make us more or less concerned about biorisk?

19 Mar 2020 0:00 UTC
23 points

English summaries of German Covid-19 expert podcast

8 Apr 2020 21:12 UTC
6 points
(www.notion.so)

Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics

18 Mar 2020 18:30 UTC
44 points

[Question] Are there good EA projects for helping with COVID-19?

3 Mar 2020 23:55 UTC
31 points

Will protests lead to thousands of coronavirus deaths?

3 Jun 2020 19:08 UTC
77 points

Up and Down with the Pandemic; Why Pandemic Policies Do Not Last and How to Change That

29 Mar 2021 23:04 UTC
9 points

Announcing Alvea—An EA COVID Vaccine Project

16 Feb 2022 13:50 UTC
376 points

[Question] Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking?

19 Mar 2020 6:07 UTC
12 points

[Question] What questions could COVID-19 provide evidence on that would help guide future EA decisions?

27 Mar 2020 5:51 UTC
7 points

COVID-19 response as XRisk intervention

10 Apr 2020 6:16 UTC
51 points

Food Crisis—Cascading Events from COVID-19 & Locusts

29 Apr 2020 22:02 UTC
100 points

[Question] How bad is coronavirus really?

8 May 2020 17:29 UTC
17 points

[Question] What coronavirus policy failures are you worried about?

19 Jun 2020 20:32 UTC
19 points

Customized COVID-19 risk analysis as a high value area

23 Jul 2020 20:28 UTC
48 points

2018-19 Donor Lottery Report, pt. 2

14 Dec 2020 14:31 UTC
31 points

Non-pharmaceutical interventions in pandemic preparedness and response

8 Apr 2021 14:13 UTC
48 points

Deference for Bayesians

13 Feb 2021 12:33 UTC
97 points

Max Roser on building the world’s first great source of COVID-19 data at Our World in Data

29 Jul 2021 16:37 UTC
19 points

Pardis Sabeti on the Sentinel system for detecting and stopping pandemics

29 Jul 2021 16:36 UTC
12 points

How about we don’t all get COVID in London?

10 Apr 2022 23:16 UTC
60 points

Activism for COVID-19 Local Preparedness

1 Mar 2020 6:11 UTC
9 points

Will EA Global San Francisco be cancelled or rescheduled due to COVID-19?

1 Mar 2020 16:28 UTC
12 points
(www.metaculus.com)

[Linkpost] - Mitigation versus Suppression for COVID-19

16 Mar 2020 21:01 UTC
9 points

Covid-19 Response Fund

31 Mar 2020 17:22 UTC
25 points

Utilizing global attention during crises for EA causes

18 Mar 2020 8:49 UTC
26 points

Opportunity to support a Covid19 related survey collaboration

28 Mar 2020 1:02 UTC
6 points

The marginal difference of starting a new coronavirus chain

9 Mar 2020 15:22 UTC
6 points

Requesting support for the StandAgainstCorona pledge

17 Mar 2020 0:47 UTC
14 points

Coronavirus and non-humans: How is the pandemic affecting animals used for human consumption?

7 Apr 2020 8:40 UTC
72 points

Eleven recent 80,000 Hours articles on how to stop COVID-19 & other pandemics

8 Apr 2020 21:40 UTC
22 points

[Question] Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar?

12 Mar 2020 21:19 UTC
25 points

Fundraising for the Center for Health Security: My personal plan and open questions

26 Mar 2020 16:53 UTC
14 points

Coronavirus Tech Handbook

11 Mar 2020 14:44 UTC
23 points

[Question] Is COVID an opportunity for non-EAs to give effectively?

22 Mar 2020 22:44 UTC
5 points

COVID-19 may cause permanent “reduced lung function” to young people, damaging productivity and intelligence

20 Mar 2020 19:02 UTC
1 point

Coronavirus: how much is a life worth?

23 Mar 2020 12:28 UTC
20 points
(medium.com)

EA charity responses to COVID-19

4 Apr 2020 22:16 UTC
11 points

What COVID-19 questions should Open Philanthropy pay Good Judgment to work on?

18 Mar 2020 23:31 UTC
36 points
(www.openphilanthropy.org)

The Hammer and the Dance

20 Mar 2020 19:45 UTC
7 points

Expert Communities and Public Revolt

28 Mar 2020 19:00 UTC
2 points

App for COVID-19 contact tracing

22 Mar 2020 6:25 UTC
5 points

COVID-19 Assessment Tool by the Human Diagnosis Project

2 Apr 2020 7:27 UTC
13 points

A quick and crude comparison of epidemiological expert forecasts versus Metaculus forecasts for COVID-19

2 Apr 2020 19:29 UTC
9 points

Coronavirus and long term policy [UK focus]

5 Apr 2020 8:29 UTC
52 points

[Question] What promising projects aren’t being done against the coronavirus?

22 Mar 2020 3:30 UTC
5 points

[Question] Is donating to AMF and malaria interventions the most cost-effective way to save lives from COVID-19?

27 Mar 2020 7:06 UTC
14 points

GiveDirectly plans a cash transfer response to COVID-19 in US

19 Mar 2020 5:44 UTC
5 points
(www.givedirectly.org)

Essential facts and figures—COVID-19

20 Apr 2020 18:33 UTC
19 points

COVID-19 Risk Assessment App Idea for Vetting and Discussion

20 Feb 2020 5:49 UTC
38 points

More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them

13 Mar 2020 16:48 UTC
10 points

Responding to COVID-19 in India

26 Apr 2020 19:07 UTC
57 points

Finding equilibrium in a difficult time

18 Mar 2020 2:50 UTC
150 points

162 benefits of coronavirus

11 May 2020 22:36 UTC
11 points

How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases?

26 Feb 2020 16:16 UTC
23 points

The EA movement is neglecting physical goods

18 Jun 2020 14:39 UTC
27 points

David Manheim: A Personal (Interim) COVID-19 Postmortem

1 Jul 2020 6:05 UTC
32 points
(www.lesswrong.com)

[Question] Cost-Effectiveness of COVID mitigation policies per country / globally

16 Jul 2020 15:21 UTC
12 points

I’m Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

30 Jun 2020 19:35 UTC
77 points

Working together to examine why BAME populations in developed countries are severely affected by COVID-19

3 Aug 2020 16:25 UTC
18 points
(medium.com)

Donor Lottery Debrief

4 Aug 2020 20:58 UTC
129 points

*updated* CTA: Food Systems Handbook launch event

29 May 2020 15:07 UTC
30 points

Preprint: Open Science Saves Lives: Lessons from the COVID-19 Pandemic

24 Aug 2020 14:01 UTC
8 points
(www.biorxiv.org)

We’re (surprisingly) more positive about tackling bio risks: outcomes of a survey

25 Aug 2020 9:14 UTC
58 points

How does a basic income affect recipients during COVID-19?

3 Sep 2020 11:01 UTC
19 points

Rachel Waddell: GiveDirectly’s emergency cash response to COVID-19

7 Sep 2020 16:28 UTC
3 points

UK to host human challenge trials for Covid-19 vaccines

23 Sep 2020 14:45 UTC
52 points
(www.ft.com)

Suggestion that Zvi be awarded a prize for his COVID series

24 Sep 2020 19:16 UTC
36 points

Updates on the Food Systems Handbook & upcoming sessions 30/31 October

23 Oct 2020 11:26 UTC
17 points

Our recommendations for giving in 2020

23 Nov 2020 15:11 UTC
28 points

Kelsey Piper on “The Life You Can Save”

4 Jan 2021 20:58 UTC
60 points
(www.vox.com)

The Electoral Consequences of Pandemic Failure Project

7 Jan 2021 20:57 UTC
11 points

A vastly faster vaccine rollout

16 Jan 2021 0:23 UTC
20 points
(www.lesswrong.com)

The US Meat Supply Crisis

28 May 2020 7:00 UTC
3 points
(us14.campaign-archive.com)

COVID-19 and Farm Animals

7 Apr 2020 7:00 UTC
3 points

[Question] Any updates to high-impact COVID-19 charities?

5 Feb 2021 21:42 UTC
9 points

Fiona Conlon: Suvita’s first year: vaccines in the spotlight

21 Nov 2020 8:12 UTC
8 points

When to get a vaccine in the Bay Area as a young healthy person

14 Mar 2021 16:11 UTC
9 points

[Question] [Coronavirus] Is it a good idea to meet people indoors if everyone’s rapid antigen test came back negative?

24 Mar 2021 15:45 UTC
34 points

If Bill Gates believes all lives are equal, why is he impeding vaccine distribution?

21 Apr 2021 1:35 UTC
1 point

[Question] Effective donations for COVID-19 in India

30 Apr 2021 20:30 UTC
32 points

Market design to accelerate COVID-19 vaccine supply

8 May 2021 9:18 UTC
6 points

Announcing the UK Covid-19 Crowd Forecasting Challenge

17 May 2021 19:28 UTC
7 points

Long-Term Future Fund: May 2021 grant recommendations

27 May 2021 6:44 UTC
110 points

[Question] How well did EA-funded biorisk organisations do on Covid?

2 Jun 2021 17:25 UTC
101 points

What is a pandemic compared to our sewer system? An example of how a society normalizes risks

25 Jul 2020 14:59 UTC
25 points

Why did EA organizations fail at fighting to prevent the COVID-19 pandemic?

19 Jun 2021 15:32 UTC
5 points

[Question] Which non-EA-funded organisations did well on Covid?

8 Jun 2021 14:19 UTC
39 points

1Day Sooner is Hiring a Communications Director

17 Jan 2021 20:09 UTC
7 points
(www.1daysooner.org)

[Event] A Metaculus Open Panel Discussion: How Forecasts Inform COVID-19 Policy

4 Oct 2021 18:17 UTC
3 points

Carl Shulman on the common-sense case for existential risk work and its practical implications

8 Oct 2021 13:43 UTC
41 points

[Question] Global vaccine equity (covid-19) giving opportunities?

4 Nov 2021 16:16 UTC
9 points

Submit comments on Paxlovid to the FDA (deadline Nov 29th).

27 Nov 2021 19:30 UTC
31 points

Vitalik creates $100m org to fund covid science and relief projects worldwide

30 Jan 2022 11:36 UTC
31 points

Hinges and crises

17 Mar 2022 13:43 UTC
72 points

NPR’s Indicator on WHO funding and pandemic preparedness

19 Mar 2022 1:37 UTC
14 points
(www.npr.org)

Covid memorial: 1ppm

2 Apr 2022 18:27 UTC
117 points

Case for emergency response teams

5 Apr 2022 11:08 UTC
233 points

[Question] How did COVID-19 affect the median person’s quality of life?

6 Jun 2022 15:57 UTC
7 points

Long Covid: mass disability and broad societal consequences [Cause Exploration Prizes]

11 Aug 2022 13:52 UTC
36 points

[Cause Exploration Prizes] Lessons from Covid

24 Aug 2022 10:51 UTC
13 points

Covid Philanthropy Idea—Building Software Solutions for People Impacted

17 Apr 2020 22:25 UTC
5 points

Policy idea: Incentivizing COVID-19 tracking app use with lottery tickets

22 Apr 2020 15:15 UTC
13 points

EA should wargame Coronavirus

12 Feb 2020 4:32 UTC
35 points

[Question] (Link) Could animal advocacy prevent future corona-viruses?

8 Mar 2020 18:10 UTC
3 points

[Question] Are countries sharing ventilators to fight the coronavirus?

17 Mar 2020 7:11 UTC
9 points

Three grants in response to the COVID-19 pandemic

24 Apr 2020 4:52 UTC
37 points
(blog.givewell.org)

Pandemic preparedness orgs now on EA Funds

9 Apr 2020 9:15 UTC
14 points

[Question] If it’s true that Coronavirus is “close to pathologically misaligned with some of our information distribution and decisionmaking rituals”, then what things would help the response?

24 Apr 2020 21:04 UTC
18 points

[Question] Which people and institutions are currently making influential decisions related to COVID response, and how could they be helped to have a better decision making process?

25 Apr 2020 19:36 UTC
4 points

[Question] What do you believe are the most critical open questions/hypotheses that could inform a more effective COVID response?

25 Apr 2020 19:33 UTC
5 points

[Question] What are some tractable approaches for figuring out if COVID causes long term damage to those who recover?

25 Apr 2020 19:27 UTC
0 points

[Question] If you had $10-100mm and a skilled team to improve the COVID response to minimize economic and human damage, what would you do? Or, how would you decide what to do?

25 Apr 2020 19:22 UTC
0 points

COVID Project idea: Transcription, translation, content reformatting, and summarization

26 Apr 2020 21:23 UTC
5 points

[Question] Which person-person introductions could be highly impactful, COVID related and otherwise?

26 Apr 2020 21:18 UTC
4 points

Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees?

28 Apr 2020 10:10 UTC
39 points

[Question] Should I claim COVID-benefits I don’t need to give to charity?

14 May 2020 18:24 UTC
2 points

Is there a Price for a Covid-19 Vaccine?

22 May 2020 17:20 UTC
11 points
(mattsclancy.substack.com)

Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2?

12 May 2020 18:25 UTC
31 points

Covid offsets and carbon offsets

23 Jul 2020 21:19 UTC
22 points

[Question] Sample size and clustering advice needed

29 Jul 2020 14:21 UTC
15 points

[Question] Examples of loss of jobs due to Covid in EA

28 Aug 2020 8:20 UTC
6 points

microCOVID.org: A tool to estimate COVID risk from common activities

29 Aug 2020 22:28 UTC
85 points
(www.microcovid.org)

The Years 0 and 1 of the Policy Entrepreneurship Network

17 Sep 2020 16:51 UTC
60 points

Keynesian Altruism

13 Sep 2020 12:18 UTC
43 points

N-95 For All: A Covid-19 Policy Proposal

28 Oct 2020 6:43 UTC
21 points

4 Years Later: President Trump and Global Catastrophic Risk

25 Oct 2020 16:28 UTC
23 points

The Next Pandemic Could Be Worse, What Can We Do? (A Happier World video)

21 Dec 2020 21:07 UTC
34 points

13 Recent Publications on Existential Risk (Jan 2021 update)

8 Feb 2021 12:42 UTC
7 points

[Question] How has Covid-19 affected the way you give?

15 Feb 2021 15:20 UTC
12 points

[Question] How to embed tools for improved institutional decision making in an organization’s software platform?

6 Apr 2021 16:19 UTC
11 points

Attempted summary of the 2019-nCoV situation — 80,000 Hours

3 Feb 2020 22:37 UTC
37 points

Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics

4 Mar 2020 17:06 UTC
44 points

Lewis Bollard: Doing the most good for animals post-COVID-19

12 Nov 2020 16:46 UTC
15 points

Emily Grundy: Australians’ perceptions of global catastrophic risks

21 Nov 2020 8:12 UTC
8 points

Trying to assess the effectiveness of COVID-19 charities in India

4 May 2021 7:42 UTC
34 points

[Question] Looking for: post-COVID pandemic preparedness initiatives

11 May 2021 18:34 UTC
14 points

Is SARS-CoV-2 a modern Greek Tragedy?

10 May 2021 4:25 UTC
8 points

The catastrophic primacy of reactivity over proactivity in governmental risk assessment: brief UK case study

27 Sep 2021 15:53 UTC
54 points

[Creative Writing Contest] Affinity Maturation

30 Oct 2021 4:53 UTC
2 points

How a ventilation revolution could help mitigate the impacts of air pollution and airborne pathogens

16 Nov 2021 11:19 UTC
65 points

World Happiness Report 2022: Overview on Our Tenth Anniversary

21 Mar 2022 9:38 UTC
27 points
(worldhappiness.report)

How Many People Are In The Invisible Graveyard?

19 Apr 2022 14:03 UTC
43 points

[Link] New Lancet study: Impact of the first year of COVID-19 vaccinations

29 Jun 2022 13:31 UTC
15 points
(www.thelancet.com)

North Korea faces Covid and Drought [Linkpost]

11 Jul 2022 2:07 UTC
13 points
(www.economist.com)

If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example

24 Jul 2022 2:26 UTC
53 points

Obstacles and Strategies for Next-Gen Coronavirus Vaccine Development (re: Warp 2)

23 Jul 2022 16:20 UTC
13 points

[Question] Why aren’t EAs talking about the COVID lab leak hypothesis more?

13 Aug 2022 20:36 UTC
15 points

EJOR special issue on role of operational research in future pandemics

17 Aug 2022 13:34 UTC
20 points
(www.sciencedirect.com)

A Potential Cheap and High Impact Way to Reduce Covid in the UK this Winter

28 Oct 2022 11:36 UTC
47 points

Latest consensus on COVID-19 public health measures

4 Nov 2022 12:15 UTC
8 points
(go.nature.com)
• 1 Dec 2022 15:55 UTC
1 point

So are you shorting US stonks or the USD?

• As both a member of the EA community and a retired mediocre stand-up act, I appreciate that you took the time to write this. You rightly highlight that some light-heartedness has benefited some writers within the EA community, and outside of it. My intuition is that the level of humour we can see being used is, give or take, the right level given the goals the community has. A lot of effort and money has been spent on making the community, along with many job opportunities within it, seem professional in the hope that capable individuals will infer that we mean business and consider EA on those terms.

A concept I referred to a lot when planning comedic performances, and public speaking occasions in general, is that an audience (dependent on the context and their reasons for being there) will have a given threshold for the humour they expect to find in your communication. To be funny, you must go beyond this threshold. Some way above that threshold is another boundary, a humour ceiling, defined by the social norms of the setting beyond which you no longer seem funny. Instead, you signal that you don’t understand the social norms around communication in that context. In stand-up, the humour threshold is really high, so it’s hard to qualify as funny at all, but nigh on impossible to be too funny. In presenting a dry subject to your boss and colleagues, the humour threshold is low and anyone could exceed it with a bit of practice, but landing safely between this threshold and the marginally greater humour ceiling is genuinely hard. You will too easily be too funny and seem a liability. When reading an obituary at a funeral, the threshold is set at essentially zero and the ceiling is coincident with it, only allowing the exemption of jokes told to highlight the cherished memories you have of the deceased.

I explain this because it seems to me untenably hard to commit to using humour all-out, or anywhere close to that, as a communicative and persuasive aid for EA without signalling that we do not “mean business”. Stick man illustrations and starchy acronyms, used sparingly, fall within the threshold-ceiling window for the work MacAskill and Karnofsky are trying to publicise, so these gags play out well. I don’t think they’ve got that much overhead clearance before readers would infer a lack of appreciation for the aesthetics of academic writing, and thus that they shouldn’t be taken seriously.

Since the advent of democracy, when ancient Greek plays used jokes to point out the mistakes made by politicians of the day, comedy has proven a very effective method for poking holes in bad ideas and forcing people to change them, lest they be further laughed at. This seems to be the running theme of the cases you mention from John Oliver’s career. It is much harder, I think, to propose an idea of your own that you wish people to believe is good and use humour to enhance that perception.

• Our impact market platform might help with this.

Obviously I’m biased here, and there are a number of other good approaches too (funds, Eigentrust, topping up Open Phil grants, donor lotteries, etc.).

Our platform allows anyone to publish a project proposal. Soon we’ll also have a Q & A system to replace the various Google forms that are currently used for grant applications.

If there’s no prize contest going on, it’s basically a centralized platform for grant applications, like a Facebook fundraiser but more geared toward using market mechanisms to highlight particularly promising projects.

If there’s a prize contest going on, it’s a proper impact market where even profit-oriented investors can seek to seed-invest into projects where they can make a sufficiently big profit in expectation.

This is a much too condensed summary, but this article that Amber Dawn has written for us should be more accessible.

• 1 Dec 2022 15:21 UTC
1 point

I’m sure others can do a better job responding to this than I can, but a few thoughts:

• It’s true that the high US debt/​GDP ratio is not harmless, especially insofar as it leaves less fiscal headroom to deal with future recessions, wars etc.

• If investors thought the US were likely to need to default or inflate away debt, then 30-year treasury rates would be extraordinarily high (since otherwise they wouldn’t purchase them). That’s the opposite of what we see, where long-run interest rates are less than short-run interest rates.

• The US economy is growing faster than inflation, so cost-of-living adjustments aren’t such a big deal: tax revenue is growing as well.

• I’d expect US debt to grow much less in a potential upcoming recession than in 2008 or 2020. It made sense to do a lot of fiscal stimulus in response to the 2008 financial crisis, but if the Fed intentionally triggers a recession as part of controlling inflation, fiscal stimulus doesn’t make as much sense.

• Could I get a list of every current/former billionaire who has committed a lot of money to EA (this is for a post I will probably never finish writing)? The ones I know off the top of my head are:

• SBF (obviously)

• Dustin Moskovitz

• Jaan Tallinn

have I missed anyone?

• Out of curiosity, what would the post be about?

• The angle is “maybe existing rich people becoming EAs is better than the other way round”. You can probably guess the argument...

• Wow. I hadn’t realised Jaan Tallinn was a billionaire.

• I’d be interested to know, if any of the powers that be are reading, to what extent the Long Term Future Fund could step in to take up the slack left by FTX in regard to the most promising projects now lacking funding. This would seem a centralised way for smaller donors to play their part, without being blighted by ignorance as to what all the other small donors are funding.

• I’m confused about how you’re dividing up the three ethical paradigms. I know you said your categories were excessively simplistic. But I’m not sure they even roughly approximate my background knowledge of the three systems, and they don’t seem like places you’d want to draw the boundaries in any case.

For example, my reading of Kant, a major deontological thinker, is that one identifies a maxim by asking about the effect on society if that maxim were universalized. That seems to be looking at an action at time T1, and evaluating the effects at times after T1 should that action be considered morally permissible and therefore repeated. That doesn’t seem to be a process of looking “causally upstream” of the act.

When I’ve seen references to virtue ethics, they usually seem to involve arbitrating the morality of the act via some sort of organic discussion within one’s moral community. I don’t think most virtue ethicists would think that if we could hook somebody up to a brain scrambler that changed their psychological state to something more or less tasteful immediately before the act, that this could somehow make the act more or less moral. I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

And of course, we do have rule utilitarianism, which doesn’t judge individual actions by their downstream consequences, but rules for actions.

Honestly, I’ve never quite understood the idea that consequentialism, deontology, and virtue ethics are carving morality at the joints. That’s a strong assertion to make, and it seems like you have to bend these moral traditions to fit the categorization scheme. I haven’t seen a natural categorization scheme that fits them like a glove and yet neatly distinguishes one from the other.

• You’re absolutely right to criticize that section! It’s just not good. I will add more warning labels/​caveats to it ASAP. This is always the pitfall of doing YAABINE.

That said, I do think the three families can be divided up based on what they take to be explanatorily fundamental. That’s what I was trying to do (even though I probably failed). The slogan goes like this: VE is “all about” what kind of person we should be, DE is “all about” what duties we have, and Consequentialism is “all about” the consequences of our actions. Character, duty, consequences – three key moral terms. (And natural joints? Who knows). Theories from each family will have something to say about all three terms, but each family of theory takes a different term to be explanatorily fundamental.

So you’re absolutely right that, in their judgments of particular cases, they can all appeal to facts up and down the causal stream (e.g. there is no reason consequentialists can’t refer to promises made earlier when trying to determine the consequences of an action). Maybe another way to put this: the decision procedures proposed by the various theories take all sorts of facts as inputs. You give a number of examples of this. But ultimately, what sorts of facts unify those various judgments under a common explanation according to each family of theory? That’s what I was trying to point at. I thought one way to divvy up those explanatorily fundamental facts was by their situation along the causal stream, but maybe I was wrong. I’m really not sure!

Some specific replies:

I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did it.

I completely agree that actual virtue ethicists would not do so, but the theory many of them are implicitly attached to (“do as the virtuous agent would do, for all the reasons the virtuous agent would do it”) does seem to judge people based on how you were feeling/​what you were thinking right before you did it.

• Will the results of this research project be published? I’d really like to have a better sense of biosecurity risk in numbers.

• 1 Dec 2022 13:11 UTC
4 points

For anyone interested, especially university students, here’s my (unsolicited) story of working at SoGive:

Two years out, my three main takeaways were probably (1) getting feedback on my writing and practice writing for EA contexts, (2) experience with charity evaluation, and (3) support exploring topics of my own interest, plus (3.5) I really liked working with Sanjay.

I volunteered with SoGive during the last year of my bachelors and later went on to work as an RA for the Founders Pledge Climate Team. During undergrad years prior to SoGive, I was an RA for an academic research lab at my Uni (sciences), had a campus job as a tour guide, held a leadership position with my student co-op, and did a data science internship.

Critically, the things I benefitted from most while volunteering with SoGive were things my other roles didn’t provide. I think specializing in EA research too early probably isn’t a great longterm career move, and diversifying your extracurriculars to get a healthy mixture of community, fun, and targeted skill/​career capital building is really important for both well-being and intellectual growth. Because my university had strong research programs for undergraduates, academic labs were probably a more direct way of “testing my fit” for research, but I expect this won’t be the case for most students. This work was a good fit for me as an undergraduate, but especially so because it met criteria others didn’t and provided mentorship from someone I respect (Sanjay).

TLDR; I’d encourage interested students to check out this program and listen to Alex Lawsen’s 80k episode on advice for students.

• 1 Dec 2022 13:10 UTC
1 point

Big ask. Humour is incredibly difficult.

• I’m quite skeptical of post-hoc articles with titles like ‘X was no surprise’; they’re usually full of hindsight bias. Like, if it was no surprise, did you see it coming?

Although there’s almost nothing about SBF here, is this part 1 of a series?

• You’re right that post-hoc articles are usually full of hindsight bias, making them a lot less valuable. That’s why I tried not to make the article about SBF too much (no this is not part 1 of a series). I lay that out from the beginning:

Please don’t read too much into the armchair psychological diagnosis from a complete amateur – that isn’t the point.

If you want predictions I give one right after this:

The point, to lay my cards on the table, is this: virtue ethicists would not be surprised if many EAs suffer (in varying degrees) from “moral schizophrenia”

I reiterate this when I say “I fear it is widespread in this community” where “it” is a certain coldness toward ethical choices (and other choices that would normally be full of affect).

SBF is topical and I thought this was a good opportunity to highlight this lesson about not engaging in excessive reasoning. But I agree my title isn’t great. Suggestions?

• Summary

• Demand for octopus and squid is growing

• Wild populations of octopus and squid are unstable

• So octopus and squid are likely to be factory farmed in the future

• Octopuses are really smart. It would be an animal welfare disaster for them to be factory farmed.

• There would also be “environmental, social, public health” concerns

• The Aquatic Life Institute (ALI) is a non-profit trying to prevent this future

• ALI is campaigning to ban octopus farming in countries/regions where this is being considered (e.g. Spain, Mexico, the EU)

• ALI “will work with corporations on procurement policies banning the purchase of farmed octopus”

• ALI “supports research to compare potential welfare interventions”

• So far, ALI has sent some letters to government officials, organised a tweet campaign, planned a couple of protests, and run some online events.

• They started a coalition of 110 animal protection organizations (groups like the SPCA), called the Aquatic Animal Alliance (AAA)

• ALI has five welfare concerns for farmed octopuses:

1. Environmental Enrichment (octopuses could get bored)

2. Feed Composition (only feeding octopuses fishmeal/​fish is unsustainable)

3. Stocking Density & Space Requirements (octopuses are solitary by nature, and high-density stocking “could result in cannibalism”)

4. Water Quality

5. Stunning & Slaughter (“slaughter methods have been studied, however, none have been scientifically approved as humane”)

“Approximately 500 billion aquatic animals are farmed annually in high-suffering conditions and, to date, there is negligible advocacy aimed at improving welfare conditions for these remarkable beings.”

[Suggestions for how to improve this summary are welcome]

• [ ]
[deleted]
• I could see this backfiring. What if instilling false beliefs later led to the meta-belief that deception is useful for control?

• [ ]
[deleted]
• I don’t agree with that, Karolina. There are dozens of EA terms that have so far been translated to Polish extremely poorly; even “doing the most good” is very problematic in translation to Polish. I think for Polish, we need a professional translator, like Elżbieta de Lazari (Peter Singer’s translator), who will tackle classic EA terms, because, when translated literally, they sound horrible and make the EA language very awkward and unappealing. I have already seen many EA Poland group translations that sounded quite poor. It was very clear to me that a translator is very much needed. Check the EA Poland website or social media posts from some time ago to see how unfortunate the translations can be. They’re just not written in natural language, so someone with a lot of experience in translation should really give it a go.

• [ ]
[deleted]
• Yes (it reflects the scientific consensus as well), and he’s particularly strong in consilience. For a fast impression I can highly recommend Deutsch’s TED Talks. (I couldn’t find the systems books in the image.)

• In the paper she co-authored, Gebru makes a good case for why real AI technologies put to work now are harming marginalized communities and show potential for increasing harm to those communities. However, in this Wired article, Gebru is associating EA with the harms caused by existing and likely future AI technologies. Gebru is claiming that because major investors in AI are or were involved in funding AI safety research, the same research is co-opted by those investors’ interests. Gebru identifies those interests with narrow financial agendas held by the investors, ones that show no regard for marginalized communities that are likely to be impacted by the use of current AI technologies.

I think it’s worth exploring to what extent her actual agenda, one targeting the environmental, social, and economic harms or exploitation that AI research involves now, could be accomplished, regardless of her error in believing that EA is co-opted by financial interests pushing for increasingly harmful AI technologies.

I’m thinking about how to solve problems like:

• carbon footprint of AI training and deployment hardware and software and its disproportionate impacts on marginalized communities in the near term.

• social harms of deployable and tunable LLMs used, for example, as propaganda generators

• social harms of now open-sourced and limitation-free image generators (and upcoming video generators), as Gebru’s linked WAPO article discusses.

• exploitation of labor to produce AI datasets.

• technological unemployment caused by AI technology.

• concentration of power with organizations deploying AGI technology.

Fundamentally, an ambiguous pathway toward AI safety could lead to either an AI utopia or an AI dystopia. The best way to thoroughly disprove Gebru’s core belief, that EA is co-opted by Silicon Valley money-hungry hegemonic billionaires, would be to focus on the substantive AI impact concerns that she raises.

The suggestions outlined in her paper are appropriate, in my view. If LLMs were removed from public access and kept as R&D experiments only, I would not miss them. If ASR were limited to uses such as caption generation, I would feel good about it. But what do you think?

• I think there’s something to be said for the value of self-interest in your thought experiment about the person saving their partner over a stranger. A broader understanding of self-interest is one that reflects a rational and emotionally aligned decision to serve oneself through serving another. Some people are important in one’s life, and instrumental reasoning applies to altruistic consequences of your actions toward them. Through your actions to benefit them, you also benefit yourself.

With respect to love and trust, and especially in romance, where loyalty is important, self-interest is prevalent. A person typically falls “out of love” when that loyalty is betrayed. Work against someone’s self-interest enough, and their emotional feelings and attachment to you will fade.

All to say that consequentialism plays a role in serving self-interest as well as the interests of others. With regard to the dissonance it creates, in the case of manifesting virtues toward those who we depend on to manifest those virtues in return, the dissonance eases because those people serve our interests as we serve their interests.

• So I feel like your comment misses the point I was trying to make there (which means I didn’t make it well enough – my apologies!) The point is not that consequentialists can’t justify saving their spouse, as if they don’t have the theoretical resources to do so. They absolutely can, as you demonstrate. The point is that in the heat of the moment, when actually taking action, you shouldn’t be engaging in any consequentialist reasoning in order to decide what to do or motivate yourself to do it.

Or maybe you did understand all this, and you’re just describing how consequentialism self-effaces? Because it recommends we adopt a certain amount of self-care/​self-interest in our character and then operate on that (among other things)?

• Came here to post this same article—I think it does a good job outlining all the ways in which this really was a fraud and not some sort of accounting mistake as seems to be presented by some media outlets.

• 1 Dec 2022 9:27 UTC
16 points
2 ∶ 0

I think this is a great initiative. It is great to see you lay out what you are looking for and why, and to try to bring on more relevant professional expertise. I hope more EA organisations follow your example.

• According to https://finance.yahoo.com/news/ftx-japan-unit-drafts-plan-124544036.html, FTX Japan has about $150M in assets, which isn’t much compared to what the whole FTX conglomerate owes.

• Trying to “do as the virtuous agent would do” (or maybe “do things for the sake of being a good person”) seems to be a really common problem for people. Ruthless consequentialist reasoning totally short-circuits this, which I think is a large part of its appeal. You can be sitting around in this paralyzed fog, agonizing over whether you’re “really” good or merely trying to fake being good for subconscious selfish reasons, feeling guilty for not being eudaimonic enough—and then somebody comes along and says “stop worrying and get up and buy some bednets”, and you’re free. I’m not philosophically sophisticated enough to have views on metaethics, but it does seem sometimes that the main value of ethical theories is therapeutic, so different contradictory ethical theories could be best for different people and at different times of life.

• One of the reasons I no longer donate to EA Funds so often is that I think their funds lack a clearly stated theory of change. For example, with the Global Health and Development fund, I’m confused why EAF hasn’t updated at all in favour of growth-promoting systemic change like liberal market reforms. It seems like there is strong evidence that economic growth is a key driver of welfare, but the fund hasn’t explained publicly why it prefers one-shot health interventions like bednets. It may well have good reasons for this, but there is absolutely no literature explaining the fund’s position. The LTFF has a similar problem, insofar as it largely funds researchers doing obscure AI Safety work.
Nowhere does the fund openly state: “we believe one of the most effective ways to promote long term human flourishing is to support high quality academic research in the field of AI Safety, both for the purposes of sustainable field-building and in order to increase our knowledge of how to make sure increasingly advanced AI systems are safe and beneficial to humanity.” Instead, donors are basically left to infer this theory of change from the grants themselves. I don’t think we can expect to drastically increase the take-up of funds without this sort of transparency. I’m sure the fund managers have thought about this privately, and that they have justifications for not making their thoughts public, but asking people to pour thousands of pounds/dollars a year into a black box is a very, very big ask.

• [ ]
[deleted]
• [ ]
[deleted]
• Anyone who wants to help improve EA’s incredibly weak meme game is welcome to join us over at the Dank EA Memes facebook group: https://www.facebook.com/groups/OMfCT

• I think it is a bad idea to set up a database of negative articles on EA, or to spend too much time worrying about them:

1. It would be an attention sink to spend time tediously rebutting this stuff—effective altruists’ time is valuable, and a classic failure mode of online movements is to become “too online” until you are a bunch of internet atheists compiling databases of arguments and fallacies with which to do battle against an equally dedicated army of internet creationists.

2. EA is in some ways essentially an elite movement—we’re not trying to be as viral as we can possibly be (if we were, our main mode of communication wouldn’t be asking people to read long dry nonfiction essays on the Forum!) to appeal to the widest possible audience. Instead we’re trying to be as insightful and correct as we can possibly be, in order to appeal to smart people who respect the truth.
These smart, careful people are exactly the kind of people who are least likely to be swayed by obviously dumb, bad-faith hit-pieces that deploy the language of wokeism to make nonsensical attacks in random directions.

3. By contrast, setting up an organized database of “misinformation” and trying to dispatch internet footsoldiers to crusade against our enemies would likely be a huge turn-off to those smart, careful people. When I think of a group that does this stuff, I think “scientology” or maybe “oppressive governments” or “fringe political movements like antifa” or other paranoid and crazy organizations/individuals.

• This makes sense. Definitely a strong argument for a closed or limited-access database, or no database at all.

It would be an attention sink to spend time tediously rebutting this stuff—effective altruists’ time is valuable

I think this is definitely true for most people but not all. I’ve met lots of people affiliated with EA who have mundane software engineering jobs and are interested in mainly contributing casually every now and then.

a classic failure mode of online movements is to become “too online” until you are a bunch of internet atheists compiling databases of arguments and fallacies with which to do battle against an equally dedicated army of internet creationists

Strong agree on this one, although I think the justifications are only the tip of the iceberg. The risks are much greater IMO, especially related to social media, but it involves information I’m not willing to talk about here on a public forum.

These smart, careful people are exactly the kind of people who are least likely to be swayed by obviously dumb, bad-faith hit-pieces that deploy the language of wokeism to make nonsensical attacks in random directions.

I somewhat disagree on this one. I used to be a strong advocate for actively preventing large numbers of woke nonsensical people from dominating EA and trying to turn it into one of Bernie Sanders’s cause areas.
But now I think that mostly, people start out obsessed with the language of nonsensical wokeism and gradually choose to become smart, careful people after meeting large numbers of other people who are already careful and smart. Everyone has to start somewhere, and some people have better starting points than others.

trying to dispatch internet footsoldiers to crusade against our enemies would likely be a huge turn-off

I think this is pretty easy to prevent. Just put a disclaimer at the top of the database telling people not to do that. You don’t even need to make it limited-access, although that would help. The only reason that journalists are using misinformation to target EA is because they know there’s absolutely nothing stopping them, like a bully targeting the smallest kids on a playground. It’s basically open season. Increasing awareness (or even accountability) makes sense here.

• Formatting thing: you may have meant to indent some bullets under “Work on any single area can gain from our working on multiple areas:” I think this b/c it ends with a “:”

• His intentions don’t matter; what matters is his actions and outcomes. And really, from an EA perspective, all that matters is the general view from the public and that impact on current and future members/organisations/grants.

• Thank you for sharing this project. It looks great. A few minor comments and ideas. Wordpress is very flexible but requires lots of plugins to interface with each other for many functions to work. Consider chatting to Aqeel or JJ Hepburn from Sangro/AI Safety Support who recently used wordpress for a learning management system to see how they found it. Consider also using an existing platform with more pre-built features (e.g., Thinkific) where cross-compatibility might be less painful (see our uni EA fellowship site). At least at the start, these help projects like this get off the ground more easily. Most projects add their bells and whistles later.
My PhD student is doing a thesis very close to this project. She’s trying to accelerate knowledge translation in developing countries. Our hypothesis, like yours, is that online learning will rapidly and cost-effectively close the research-practice gap. The first study in her thesis is a systematic review of randomised trials using online learning in healthcare. We want to know how well online learning teaches professionals, and how well the training helps people translate it into practice. She’s aiming to find what features help the interventions work better. If you’re interested in the review, she’s looking for team members. Being a team member means you learn the results much more quickly and become an author on the paper, which can be good for credibility. If you want to find out more, email me at noetel [at] gmail.com or send me a message on the forum. Her second study is a cost-effectiveness analysis of an online nursing intervention. Her third study is a series of interviews in LMICs to see how professionals from those countries feel about online learning. It sounds pretty well aligned with the kind of scoping your team is doing. If you’d be interested in the findings of a study like that, and possibly have some contacts from healthcare in LMICs, then again we’d be interested in collaborating (email me). She could run the interviews but you might find the results valuable.

• I don’t think these Twitter polls are at all helpful, and they risk being misleading. This is not just because they are highly non-representative, as you admit, but also because there is no pre-event test to compare results to, so it’s hard to draw conclusions about how the FTX situation contributed to the distribution of results you did get. It’s also impossible to know how the first poll connects to the second poll, as they could be going to somewhat different audiences (for example, the first question got more votes) and you can’t do crosstabs between the two questions.
Rethink Priorities is doing some work looking into this and we could very easily and quickly do more work with representative samples if further commissioned to do so. It’s a bummer we don’t have EA Pulse running yet as this would be an ideal use case.

• Peter—your concerns are valid regarding non-representative sampling, and the lack of a pre/post test comparison over time that would be more informative about the FTX effect. I wouldn’t draw strong conclusions from any specific Twitter poll. My main goal here was to try to spark some further, more systematic research, with some more representative samples. Also, I was just curious how my followers viewed EA at the moment, and having indulged that curiosity, thought I might as well share some results with this forum.

• 1 Dec 2022 1:26 UTC
1 point
0 ∶ 0

This is nice, but I’d also be interested to see quantification of moral weights for different animals when accounting for all the factors besides neuron count, and how much it differs from using neuron count alone.

• 1 Dec 2022 1:19 UTC
18 points
10 ∶ 3

My only update is that I think this community (based on the EA forum only) is under-rating the PR damage from all of this. For a lot of people SBF =~ EA, and this interview does not appear to be playing well (source: twitter, group chats etc.). I’m not sure what to do about it but I thought I’d share that observation from outside the EA/LW bubble. A few other thoughts:

• Unfortunately, I think SBF’s comments to Vox about ethics (“so the ethics stuff—mostly a front?”) have been misread to mean that his entire earning to give / EA worldview was somehow a cynical sham. While I think FTX’s downfall indeed involved some risky and unethical business dealings, I don’t think Sam is saying anything like this (obviously). In fact, he may have even EV’d himself into taking some of these risks in service of his earnest philanthropic goals (epistemic status: who knows).
• Some people who really don’t like EA, and longtermism in particular, are using the FTX downfall as a sort of proof that EA exists to launder the reputations of the wealthy. While I think these arguments have little merit, they are getting a lot of play in left-leaning circles and I think have the potential to do damage, especially to people with limited exposure to EA who are “getable” in the sense that they care about similar things to EAs and may now be less likely to work on / support EA cause areas.

• (...) for a lot of people SBF =~ EA

This seems weird. I think PR wise, our biggest worry is what the first impressions of newcomers will be, and the vast majority of people haven’t heard of it yet. I worry more about what the first articles on Google are going to be, rather than how we are actively being perceived right now. I’m still worried, but from my perspective general attitudes haven’t changed that much yet and at most, people with pre-existing negative beliefs about EA have seen those confirmed. Plus, I don’t think we’re under-rating the damage, it’s just that there doesn’t seem to be much we can do. (I should probably say my view is quite partial: I’m an organizer for a Spanish-speaking group and for the most part, the situation has seemed distant)

• Things that I think played particularly badly:

• Not being totally direct on his parents’ real estate acquisitions. These are a bad look even if you buy the argument that the only way to find space in the Bahamas is to buy a few hundred million of extreme lux resort condos. I know a small handful of very wealthy people who would be able to immediately talk about how they have financed / held their RE assets.

• Not being totally direct on his relationship to Alameda: the guy lived with principals of the firm and founded/owned 90% of it, and the firm was also located in the Bahamas (and maybe dated the CEO?).
I have no more intel than what’s available in the press, but it just does not read as truthful to an outsider.

• Drug use: who cares, but it has become a meme and I think he would have been better served to say: ‘yeah, I have a special patch-based scrip for ADHD meds I’ve used throughout my career—not sorry about that’.

• 1 Dec 2022 0:56 UTC
1 point
1 ∶ 1

Not sure how you can comment in the direction of believing this was about mistakes. If you had money in my bank and I showed you £10,000, but then actually used this money for my bad trades while still showing you £10,000, would you believe it was an honest mistake when you tried to take your money back and it wasn’t there, but burnt in trades? Is it a mistake to use other people’s money for your own ends? If so, then yes, he made a mistake and did not steal. I would have thought that for money, WYSIWYG principles apply: what you see is what you get and have. Not that what you see is…maybe...what you have but not get.

• Hi, I hope you do leave this up a week longer at least! The FTX fiasco meant a lot of people were overwhelmed and are just now getting spoons back for tasks like this. I will be repromoting this survey to the EA Austin community tonight as I think hardly anyone here will have filled it out. Hoping the link is not dead when people click it 😅🙏

• My own impression is that SBF seemed honest. He probably took a significant legal risk by agreeing to the interview, and many of the mistakes he pointed out seem coherent with most accounts of the FTX collapse (where SBF would have committed negligence by doubling down on a series of decisions while operating with limited information). The New York Times was right in pointing out that there is some contradictory evidence, especially because of how the accounts were set up, but I don’t think this is strong evidence either. That being said, I don’t think we should update significantly either way from this interview.

• Agree with your impression.
But I would give this interview more weight than you do. In my experience—around 15 years of legal work (including as both a lawyer and as a defendant)—it is exceedingly rare for a defendant who has bad intentions to speak openly about what they did. SBF is probably already a defendant in any number of cases, and some of them may eventually be criminal. The fact that he is speaking openly and transparently comes at significant personal risk and is much more consistent with the notion that he acted in good faith. I was probably 75-25 on good faith/fraud prior to this interview. I’m probably 80-20 or 85-15 after this interview.

PS: Great summary. This is helpful, as I missed part of the interview.

• He’s not speaking openly and transparently. His answers are sometimes really evasive and he doesn’t admit to any mistakes in a lot of detail. There are lots of reasons he might do interviews (thinking he’s smarter than his lawyers; thinking he’s in trouble either way and may as well enjoy the spotlight; thinking he’s got a message to share that’s more important than his personal fate; somehow thinking he’s still got a shot at fundraising money[?], etc.) I’m in favor of giving people a lot of goodwill if they transparently explain themselves, but you have to actually look if they’re doing that vs. if they just say/pretend that they’re doing that.

• I guess there are multiple aspects to this. While he might seem to be open at the cost of personal legal risk, it might be that he’s also telling an inaccurate story of what happened. (EDIT: slightly edited the wording regarding openness/good faith in this one paragraph after reading Lukas’s take)

(Heavy speculation below)

A crucial point, given SBF’s significant involvement in EA and interest in utilitarianism, is whether he actually believed in all of it, and how strongly.
There are some signs he believed in it strongly: being associated with EA rather than a more popular and commonly accepted movement, being very knowledgeable about utilitarianism, early involvement, donations etc. If he did believe in it strongly, it could be that this is just him “doing [what he believes is] the most good” by potentially being dishonest about some things (whether this was due to bad intentions), in order to, perhaps (in his mind), deflect the harm he’s caused EA and the future, at the cost of personal legal risk (which is minor in comparison from the utilitarian perspective). (Then again, at the same time, another (naive) utilitarian strategy might be to say “muhahaha I was evil all along!” and get people to think that he used EA as a cover and that he isn’t representative of it? If that also works (in expectation, to him), I’m not so sure why he picked one over the other.) This is all speculative, and a bit unusual for the average defendant, but SBF is quite unusual (as is EA, to be fair) and we might have to consider these unusual possibilities.

• 30 Nov 2022 23:30 UTC
16 points
3 ∶ 8

Background: I practiced securities litigation in the post-2008 time period. I don’t know Sam personally, though I’ve had some chats with him over the years. My assessment of his impact on the movement I am most involved in—animal rights—is mostly negative, though I think his negative impacts were very well intentioned. I disagree with Daedelus’s comment that Sam’s statements were not credible. I have worked with much larger and more sophisticated financial institutions which, when caught up in a mania (or a panic), had similarly poor controls. I thought Sam came across as credible and honest. I’d also add that the fact that he spoke openly about the collapse is a strong sign to me that there was not bad intent. Lawyers will almost always tell people not to say anything once litigation, much less criminal prosecution, is a possibility.
(I have given many clients this advice myself and, indeed, there is a thread in this forum giving the same advice.) That advice is not unreasonable, if your goal is just self-interest and self-preservation. But the fact that he was willing to speak openly about this, in direct contradiction to his lawyers’ advice, suggests strongly that his intentions are good. I’d go so far as to call it brave. I have a personal friend who lost his entire savings in the collapse of FTX. It’s a terrible thing. But I don’t know that this is a case of fraud. Seems just as likely, if not more so, that this is just a classic bank run.

• Either brave and innocent, or incredibly naive and conceited. I will say that if he’s guilty of massive and repeated fraud, this kind of performance is how you potentially get a lifetime achievement award from a U.S. District Court. Well, technically only de facto life like Madoff’s 150 years, since I doubt any potential charges have an actual life sentence. Keeping a low profile is expected and won’t be held against you at sentencing. Going on a media tour proclaiming your innocence when you are flagrantly guilty is not going to sit well with the judge.

• Agreed on how interviews will play with a judge. If there is evidence of his guilt, and he went on Dealbook to double down on his fraud, that is going to affect sentencing significantly. Speaking publicly about the case is a pretty good signal that Sam genuinely does not think he has criminal liability.

• . . . or thinks he can talk his way out of it, like many more ordinary criminal targets who are arrested but decide to talk to the police investigator. People even choose to talk to grand juries when they know they are a target.

• I have a personal friend who lost his entire savings in the collapse of FTX. It’s a terrible thing. But I don’t know that this is a case of fraud. Seems just as likely, if not more so, that this is just a classic bank run.
An exchange that promises to “never invest customer deposits” cannot be subject to a bank run, as far as I can tell. Banks can be subject to bank runs because they practice fractional reserve banking. Crypto exchanges are not allowed to do that. Many different sources have now indicated that he knowingly used customer deposits to cover Alameda liabilities, which really seems like a very straightforward case of fraud.

• It seems like FTX offered borrowing and lending to its clients, and this was prominently marketed. I don’t think you can call FTX offering margin loans to Alameda a “straightforward case of fraud” if they publicly offer margin loans to all of their clients. (There may be other ways in which their behavior was straightforwardly fraudulent, especially as they were falling apart, but I don’t know.) In general brokers can get wiped out by risk management failures. I agree this isn’t just an “ordinary bank run”; the bank run was just an exacerbating feature once the damage had already been done.

“Never invest customer deposits” sounds like a misleading tweet. In the conversation with Kelsey, SBF clarifies that he meant that FTX never invests customer deposits, that it is just acting as an exchange that facilitates borrowing and lending between its customers, one of whom was Alameda. I think this is probably technically true but misleading. It seems like there are two distinct problems:

1. By November 2, Alameda’s collateral looks like it was very bad. Nominally the market value might have exceeded their liabilities, but it was exceptionally illiquid even before considering the fact that a lot of it was FTT and hence correlated with FTX solvency. They were evidently not automatically liquidated as any normal customer would be, and it sounds like their liabilities were not tracked by the normal system at all.
If this was just a huge risk management and accounting failure, that would be merely bad, but the beneficiary was a hedge fund mostly owned by SBF, whose leadership he had a close relationship with. From the outside it looks quite likely that they knew this was a possibility, but didn’t want to liquidate Alameda after its losses in 2022 (or perhaps deliberately avoided getting clarity about the accounting situation?) because they had a good chance of making money during a market recovery and wanted to take one last gamble at the customers’ expense. This isn’t technically FTX investing customers’ money, but the risk management is so bad, the joint ownership so extreme, and the organizational boundaries so blurred by bad accounting, that it seems likely to be willful negligence and fair to say that they were effectively investing customer money.

2. I assume FTX also didn’t pay back customers who weren’t participating in asset lending and who weren’t getting paid interest on their deposits. I don’t actually know details here, either about FTX or about finance in general, and they would affect my view about how bad this part was (though point 1 is bad at any rate). My guess is that this outcome was basically inevitable once the exchange was having liquidity problems (I’d have been surprised if they halted withdrawals for margin accounts while they still had a ton of liquid coins in other accounts) and the question is just how much of a betrayal that is.

For normal brokerage accounts, my vague sense of expectations is that margin accounts would be at risk of going down with the exchange, whereas normal accounts wouldn’t. I don’t know if that expectation is grounded in legal reality. I don’t know if FTX had any such distinction.

I would guess that FTX customers had the choice to opt in to lending, though I wouldn’t be surprised if it was opt-out or if uptake of lending was very large. And legally I don’t know if customers who don’t use margin accounts or who don’t click the “lend asset” box or whatever are supposed to be protected (kind of like more senior creditors). Morally it seems like they have a reasonable expectation of not losing it but I expect this is also not straightforward fraud.

Overall my sense is that this is probably not straightforward fraud, but I don’t know and would appreciate a bit more clarity.

• It seems like FTX offers borrowing and lending to its clients, and this was prominently marketed. I don’t think you can call FTX offering margin loans to Alameda a “straightforward case of fraud” if they publicly offer margin loans to all of their clients. (There may be other ways in which their behavior was straightforwardly fraudulent, especially as they were falling apart, but I don’t know.)

I think Matt Levine mentioned something similar, but I don’t think I understand this, so let me try to get some more clarity on this by thinking through it step-by-step.

I think there are two separate documents here whose representations might have been fraudulent:

1. The tweet by Sam Bankman-Fried (or FTX Official, I don't remember, and I think it's now deleted) that said "We never invest customer deposits, not even in something as secure as treasuries"

Like, if you use your customer funds to finance loans to other clients, then clearly this does not count as "never investing customer deposits", since you did just invest your customer deposits into a loan to another client. And while there was no legal contract signed in this case, I think this presentation is quite deceptive, and seems pretty fraudulent to me (though to what degree things said on Twitter should count as legal fraud, and what the actual scale of the liability for this is, seems quite unclear to me).

I do think calling this fraud, or at least "fraudulent marketing", seems accurate to me, and I would be pretty surprised if nobody somehow gets fined for this, though the magnitude seems likely to be much less than the full position.

The FTX Terms of Service seem to have a few different things in them that pertain to the stuff that happened. The most relevant piece I could find is:

Any available Assets held in your Account is available to be locked and used as collateral for margin trading, or to fund trades, in relation to any Services or part thereof offered through the Platform by FTX Trading or its Affiliates.

This pretty clearly says that your funds can be used as collateral for margin trading or to fund trades, though it is quite ambiguous about whether it’s used as collateral for you, or for the position of other clients (or Alameda) in this case.

It wouldn’t surprise me if this happened to be written intentionally ambiguously, though people with more finance experience might have better takes here. When reading this the first time, I definitely thought this was trying to say “we might use any assets in your account as collateral for your other trades”, not “we might use any assets in your account as collateral for some random other person’s trades”.

(A) Title to your Digital Assets shall at all times remain with you and shall not transfer to FTX Trading. As the owner of Digital Assets in your Account, you shall bear all risk of loss of such Digital Assets. FTX Trading shall have no liability for fluctuations in the fiat currency value of Digital Assets held in your Account.

(B) None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Trading does not represent or treat Digital Assets in User’s Accounts as belonging to FTX Trading.

(C) You control the Digital Assets held in your Account. At any time, subject to outages, downtime, and other applicable policies (including the Terms), you may withdraw your Digital Assets by sending them to a different blockchain address controlled by you or a third party.

This is a pretty tricky section. The ToS is between the client and FTX Trading, so it sure seems that in the process of sending those assets to Alameda, they had to somehow enter the possession of FTX Trading.

I do have trouble understanding how FTX could end up with less than the depositors' money/tokens in the bank/blockchain without somehow violating this part of the service agreement. Like, maybe there is some legal trickery where you get to say that, due to the use of the customer funds as collateral, the assets transferred directly from client to Alameda or something, or to the trading partners of other clients who made a loss, but that sure seems like a huge stretch, and I feel like this section just straightforwardly says that if FTX somehow ended up sending a fraction of the customer deposits to any external party, whether to cover some losses or not, then they violated the ToS.

I can imagine there being some fancy legal defense here, but at least to me it looks really pretty clear. The ToS said that no assets deposited in FTX shall transfer to FTX Trading. Even if they were used as collateral, at the moment FTX actually sends them to the debtor, they have to transfer through FTX Trading (which is the liable party), and as such violate the ToS.

So, doing some more direct research here: both the public marketing statements made on Twitter and the ToS seem to have been violated by FTX's actual actions. Both sure could have some loophole that makes this a non-straightforward case of fraud, but at least I can't currently see any obvious loopholes (though someone with more legal experience very well might).

As I currently understand it, the ToS basically committed FTX to cover the potential losses from leveraged trades using its own profits or the assets of the users who had lending explicitly enabled, since they were committed to never taking ownership of the assets that did not explicitly have lending enabled.

About $3B of the customer funds did seem to have lending enabled, and I think sending those to Alameda was likely fair game, though they can't cover close to the full amount, as far as I can tell.

• It seems significant if only $3B of customer funds were earning interest on cash or assets, and the other customers had the option to opt in to lending but explicitly did not. I think all the other customers would have an extremely reasonable expectation of their assets just sitting there. I'm not super convinced by the close reading of the terms of service, but it seems like the common-sense case is strong.

I'm interested in understanding that $3B number and any relevant subtleties in the accounting. I feel like if that number had been $15B, then this would plausibly just be a failure of risk management, in which case I guess that number is central to the clear-cut fraud vs. willful negligence question. The $3B estimate seems plausible, but all I've seen is an out-of-context screenshot, which is not great.

• Yeah, I also don't have anything better. I am piecing some of this together from this alleged report of someone who worked there in the two days before the end, from which I inferred that Caroline and Sam knew that they took an unprecedented step when they loaned out that customer money that nobody knew about, which seems like it just wouldn't really be the case if they had only used money from the people who had lending enabled: https://twitter.com/libevm/status/1592383746551910400

I am definitely very interested in a better estimate of how many customers had lending enabled (but even separately from that, given various other aspects of FTX's finances, it seems very likely that the people who deposited their money as non-lending in FTX will not get their money back, which seems like it would have required some kind of transfer of ownership from clients to FTX at some point, and therefore a violation of the ToS).

• Per the screenshot at tweet 17 of this thread, it seems like $2.8B of customer assets were opted in to lending. Not nearly enough to explain the amount that went to Alameda.

• I don't see this screenshot as evidence of the amount that FTX was entitled to use for other customers' leverage, including Alameda's. It seems to be a snapshot of current margin lending? If the point of this screenshot is to show there was hidden margin lending to Alameda that wasn't being disclosed, fair point.
But it's possible that Alameda had a special vehicle that wasn't picked up on the dashboard, because of the size of its allowed margin and because FTX (arrogantly) figured that Alameda was involved in relatively risk-free trading. That seems to be what Alameda started out doing: engaging in supposedly "risk-free" arbitrage. The notion of risk-free arbitrage is probably incorrect, but Sam would not be the first person to believe in it.

• Exactly. The terms and conditions said that title to digital assets would belong to the user, and would not transfer to, or be loaned to, FTX Trading, which would seem to make it impossible to loan these funds from FTX Trading to Alameda. Whereas Sam said in today's NYT/CNBC interview that FTX allowed Alameda to take out an $8B line of credit, using, I think, money that was not given to FTX for lending. It immediately looks like he defrauded his customers.

In today’s interview with NYT/​CNBC Sam tried out a few defenses:

• there was another line in the T&C that allowed this (sounds dubious absent further details)

• FTX didn’t have visibility into the size of Alameda’s loans on its own dashboard; only Alameda knew about the loan (implausible; he was housemates with Alameda’s CEO, who talked about these borrowed funds at a leaked company meeting during the collapse—to which he simply said that he wouldn’t be able to clarify others’ comments), and

• Alameda was a small fraction of trading activity, and he paid attention to this rather than the size of the line of credit (also super implausible—how can one not be aware of a multibillion dollar line of credit?).

So I don't see how any of these defenses work. There's also a question, if he defrauded customers, of how long this was going on. When asked when the commingling of funds began, he just talked about it getting bigger from mid-2022. That would mean at least four months, but the fact that he didn't give a straight answer at least suggests to me that this might have actually begun significantly before that, possibly years earlier.

You can also look at the predictions here, here, here, here, and here, which collectively suggest that Sam committed fraud, and is likely to be criminally charged and spend years in prison. If he's not imprisoned, I would personally guess it's >50% that he avoided facing the US justice system altogether, by somehow avoiding extradition.

• Thanks for these replies.

I haven’t seen any evidence that FTX promised to never invest customer deposits. Does anyone have a link? My understanding is that FTX offered customers the opportunity to make leveraged trades, i.e., to bet more than the money they had in their accounts. This suggests to me that FTX was not just an exchange but a lender, which is a very different sort of financial beast (with a different risk profile). I also understand that there was a significant interest rate on the customer accounts -- 6% -- which adds weight to that conclusion. You can’t get a return on investment without risk.

I’d also love to see links to the terms of service, as I’ve seen many state that Sam violated the terms of service but little evidence to that effect. I’ve seen one document that indicated the property of customers was their own. That’s not legally dispositive of the issue, as FTX never claimed to have an ownership interest in the customer deposits. The question is what it was allowed to do with customer deposits and, for example, whether it was required to keep segregated accounts. Far larger and more sophisticated entities, like MF Global, have failed with account segregation, so it would not surprise me if FTX failed as well.

The basic problem is this: even if you WANT to prevent yourself from using customer deposits, if you are a lender to various account holders (including ordinary retail customers) and you are not properly segregating the funds, then when a bunch of people borrow money from you beyond what they invested into their accounts, they will inevitably tap into the money of other customers. Corporate controls, account segregation, etc. will prevent this. But early-stage companies very often fail at this sort of thing because it's hard, and often very labor intensive. I can see FTX not imposing a limit on Alameda's right to leverage because: (a) they were overconfident in Alameda's ability to manage risk; (b) they never thought that Alameda's leverage would dip into the amount other customers needed; and/or (c) they assumed incorrectly that some form of legal segregation of accounts would prevent the technical removal of customer assets from happening.

I don’t know what the right answer is to any of these questions. What I do know, having looked at the inside of collapsing financial institutions, is that it is far far far more complicated than most people think. It is one of the reasons why there was no criminal prosecution in the MF Global case, despite what seemed to me far clearer evidence of wrongful conduct.

Re: prediction markets, they are interesting, and I hadn’t seen them before. Slightly moves my needle towards actual fraud, maybe back to 75-25. But I’m not sure this is a scenario where we should expect prediction markets, at least in the short term, to be particularly reliable, due to herding effects. See here.

• I haven’t seen any evidence that FTX promised to never invest customer deposits. Does anyone have a link? My understanding is that FTX offered customers the opportunity to make leveraged trades, i.e., to bet more than the money they had in their accounts. This suggests to me that FTX was not just an exchange but a lender, which is a very different sort of financial beast (with a different risk profile). I also understand that there was a significant interest rate on the customer accounts -- 6% -- which adds weight to that conclusion. You can’t get a return on investment without risk.

The issue is that as a user of FTX, you were supposed to be able to choose whether your money was being lent out or not—e.g. there was a “lend/​stop lending” button in the interface. It seems totally reasonable to me that FTX loses your money if you lend it. But my current impression is that the amount lent to, and lost by, Alameda was much more than the amount that users agreed to have lent out. Agree that segregation of funds, if implemented properly, would solve the problem here.

• I haven’t seen any evidence that FTX promised to never invest customer deposits.

Here is a screenshot of the (now deleted) Tweet where SBF claims FTX does not invest customer deposits:

Here is the relevant section of the FTX ToS:

(A) Title to your Digital Assets shall at all times remain with you and shall not transfer to FTX Trading. As the owner of Digital Assets in your Account, you shall bear all risk of loss of such Digital Assets. FTX Trading shall have no liability for fluctuations in the fiat currency value of Digital Assets held in your Account.

(B) None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Trading does not represent or treat Digital Assets in User’s Accounts as belonging to FTX Trading.

• I think what Sam is saying here is true, if customer deposits are being used as leverage and not as investments.

It's arguably a bit misleading, if Alameda was a customer with a seemingly unlimited line of credit. But what Sam is saying is still true.

I respond to the TOS points above. But to me—and I could be wrong on this, as I have not been in securities litigation for many years—the terms of service aren’t really dispositive.

• I would also add, as a longtime criminal defense attorney myself and someone who is very much against mass incarceration, that I may have a bias here, as I do not want to see anyone sent to prison. Even my worst enemy. Jails and prisons are cruel places, and there is relatively little evidence (if any) that they serve any rehabilitative purpose. That may be biasing my assessment of Sam, because I do not WANT to see him (or anyone else, really) in prison.

• Doesn’t FTX pay interest on deposits and prominently offer margin loans? Do you have a citation for the claim that the terms of service excluded the prospect of lending? (All I’ve seen are some out-of-context screenshots.)

Why do you say “Alameda (FTX trading)”? Aren’t these just separate entities?

• Why do you say “Alameda (FTX trading)”? Aren’t these just separate entities?

You’re right—fixed.

• I've heard (unverified) that customer deposits were $16B and voluntary customer lending was under $4B. It would make sense to me that a significant majority of customer funds were not voluntarily lent, given that returns from lending crypto were minimal, lending was opt-in, and it was not pushed hard on the website.
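If the figures quoted in this thread are anywhere near right (all of them unverified: roughly $16B in deposits, under $4B opted in to lending, and an $8B line of credit to Alameda), the back-of-the-envelope arithmetic looks like this sketch:

```python
# Rough, illustrative arithmetic using the unverified figures quoted in this
# thread; none of these numbers are confirmed accounting.
total_deposits_b = 16.0   # total customer deposits, $B (unverified)
opted_in_lending_b = 4.0  # upper bound on deposits opted in to lending, $B
alameda_credit_b = 8.0    # line of credit reportedly extended to Alameda, $B

# Funds Alameda could have drawn beyond what customers agreed to lend
shortfall_b = alameda_credit_b - opted_in_lending_b
non_lending_deposits_b = total_deposits_b - opted_in_lending_b

print(f"At least ${shortfall_b:.0f}B would have had to come from "
      f"the ${non_lending_deposits_b:.0f}B of non-lending deposits.")
```

On these numbers, at least half of the Alameda credit line could not have been covered by opted-in lending, which is the crux of the fraud vs. negligence question above.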

• I disagree with the legal conclusion in the Axios article, as someone who has litigated securities fraud cases. The relevant language isn’t dispositive to me. This appears to be the key term:

None of the Digital Assets in your Account are the property of, or shall or may be loaned to, FTX Trading; FTX Trading does not represent or treat Digital Assets in User’s Accounts as belonging to FTX Trading.

There are a few issues with trying to say this proves that using customer accounts (e.g., for Alameda) is a violation of the terms of service. First, the provision only states that FTX trading cannot be the recipient of a loan. It does not say other account holders cannot be the recipient of a loan. Second, using non-segregated customer deposits for another customer’s leveraged trading may not be a loan. This is something I would have to legally research, but the usage of a customer’s funds is probably considered something like a bailment and not a loan. For example, there’s no principal or interest.

I’m honestly not particularly interested in the terms of service, however, in determining whether Sam committed fraud. I’m more interested in how FTX was marketing its product, and whether it promised risk-free deposits. Most people do not read the terms of service, and if FTX promised account segregation and risk-and-interest free deposits, I would say that there’s a greater argument that Sam committed fraud—rather than poor risk management.

Separately, assuming you are the same Ryan Carey I met many years ago, it’s nice to speak to you again. Hope all is well.

• FTX’s legitimate operation—for those who didn’t opt in to lending—was supposed to be like a valet parking service. “The terms of service didn’t explicitly say I couldn’t lend out your car to the local dragracing club” is not a good defense to an argument that the valet converted the customer’s car. You need actual permission from the car/​crypto owner to do that.

• 30 Nov 2022 23:23 UTC
12 points
2 ∶ 1

Three social forces at the root of FTX’s collapse

Hi folks, I shared some thoughts I wrote up about Sam Bankman-Fried. I worry that there's a bit of a social cascade that's leading us to draw the wrong lessons from what happened. I'm not 100% confident in the facts of what happened—though, as a former securities litigator in the post-2008 period, I think I have more experience than most—but I don't see a particularly compelling case for fraud. I also think the focus on a single person's supposed indiscretions, whether true or otherwise, may distract us from deeper systemic problems that FTX's collapse represents.

Very interested in others’ thoughts, and especially thoughts on my diagnosis of cultural norms in EA that may have contributed to the problem at FTX. Here’s the link:

https://simpleheart.substack.com/p/in-defense-of-sam-bankman-fried

• Haha, I wrote a similarly titled article sharing the premise that Sam's actions seem more indicative of a mistake than of fraud: https://forum.effectivealtruism.org/posts/w6aLsNppuwnqccHmC/in-defense-of-sbf

I appreciated the personal notes about SBF's interactions with the animal welfare community. I do think the EA tribalism element is very real as well. I also appreciate the point about trying to work on something intrinsically motivating—I'm not sure that it's possible for every individual, but I do feel like my own intrinsic love of work helps a lot with putting in a lot of time and effort!

• I wrote a post summarizing the interview in parallel.

• Very interesting analysis! I didn't realize clustered standard errors weren't in use pre-2000. Is there a reason you published this result here rather than as a letter in the AER?

• My non-EA group chats, mostly finance professionals, are tearing this apart. His claimed lack of knowledge of a number of key metrics and figures is not credible. The real estate question about his parents got even more heat (I think justifiably so).

• [ ]
[deleted]
• I didn’t downvote either of your articles on misquoting. Skimming over the first article now, it seems reasonably well argued.

However, I agree with the following points made on this comment (which you also referred to in your second article):

• There's too much to read, so people don't have extensive time to engage with everything. Try to be succinct.

• One of your posts spent 22 minutes saying that people shouldn't misquote. It's a rather obvious conclusion that can be explained in 3 minutes tops. I think some people read that as a rant.

• Use examples showing why the topic is important (or even stories). It allows readers to link your arguments to something that exists.

• You can think with purely abstract stuff, but most people are not like that. A useful point to keep in mind is that you are not your audience. What works for you doesn't work for most other people. So adapting to other reasoning types is useful.

From skimming your first misquoting article, I don’t think you’ve made the case that misquoting is a particular problem within EA. I don’t think there are any examples? In which case, some people might read it, get to the end and think “well that was a waste of 22 minutes and hardly seems relevant to EA, so I’ll downvote it to deter others from spending time reading it”.

• What sort of examples do you want? Do you want me to call out specific individuals who misquoted and say that’s bad? You could look through my comment history and find some examples if you want to, but I thought drawing attention to and shaming those people would be bad.

It’s easier to discuss whether misquoting is very bad for truth seeking, and mistreats a victim, without simultaneously making it a discussion about whether particular individuals in the community are bad.

The deadnaming article has a one paragraph summary near the start. It also has the text:

I think this norm [against deadnaming] is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

The links clarify that EA does not have a strong norm against misquoting. What’s the problem? Maybe you missed that part when skimming? It’s in the introduction immediately before the article summary. The rest of the article does not attempt to argue this point; it’s talking about something else which builds on this premise.

Why is this even controversial? If I point out a misquote or a poor citation in the Sequences or some other literature you like, you aren't going to care much or start taking action on the problem (such as checking whether the same author made more errors of a similar nature), right? You don't believe that misquoting is like deadnaming someone and should have a similar norm against it because it's hurtful to the victim in addition to being poor scholarship, do you? Don't you disagree with me, and know that you disagree with me? The norm I'm advocating is not normal, nor popular with any large group.

So, fine, disagree with me – but I find it a really bizarre reaction for people who disagree with me to dismiss my arguments on the basis that I'm obviously right and this is a waste of time due to being uncontroversial common knowledge. Most people think stuff like "People are sloppy sometimes, which isn't a big deal," instead of thinking, "Being sloppy with quotes in particular isn't acceptable. Use copy/paste. If you must type something in, triple check it. There's no real excuse for quotes to be inaccurate in tiny ways; that's really bad even if the wording changes do not substantively change the meaning."

I'd like to first establish that this issue matters, and only second, potentially point out some specific examples. As long as I don't think anyone considers misquoting to actually be very bad, I don't think it's a good idea to bring up examples of people doing it. Also, I don't think the problem is a few individuals behaving badly; it's a widespread problem of community attitudes and norms. The community simply doesn't value this kind of accuracy and is OK with misquotes; in that context, it's unfair to be very hard on individuals who get caught misquoting, so that's another reason not to name and shame anyone.

If I give examples, people will just tell me that the misquote didn't change the conclusion in that case and therefore doesn't really matter (rather than agreeing with me), which is not the point. Misquotes mistreat the person quoted, like deadnaming, and, like other inaccuracies, they're bad for truth seeking whether or not they change the conclusion. These are not popular claims, but I think they're important, so I tried to argue for and explain them, and neither of these claims would be served well by examples, because they're both about concepts, not concretes. And if people don't like conceptual articles, or struggle to understand them, or don't like long articles… fine, whatever, but saying that people agree with me, when they don't, is really weird.

• What sort of examples do you want? Do you want me to call out specific individuals who misquoted and say that’s bad? You could look through my comment history and find some examples if you want to, but I thought drawing attention to and shaming those people would be bad.

It’s generally a good sentiment to not want to call out specific individuals, particularly if they are not repeat offenders. However, if this is a widespread issue that is worth the attention of the community, then providing lots of examples will help demonstrate the scale of the problem without it seeming like you’re picking on one or two people.

If it is only one or two people who are repeat offenders, and these are senior members of EA orgs (and/​or regular posters on the EA Forum), then it may be justified in shaming them.

It’s easier to discuss whether misquoting is very bad for truth seeking, and mistreats a victim, without simultaneously making it a discussion about whether particular individuals in the community are bad.

Without examples to demonstrate that it’s a common issue in the EA community, you may find that the discussion is very short, as I suspect most people will just think “yeah, misquoting is indeed bad for truth seeking, which is why I don’t do it”.

• Hmm, I posted stuff that got downvotes, and the few comments I received were along the lines of “provide examples” or “what are you talking about?”

• Like the others, I loved this post. Let's be friends ;)

I wonder how much of this "decline of friendship" phenomenon is related to culture, region, income, education, and generation. My tentative hypothesis is that people bond by doing things together in person; this has become rarer for educated millennial Westerners.

• (This is partially echoing/paraphrasing lukeprog.) I want to emphasize the observer-count angle, which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it's discussed more in the full document, but for most of the post it's ignored.

Imagine a room where a pair of robots are interviewed. The robot interviewer is about to leave and go home for the day, they’re going to have to decide whether to leave the light on or off. They know that one of the robots hates the dark, but the other strongly prefers it.
The robot who prefers the dark also happens to be running on 1000 redundant server instances having their outputs majority-voted together to maximize determinism and repeatability of experiments or something. The robot who prefers the light happens to be running on just one server.

The dark-preferring robot doesn't even know about its redundancy; it doesn't lead it to report any more intensity of experience. There is no report, but it's obvious that the dark-preferring robot is having its experience magnified a thousand times, because it is exactly as if there are a thousand of them, having that same experience of being in a lit room, even though they don't know about each other.

You turn the light off before you go.
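The thought experiment's arithmetic can be sketched as follows (the robot names, instance counts, and utility numbers are made up for illustration):

```python
# A minimal sketch of the robot thought experiment: each robot's preference
# is weighted by how many running instances share that experience.
robots = {
    "prefers_light": {"instances": 1,    "utility": {"light_on": 1, "light_off": -1}},
    "prefers_dark":  {"instances": 1000, "utility": {"light_on": -1, "light_off": 1}},
}

def total_utility(choice):
    # Sum each robot's utility for the choice, multiplied by its instance
    # count -- the "observer count" doing the anthropic weighting.
    return sum(r["instances"] * r["utility"][choice] for r in robots.values())

best = max(["light_on", "light_off"], key=total_utility)
print(best)  # "light_off": the 1000 identical instances dominate
```

Nothing about the robots' reports changes; only the multiplier on one experience does, which is the whole point.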

Making some assumptions about how the brain distributes the processing of suffering (assumptions we're not completely sure of, but which seem more likely than not), we should have some expectation that neuron count has the same anthropic boosting effect.

• My colleague Ahmed Ahmed and I summarized research on fertility in the context of the US Child Tax Credit expansion in this UBI Center report last year. We cited the Lyman Stone article from here:

Stone’s research suggests that making it permanent could close between 15% and 65% of the gap to a replacement fertility rate.

My nonprofit PolicyEngine has also been scoping how to predict fertility impacts in our app that computes the impact of custom tax and benefit reforms. Our shallow dive hasn't turned up standard elasticities with respect to current-year policy changes, though, so while we could create ones like the % change in births with respect to the % change in net income of parents of newborns, I don't know how well this would connect to the literature.
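For illustration, an elasticity of the kind described above could be applied like this (the function name and all numbers are hypothetical, not PolicyEngine estimates):

```python
# Hypothetical illustration of a birth elasticity with respect to parental
# net income; none of these numbers are real estimates.
def projected_births(baseline_births, elasticity, pct_income_change):
    # elasticity = (% change in births) / (% change in net income of
    # parents of newborns), applied linearly for small changes
    return baseline_births * (1 + elasticity * pct_income_change)

# e.g. 1,000 baseline births, elasticity 0.2, a 10% income boost
print(round(projected_births(1000, 0.2, 0.10)))  # 1020
```

Connecting such an elasticity to the literature would still require deciding which income change (current-year vs. permanent) the published estimates refer to.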

In general, though, Stone finds that baby bonuses are most cost-effective at spurring births. Other evidence suggests that reducing infant poverty improves developmental outcomes more cost-effectively than interventions later in life, and baby bonuses could be easily administered at any level of government (just run payments through the hospitals). That all makes baby bonuses an underexplored plausibly cost-effective intervention, both from a lobbying/​policy perspective and through philanthropic means (a la GiveDirectly).

• Hi Sean,
(Sorry for the late question!)
Being in the field of providing solutions for mental health, what is your opinion on S-risk and Longtermism? Do you think such topics are directly in line with the cause prioritization of mental well-being?

• 30 Nov 2022 19:38 UTC
2 points
0 ∶ 0

Great work, and I was just about to ask for the code.

I think including personal fit (with say a 5 or 6 OOM range) will flip the sign on this though. Would also be good to show the intervals.
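One way a 5-6 OOM personal-fit factor could be modeled is as a lognormal multiplier; this is a hedged sketch of that idea, not the post's actual model, and the sigma derivation assumes the range refers to a 90% interval:

```python
import math
import random

random.seed(0)

# Personal fit as a lognormal factor whose 90% interval spans ~5 orders of
# magnitude. Purely illustrative; the post's model may differ.
oom_range = 5
# Half-width of a 90% normal interval is 1.645 sigma, so solve for sigma
# such that q95/q05 spans `oom_range` decades in log space.
sigma = (oom_range * math.log(10)) / (2 * 1.645)

samples = sorted(math.exp(random.gauss(0.0, sigma)) for _ in range(100_000))
lo, hi = samples[5_000], samples[95_000]  # empirical 5th / 95th percentiles
print(f"90% interval spans ~{math.log10(hi / lo):.1f} orders of magnitude")
```

Multiplying a point estimate by a factor this wide is exactly why showing intervals, not just means, matters: the interval can easily straddle zero net effect even when the mean does not.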

• Yup! The bounty is still ongoing. We have been awarding prizes throughout the duration of the bounty and will post an update in January detailing the results.

• Thanks for writing this up!

As someone who is getting started in AIS movement building, this was great to read!

i) I conceptualised the AI Safety community differently from some of my readers

I would be curious: how does your take differ from others' takes?

I have read Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination, and I feel the two posts are trying to answer slightly different questions, but I would be keen to learn about other ways people have conceptualised the problem.

• 30 Nov 2022 19:17 UTC
4 points
1 ∶ 0

One suggestion (which I offer as a general matter and not specifically to RP): At least in some jurisdictions, boards can delegate many of their powers and duties to committees of the board that can include some non-board members. (The board members should be a majority of the committee.) So if you have a board candidate who is really strong in one area—say, they have tons of experience conducting performance evaluation of senior staff—but isn’t the best choice overall, you may be able to ask them to serve on a board committee that handles the thing they excel at.

• Nice post. I have been wondering a lot recently why my comments are being downvoted. I would appreciate more information on them so I can get better at sharing information.

I'm also curious why my reductionist approach is being disliked, as in real-world scenarios it is very productive. I just hope people explain further what exactly they do not like, though I also understand the difficulty of explaining.

• “the more measurable a metric we choose, the less accurate it is, and the more we prioritize accuracy, the less we are currently able to measure”

Can you expand on this? Is it a reference to Goodhart's Law?

• No, it's not a reference to Goodhart's Law.

It's just that one reason for liking neuron counts is that we have relatively easy ways of measuring neurons in a brain (or at least relatively easy ways of coming up with a good estimate). However, as noted, there are a lot of other things that are relevant if what we really care about is information-processing capacity, so neuron count isn't an accurate measure of information-processing capacity.

But if we focus on information-processing capacity itself, we no longer have a good way of easily measuring it (because of all the other factors involved).

This framing comes from Bob Fischer’s comment on an earlier draft, btw.

• Thanks, that makes sense. For some reason I read it as a kind of generalisable statement about epistemics, rather than in relation to the neuron count issues discussed in the article.

• 30 Nov 2022 18:12 UTC
4 points
0 ∶ 0

To me, the scariest implication of octopus farming is that it updates me downward, maybe significantly, on the probability that factory farming will be eliminated or replaced entirely. If humans are so eager to develop a type of factory farming that is so difficult and inefficient, I am afraid I just can't see how we can guarantee that factory farming won't continue into the far future. (Yes, I am talking about the type of "far" that the average longtermist speaks of.)

• I’m donating 10% of my pre-tax income this year, and most of it will be distributed to the usual suspects identified by GiveWell, Happier Lives Institute, and Animal Charity Evaluators. A small amount will be reserved for some local charities whose work I am familiar with.

What I would love some advice on is ways to donate to Ukraine. There is probably no way to really know the effectiveness of any donations to Ukraine, but in general I think supporting the norm of respect for national sovereignty is actually quite important, apart from the (also quite important) humanitarian considerations. Does anyone have any thoughts?

• will add this opportunity to the EA opportunity board!


• Hi Stijn,
Interesting post!

1. In the “Deathprint of meat” section you clearly cite the sources for the meat-to-emissions conversions, but not the meat-to-animal conversions. From reading further down the piece it seems they probably come from Saja, K. (2013). Is that correct?

2. Saja (2013) seems to calculate 2 kg of chicken meat for “Average animal products per one animal life”, which would be 0.5 chickens per 1kg though in your table you have 0.667 for animals killed per kg meat for chicken meat. I think that 0.667 is Saja’s figure for Fish (1.5kg of meat per 1 fish)?

3. If you did use Saja (2013), I wonder if you could elaborate on why, especially since as you note it “excludes the animals used as feed (e.g. fish meal and insect meal given to farm animals).” One could also use the conversion factors from Faunalytics (2020) which I believe do include feed fish (here 1kg of chicken meat would be associated with 0.87 animal deaths). There are of course also other more recent conversions Warren (2018), Hurford (2014) for number of animal deaths, and for days of life (or suffering) e.g. Drescher (2017), Tomasik (2007).

• Hi Neil,

my meat-to-animal conversions were not based on Saja, but simply on the weight of edible meat produced by an animal. For chickens, I used the slightly more conservative value of 1.5 kg edible meat per broiler chicken, instead of Saja's 2 kg. That means 1/1.5 ≈ 0.67 animals/kg. Perhaps broiler chickens in the US grow heavier and are closer to Saja's 2 kg per chicken?

Haven't thought about using those other sources like Faunalytics. Thanks for mentioning them.
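For reference, the conversion discussed in this thread is simple to reproduce; a minimal sketch (the 1.5 kg and 2 kg edible-meat figures are the ones mentioned above):

```python
# Animals killed per kg of meat, given the edible meat yield per animal.
def animals_per_kg(edible_meat_kg_per_animal: float) -> float:
    return 1 / edible_meat_kg_per_animal

# 1.5 kg edible meat per broiler chicken (the more conservative figure used in the post)
print(round(animals_per_kg(1.5), 2))  # 0.67 animals per kg

# Saja's 2 kg per chicken would instead give 0.5 animals per kg
print(animals_per_kg(2.0))
```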

• Starting a NYE donation push. An EA has already committed $40k. Aiming for $111k, though the ultimate goal is to get a group together to match and discuss opportunities that can lead to a new year of giving. (I'm recruiting many folks who haven't given substantially before. I hope to have more EAs who can fuel conversation toward effectively combating preventable suffering.) Also, experimenting with Twitter promotion in this age of Elon: https://twitter.com/bbertucc/status/1597980309256957952. Feel free to email or DM me if I'm too slow on the form reply: blake[at]philosophers[dot]group.

• Octopi are some of the most intelligent creatures, with a fascinatingly alien path to getting there and unrecognizable brain structure. I encourage anyone who doesn’t know about octopi intelligence to look into it—they aren’t social, don’t teach each other skills, don’t live long, and don’t have centralized processing but they rank among the highest intelligence we are aware of.

Something I felt was missing from the post was a mention of how intelligent the octopus and cephalopod species likely to be farmed are. I thought only a few species of octopi were intelligent, and assume many are at average or low levels of cognition for the animal world. I might prefer farming them to chickens and cows, depending on the species...

Your other points about why it would be a terrible subject for farming are compelling, and I appreciate you spelling them out so concisely. Even if the farmed species are of average perceptiveness, they might be far worse to farm than other species.

• I think this is a good article and attacks some assumptions I’ve thought were problematic. That being said, I think it’s worth elaborating on the claim that intelligence scales with moral worth. You say:

"Furthermore, it certainly is not the case that in humans we tend to associate greater intelligence with greater moral weight. Most people would not think it's acceptable to dismiss the pains of children or the elderly or cognitively impaired in virtue of them scoring lower on intelligence tests."

This seems true, and is worth further discussion. Most famously, Peter Singer has argued in Animal Liberation that intelligence doesn't determine moral worth. It also brings to mind Bentham's quote: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" We don't think children have less moral worth due to their decreased intelligence, nor do we think that less intelligent people have less moral worth, so why should we apply this standard to animals? This is why Singer has argued for equal consideration of interests between species. What does this imply, then, about how we should determine the interests of animals?

• Perhaps we may try to count the neurons involved with pain, pleasure, and other emotions, rather than neurons as a whole, and use this as a metric for moral worth. This isn't perfect, and it still has many problems, but it would probably be better than other approaches.

• Thanks, I agree on these points. In regards to focusing on neurons involved in pain or other emotions, while I agree this would be the ideal thing to look at, the problem is that there is so much disagreement in the literature about issues that would be relevant for deciding which neurons/​brain areas to include. There are positions that range from thinking that certain emotions can be localized to very specific regions to those who think that almost the whole brain is involved in every different type of experience, and lots of positions in between. So for that reason we tried to focus on more general criticisms.

• accidental duplicate post

• 1) You believe casinos should be illegal due to societal harm caused.

Here’s an important factor: if you really believe this, you’ll probably do a bad job running a casino. You’ll probably be more successful if you try to run something else.

One might naively think that this isn’t relevant to an ‘EA’ perspective, but I think it’s a pretty straightforward application of an EA principle that the ends don’t justify the means (https://​​www.lesswrong.com/​​posts/​​K9ZaZXDnL3SEmYZqB/​​ends-don-t-justify-means-among-humans).

• It sounds like the poster would be taking an active role in managing/​operating the project, so this makes sense. But would your answer be the same if he were being approached as a passive investor?

• 30 Nov 2022 14:53 UTC
2 points
0 ∶ 0

Some people here are interested in doing legal research on court dockets. For those so inclined, I submit that the following question would be more helpful to many in the community:

The Madoff trustee has basically stated that he did not pursue clawback litigation when the expected costs exceeded the expected recovery. Looking at a sample of the clawback complaints in Madoff, what does the trustee's trigger amount seem to have been? (Bonus points for separately estimating it for non-US residents, for whom the trigger may be higher.)

That result would still only be an estimate of what might happen here. But I suspect it would be far more fruitful than trying to estimate the odds that an EA figure would increase their chances of being deposed or served with discovery if they spoke freely about FTX.

• 30 Nov 2022 14:29 UTC
9 points
2 ∶ 0

While I’m sympathetic to your conclusion, it feels like a major omission to not think about added wild animal deaths due to climate change, in addition to the added human deaths. Did you look at all at the state of research on how climate change will impact wild animal populations? (Note: you might get into some morally tricky questions there. Climate change is going to hurt lots of existing species, but other species may replace species that are hurt by habitat loss and temperature changes. How exactly to morally weight that is probably more complex than just tallying up deaths).

• Not much is known about the impact of climate change on wild animals, so I excluded it. It is very complicated. First, it could still be the case that at the expected level of warming, the decrease in cold deaths of wild animals could be larger than the increase in heat deaths: fewer freezing days, but more heat waves and forest fires. Second, it might be the case that most wild animals have net negative welfare and that climate change decreases population sizes, which means fewer animals with net negative welfare will be born, and that is good in the long run. Third, animals have shorter lifespans and higher reproduction rates than humans, which means the identities of future-born animals may be much more dependent on what we (CO2-emitting beings) do, compared to the influence of our actions on the identities of future-born humans. Compare the world where we take climate measures with a business-as-usual world: already after a few years, those two worlds will contain different animals. That brings us to the difficult non-identity problem in population ethics. So... it becomes very complicated.

• Thanks for your efforts here. How likely do you think it is that the farm will succeed in creating a commercially viable product, apart from public pressure? Sounds like there are significant biological and ecological barriers.

Also, unrelatedly, it seems to me like octopus and squid would be relatively easy to create vegan alternatives to. People mostly like them for the texture; there isn't really a flavor. But I know nothing about the science of alt protein.

• Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.

Summary: When exploring/​prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/​interventions that specifically target alternative futures, as well as add a “robustness across future worlds” dimension to the ITN framework.

Epistemic status: low confidence

In cause/​intervention exploration, evaluation and prioritization, EA might be neglecting alternative future scenarios, e.g.

• alternative scenarios of the natural environment: If the future world experienced severe climate change or environmental degradation (which has serious downstream socioeconomic effects), what are the most effective interventions now to positively influence such a world?

• alternative scenarios of social forms: If the future world isn’t a capitalist world, or is different from the current world in some other important aspect, what are the most effective interventions now to positively influence such a world?

• ...

This is not about pushing for certain futures to realize. Instead, it’s about what to do given that future. Therefore, arguments against pushing for certain futures (e.g. low neglectedness) do not apply.

For example, an EA might de-prioritize pushing for future X due to its low neglectedness, but if they think X has a non-trivial probability to realize, and its realization has rich implications for cause/​intervention prioritization, then whenever doing prioritization, they need to think about “what I should do in a world where X would be realized”. This could mean:

• finding causes/​interventions that are robustly impactful across future scenarios, or

• finding causes/​interventions that specifically target future X.

In theory, the consideration of alternative futures should be captured by the ITN framework, but in practice it’s usually not. Therefore it could be valuable to add one more dimension to the ITN framework: “robustness across future worlds”.

Also, there’re different dimensions on which futures can differ. EA tends to have already considered the dimensions that are related to EA topics (e.g. which trajectory of AI is actualized), but tends to ignore the dimensions that aren’t. But this is unreasonable, as EA-topic-related dimensions aren’t necessarily the dimensions in which futures have the largest variance.

Finally, note that in some future worlds, it’s easier to have high altruistic impact than in other worlds. For example in a capitalist world, altruists seem to be at quite a disadvantage to profit-seekers; in some alternative social forms, altruism plausibly becomes much easier and more impactful, while in some other social forms, it may become even harder. In such cases, we may want to prioritize the futures that have the most potential for current altruistic interventions.

• 30 Nov 2022 14:21 UTC
9 points
2 ∶ 0

This is great! Thanks so much for donating and for writing this guide/​reflection!

• Thank you for this post, very relevant to what I'm researching: the goal misgeneralization problem.

• As far as I know, not much. But I'm personally very conflicted about what to think about this case and how to respond to it, based on information that came out in the wake of her death, which makes the case that she was probably unwell and hurting others, and that she made at least one confirmed false accusation in the past. See this statement from Kelsey Piper in particular:

The trouble is I don't know exactly how much this should change how I read her statement; so much of both what she said and what others said about her is too vague for me to easily work through. It would be terrible not to take this seriously enough, and it is a possibility I have to keep in mind that responses like this one in the wake of her death were exaggerated out of motivated reasoning. I suspect something should have been done anyway, but I'm not sure what should have been done, and as far as I know not much was done. Presumably there is still time to change that, but I don't have any ideas for how in particular. But this is one reason I suspect many people had a hard time reacting.

• In case you don’t get adequate responses here, another possibility is to reach out to Julia Wise. She’s both the person who does the most work in this area that I know of, and someone whose work Forth admired. I probably can’t give an adequate response to your question in particular (just the messy reactions I suggest above), but she might have more of a concrete idea of anything that did or didn’t happen at the institutional level.

• 30 Nov 2022 14:04 UTC
2 points
0 ∶ 0

Nice post; I'd like to add some more information. Whatever the investment, it should take into account geographical and geopolitical factors. EA is mostly concentrated in the United States and a few other countries, notably in Europe, which is in a vulnerable position right now.

There's not enough diversification in this regard, and it is a major risk. FTX probably would not have happened the same way in other countries. I doubt a similar company registered in a country where you do not need to "make money" to succeed and prove your social status would have passed under the radar as easily (e.g. Denmark, Sweden, Switzerland).

Similarly, "tech stocks" do not necessarily need to be in the US. I would suggest looking for diversification in LATAM (Brazil), Africa (Nigeria, for instance), and Asia (India/China/Indonesia notably). China is much more advanced technologically than most Americans would think, notably in terms of AI.

Finally, EAs tend to think of ROI in terms of dollars. We have seen currencies being highly volatile in the last few years. I would consider adding other comparisons (e.g. per ounce/kg of gold), as this takes into account other factors.

In short: real diversification should be non-US-centered. Then it's up to everyone to decide how US-centered their diversification should be.

• Amazing. Well done. I am proud of you!

Thank you so much for sharing your experience, it’s really helpful. I have previously wondered what the process looks like in the UK. I am sorry to hear about your mum.

• 30 Nov 2022 13:49 UTC
−3 points
0 ∶ 1

Hi there,

I believe declining is the better option, as the casino doesn't offer a longtermist approach to making the future better. Choose another project that can grow big enough to outweigh the negative impact the casino would cause.

I see this problem as a present vs. future discussion.

If you want a business-minded take, you can look up discussions of how opportunity costs are computed; it's a relevant economic concept.

• Thanks for the post. This makes sense: so many more deaths are caused by eating more chicken that it makes sense to avoid it altogether. This reminds me of the recommendations made by One Step for Animals.

However, I am a bit bothered that the climate change study doesn’t include the impact on famines, wars, infectious diseases, floods and other risks—since they are by far the biggest risks. The biggest danger of climate change isn’t heat—it’s that it changes absolutely everything else in the Earth system.

I have found another study that estimates 1 additional death per 1,000 tons emitted: https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02323/full#h5. What do you think about it?

I think this doesn't change your conclusion that meat should be avoided altogether.

• Thanks for referring to that study. That 1 death per 1,000 tons is in the same order of magnitude as the 1 death per 4,000 tons that I used based on Daniel Bressler's study. So I think the main takeaways are still valid. But yes, there is a possibility that deaths from climate-induced famines, wars, etc. are some orders of magnitude larger than deaths from temperature change.
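The "same order of magnitude" claim in this exchange is easy to check; a minimal sketch comparing the two mortality figures mentioned above:

```python
import math

deaths_per_ton_frontiers = 1 / 1000  # ~1 death per 1,000 t CO2 (Frontiers study)
deaths_per_ton_bressler = 1 / 4000   # ~1 death per 4,000 t CO2 (Bressler's estimate)

# Gap between the two estimates, measured in orders of magnitude
gap = abs(math.log10(deaths_per_ton_frontiers) - math.log10(deaths_per_ton_bressler))
print(round(gap, 2))  # 0.6, i.e. the estimates differ by less than one order of magnitude
```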

• Thank you so much for sharing!

I was only confused by this paragraph:

I can’t find anything on his work on preserving sperm for artificial insemination, apparently economically crucial. I worry that is his one negative invention.

Why do you consider this potentially negative?

Idea: Funders may want to pre-commit to awarding whoever accomplishes a certain goal (e.g. maybe some funder like Open Phil can commit to awarding a pool of money to people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution).

Detailed considerations:

This can be seen as a version of retroactive funding, but it’s special in that the funder makes a pre-commitment.

(I don’t know a lot about retroactive funding/​impact markets, so please correct me if I’m wrong on the comparisons below)

Compared to other forms of retroactive funding, this leads to the following benefits:

• less prebuilt infrastructure is needed

• provides stronger incentives to prospective “grantees”

• better funder coordination

• better grantee coordination

… but also the following detriments:

• much less flexibility

• perhaps stronger funding centralization

• potentially unhealthy competition between grantees

Compared to classical grant-proposal-based funding mechanisms, this leads to the following benefits:

• better grantee coordination

• stronger incentives for grantees

• more flexibility (i.e. grantees can use whatever strategy that works, rather than whatever strategy the funder likes)

… but also the following detriments:

• lack of funds to kickstart new projects that otherwise (i.e. without funding) wouldn't be started

• perhaps stronger funding centralization

• potentially unhealthy competition between grantees

Important points:

• The goals should probably be high-level but achievable, while being strategy-agnostic (i.e. you can use whatever morally acceptable strategies to achieve the goal). Otherwise, you lose a large part of the value from pre-committed awards—sparking creativity among prospective grantees.

• If your ultimate goal is too large and you need to decompose it into subgoals and award the subgoals, make sure your subgoals are dispersed across a diverse range of tracks/​strategies. For example, if your ultimate goal is to reduce meat consumption, you may want to set subgoals on the alt protein track, as well as on the vegan advocacy track, and various other tracks.

• Explicitly emphasize that foundation-building work will be awarded, rather than awarding only the work that completed the one last step to the goal.

• Attribute contribution using an open and transparent research process. Maybe crowdsource opinions from a diverse group of experts.

• Such research will be hard. This is IMO one of the biggest barriers to this approach, but I think it applies to other versions of retroactive funding/​impact markets too.
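The "split in proportion to contribution" mechanism from the idea above can be sketched in a few lines; the org names and contribution scores below are hypothetical, purely for illustration:

```python
def split_pool(pool: float, contributions: dict) -> dict:
    """Split a pre-committed prize pool in proportion to assessed contribution scores."""
    total = sum(contributions.values())
    return {name: pool * score / total for name, score in contributions.items()}

# Hypothetical example: a $1.2M pool, with contribution scores from an expert panel
awards = split_pool(1_200_000, {"alt-protein org": 3, "vegan advocacy org": 2, "research org": 1})
print(awards)  # {'alt-protein org': 600000.0, 'vegan advocacy org': 400000.0, 'research org': 200000.0}
```

The hard part, as the post notes, is producing the contribution scores themselves, not the arithmetic.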

• Four podcasts on animal advocacy that I recommend:

• Freedom of Species (part of 3CR radio station)
Covers a wide range of topics relevant to animal advocacy, from protest campaigns to wild animal suffering to VR. More of its episodes are on the “protest campaigns” end which is less popular in EA, but I think it’s good to have an alternative perspective, if only for some diversification.

• Knowing Animals (hosted by Josh Milburn)
An academic-leaning podcast that focuses on Critical Animal Studies, which IMO is like the academic equivalent of animal advocacy. Most guests are academics in philosophy, humanities and social sciences. (and btw, one episode discussed wild animal suffering, and I liked that episode quite a lot)

• The Sentience Institute Podcast
EA-aligned. Covers topics ranging from alt proteins to animal-focused impact investing to local animal advocacy groups to digital sentience.

• Animal Rights: The Abolitionist Approach Commentary (by Gary L. Francione)
A valuable perspective that’s not commonly seen in EA. Recommended for diversification.

Off-topic: I also recommend the Nonlinear Library podcasts; they turn posts on EA Forum and other adjacent forums (LW, AF) to audio. There’re different versions that form a series, including a version containing all-time top posts of EA Forum. There’s also a version containing the latest posts meeting a not-very-high karma bar—I use that version to keep track of EA news, and it saved me a lot of time.

• It seems like every now and again someone suggests cardiovascular disease as a potential high-impact cause area on the EA forum. The problem is tractability. It’s really hard to convince people to eat better, exercise more and stop smoking. Doctors spend a lot of time trying to do this and billions have been spent on public health campaigns trying to convince people to do this. The medications that treat cholesterol, hypertension, and diabetes are among the most commonly prescribed in the world already.

You've identified a serious problem, but I don't see a cost-effective solution.

• I agree that lifestyle changes are hard to do, but I would like to push back in two ways:

• Currently, it isn't standard practice to measure apoB and base treatment on that marker. Doing so would result in doctors prescribing medication to people who are at risk but are currently unaware of it.

We calculated the number of clinical events prevented by a high-risk treatment regimen of all those >70th percentile of the US adult population using each of the 3 markers. Over a 10-year period, a non-HDL-C strategy would prevent 300 000 more events than an LDL-C strategy, whereas an apoB strategy would prevent 500 000 more events than a non-HDL-C strategy.

• Furthermore, there is the option of early treatment with medication, which currently isn't deployed. As I described in the article, you can have a lifetime risk of 39-70% while also having a 10-year risk <10%, which means you won't get treatment, even though it seems plausible that these people would benefit a lot from it.

To summarize it bluntly, it seems that the world would benefit from prescribing more cholesterol lowering medication. Advocacy for doing this would be the cost-effective solution.

Having said that, I didn’t start writing this article while having EA in mind. So I haven’t done an intensive cost/​benefit analysis.

• Great work. I got malaria in 2016 for a clinical trial of a novel anti-malarial (results published here). I was paid AUD 2,880 and gave it all to the Against Malaria Foundation. It's one of the best things I've ever done.

• It's great that you know the results. While relatively minor in the grand scheme of things, it's frustrating that trials, at least here in the US, don't often share results with participants, even though it's theoretically as simple as a mass email along the lines of "here's what we learned" — presumably an email they're already sending to colleagues, funders, etc., in some form. I had to ask the people running the Shigella trial for my data (not available yet, but I really wanna see if I got the placebo or not)!

• Of course, we can also try to spot asteroids in the sky. Astronomers have identified a large majority of near-Earth asteroids[3] larger than 1 km across, and many smaller examples. From these surveys, we know that the chance of an Earth impact for asteroids 1-10 km in diameter in an average century is about 1 in 6,000, and about 1 in 1.5 million for asteroids larger than 10 km across — that is, roughly the size of the asteroid that caused the Cretaceous–Paleogene mass (dinosaur) extinction event.

Interestingly enough, the importance of asteroid size might be overestimated compared to impact angle and impact site. The asteroid that killed the dinosaurs wouldn't have been nearly as deadly had it not struck one of the worst possible places at one of the worst possible angles. This 2017 paper used computer models to see if the rock composition of the impact site could've made a difference. The computer calculated the amount of soot and sulfates that would be ejected into the atmosphere, as well as what that would mean for our planet, since both soot and sulfates can block the sun's light. The blocked-out sun started a global winter that lasted years, and this is what killed the dinosaurs, not the impact of the asteroid directly.
The researchers found that the composition of the impact site was especially unlucky. And since the Earth is constantly spinning and moving in space, this means that if the asteroid had arrived just a couple of minutes later, it wouldn't have hit such a problematic piece of land, or might even have hit the ocean, where a lot of its impact would have been lessened (in terms of the amount of rock that got ejected into the atmosphere). This 2020 paper concluded that the asteroid hit from a pretty steep angle, about 45-60 degrees. This vaporized more rock than a shallow strike and released more climate-changing gases than other angles, with 2-3 times as much carbon dioxide released as a vertical impact and 10 times as much as a shallow impact. Seeing how unlucky the impact timing was suggests that asteroids probably aren't as big a risk as they are imagined to be. And even if we don't develop the technology to completely deflect asteroids, changing the angle or delaying one so it hits a different impact site might be enough to change a mass extinction into a mere disaster.

• Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post.)

Link to the talk; alternative version with clearer audio, whose contents, I guess, are similar, but I'm not sure. (This shortform doesn't cover all the content of the talk, and has likely misinterpreted something in it; I recommend you listen to the full talk.)

Epistemic status: an attempt at steelmanning the arguments, though I didn't really try hard — I just wrote down some arguments that occur to me.

The claim: Creating a mass social movement around animals is more effective than top-to-bottom interventions (e.g.
policy) and other interventions like vegan advocacy, at least on current margins.

• This is not to say policy work isn't important, just that it comes into the picture later.

• My impression is that the track record of mass movements in creating change is no less impressive than that of policy reforms, but EA seems to have completely neglected the former.

A model of mass movements:

• Analogous to historic movements like the civil rights movement in the US, and recent movements like Extinction Rebellion. Both examples underwent exponential growth, which will be explained in the next bullet point.

• You start with a pool of people in the movement, and these people go out and try to grab attention for the movement, using tactics like civil disobedience and protests. Exposure to the ideas leads to more people thinking about them, which in turn leads to more people joining. With the enlarged pool, you start the cycle again. This leads to an exponentially growing pool.

• After the movement is large enough and has enough influence, policy reforms and other interventions aimed at the top of society will become viable.

• Research showed that few, if any, movements failed after reaching a size threshold of 3.5% of the entire population.

• Many movements died down because their base number of exponentiation is smaller than 1, but successful movements can have a much higher base number.

• Other interventions like vegan outreach and policy work may also have similar exponential growth, but it's plausible that their base numbers are much less likely to be >1 (or to be very high) when compared with mass social movements.

Strategies for mass movements:

• Strategy is super important!

• Start from your ultimate goal (e.g. stop animal exploitation), then set milestones for achieving this goal, and then design concrete actions and campaigns in service of those milestones.
• One key point is "escalation"—how to make the movement grow exponentially starting from the initial pool:

• You need to be momentum-driven: convert the attention you get into new movement members, seize more attention with your enlarged membership, and repeat the cycle.

• You need to force people in the general public to take sides, possibly by non-violent disruptions and making salient sacrifices (e.g. arrests).

• You may need to show concrete demands (rather than abstract ones) that resonate with people.

• Another key point is "absorption"—when large numbers of new members join the movement, how to rapidly and effectively absorb them:

• Decentralized movements can absorb more rapidly (e.g. people trained can go off independently and train other new people).

• There's no silver bullet; we still need deep thinking and discussions and coordination to guide our strategy.

Do you think it's worth expanding into a top-level post? Please vote on my comment below.

• Statement: This shortform is worth expanding into a top-level post. Please cast an upvote/downvote on this comment to indicate agreement/disagreement with the above statement. Please don't hesitate to cast downvotes. If you think it's valuable, it'll be really great if you are willing to write this post, as I likely won't have time to do that. Please reach out if you're interested—I'd be happy to help by providing feedback etc., though I'm no expert on this topic.

• 30 Nov 2022 9:30 UTC
13 points
1 ∶ 0

The credits mention a transcriptionist, but I can't find a transcript... is there one?

• Thanks for doing this, Spencer. Excited to listen to it.

• I don't think we should give too much information value to SBF's interviews, considering his track record and his writing that ethics was mostly a front to build his reputation.

• Thanks for the link and highlights!

Sam claims that he donated to Republicans: "I donated to both parties. I donated about the same amount to both parties (...)
All my Republican donations were dark (...) and the reason was not for regulatory reasons—it’s just that reporters freak the fuck out if you donate to Republicans [inaudible] they’re all liberal, and I didn’t want to have that fight”. If true, this seems to fit the notion that Sam didn’t just donate to look good (i.e. he donated at least partly because of his personal altruistic beliefs). What do you mean that this donation strategy would stem from Sam’s “personal altruistic beliefs”? Donating equally to both political parties has been a common strategy among major corporations for a long time. It’s a way for them to push their own agenda in government. It’s generally an amoral, self-interested strategy, not an altruistic one. • I read somewhere (sorry, can’t remember where) that he only donated to Republicans who were pushing longtermist things like pandemic preparedness • In this case, it seems like a very good strategy for the world, too, in that it doesn’t politicize one issue too much (the way climate change has been politicized in the US because it was tied to Democrats instead of both sides of the aisle). • That’s a good point. I hadn’t thought about that. I’ve added your observations to that part. • In this hypothetical it sounds like you think the harm done will occur regardless, and so the tradeoff is between the additional benefits you can accrue from your ROI vs. the moral discomfort you will have in buying in? So there’s both a moral uncertainty (based on how strongly you personally think casinos should be illegal, and how strongly you think the ends justify the means, i.e. how important it is that you aren’t the one responsible for the harm here) and an empirical question of how much additional ROI you are actually getting compared to the next best option. It also depends on whether this is something done in a personal capacity vs. something that an EA organization is doing, because then there are other considerations that are harder to measure.
E.g., even if you fully take the “ends justify the means” view, if CEA buys out a bunch of casinos, what negative effect does that have on movement building, and is that worth the marginal extra ROI compared to the next best investment? • I don’t have a lot of experience with non-profit boards, but I have been involved with boards of C corps, so that’s the bias behind my suggestion/question: Is there an accounting firm that RP works with? Some modern accounting firms provide monthly financial reviews, partial CFO hours for budgeting and financial forecasting, HR, and tax & compliance support, and their fee structure would work well with the paid-board-member type of compensation. So one thing to consider is adding an unpaid board member who can interface with compensated professional external help. Maybe there’s already something like this in place, in which case this suggestion may not add much, but I wanted to put it out there in case it’s helpful. • 30 Nov 2022 7:02 UTC 2 points 0 ∶ 0 Thanks for sharing! In exchange for a 50% reduction in developing major vascular events. Where did you get this figure from? It seems too high to me (sorry, I just skimmed and searched for various iterations of “50” in the text). I did find this in the text, though, which is more in line with my understanding of the evidence: Collaborators which demonstrated a 25% reduction in major vascular events...per 1.0mmol/L reduction in LDL-C Even if true, this figure is likely to be more relevant for high-risk individuals, and it would be a mistake to extrapolate those risk-reduction figures to an otherwise healthy 30-year-old with no risk factors (not suggesting you have done this). • The 50% is just a guess that I tried to make plausible in the previous parts. This is also why I wrote this article: to discuss whether this guess is correct.
To me it seems plausible that a mild reduction in your LDL-C/apoB during your whole life will have a lot of impact, because the disease takes 4 decades to develop. Current treatment strategies treat very aggressively, but only in the last decade, at which point you have already collected 3 decades’ worth of plaque. This study I highlighted also seems to point in that direction. For instance, a paper from 2006, Sequence Variations in PCSK9, Low LDL, and Protection against Coronary Heart Disease, looks at the presence of mutations in a gene called PCSK9 which are associated with lowered LDL-C and apoB. Black participants had a 28 percent reduction in mean LDL-C and an 88 percent reduction in the risk of coronary heart disease[6]. White participants had a 15 percent reduction in LDL-C and a 47 percent reduction in the risk of coronary heart disease. • RE: the 2006 paper: unless you have that gene mutation yourself, those findings aren’t particularly relevant to you. If I’m interpreting that quote right, the association of that gene mutation with both a 15% reduction in LDL-C and a 47% reduction in CHD doesn’t mean that if you reduce your LDL-C by 15% in other ways (e.g. via statins) you will reproduce the reduction in CHD risk. Here’s a systematic review of 18 RCTs that pushes back on the 50% figure. The 50% is just a guess that I tried to make plausible in the previous parts. I’m not following, sorry—do you mean you added up the 25% reduction in major vascular events, the 15% reduction in vascular mortality, and the 9% reduction in risk of all-cause mortality to get to ~50%? Or are you saying you expect a 2mmol/L reduction in LDL-C, because it’s a 25% reduction per mmol/L, and 25*2 is 50? The first approach double-counts, and the second is pretty unlikely given that the reference range for someone with normal LDL-C levels is 2.6 mmol/L.
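One way to see why the two approaches give different answers: a “25% reduction per 1.0 mmol/L” compounds multiplicatively rather than adding up. A quick sketch of that arithmetic (my own illustration, using the 0.75-per-mmol/L relative risk implied by the quote above; not medical advice):

```python
# Relative risk of major vascular events after an LDL-C reduction, assuming
# the "25% reduction per 1.0 mmol/L" effect compounds multiplicatively.
# Illustrative arithmetic only.

def relative_risk(ldl_reduction_mmol_l: float, rr_per_mmol: float = 0.75) -> float:
    return rr_per_mmol ** ldl_reduction_mmol_l

# Even a (large) 2.0 mmol/L reduction gives 0.75**2 = 0.5625, i.e. a ~44%
# risk reduction -- not the 50% you'd get by naively adding 25% + 25%.
print(1 - relative_risk(2.0))  # 0.4375
```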
To me it seems plausible that a mild reduction in your LDL-C/apoB during your whole life will have a lot of impact Sure, but a 25% and a 50% reduction could both be interpreted as “a lot of impact”. I’m not suggesting it’s not possible for 50% to be correct. I’m just not confident the evidence you’ve provided makes a strong case for it, and making sure you’re right about whether it’s 25% or 50% for the population with no risk factors is pretty decision-relevant if the tradeoff is a 30% increase in diabetes. I’m probably going to check out here, but this is definitely a conversation to continue having with your doctor, who will hopefully have a better sense of the evidence base and understanding of your medical history and needs than I do. I do think all the things mentioned in your “first line of defence” are great and important to continue working on regardless of what decision you end up going with RE: statins. Good luck! (not medical advice etc.) • Thanks for all the engagement; let’s see if I can clarify. • The 50% reduction would be a lifetime reduction for people starting in their 20s and 30s with no obvious risk factors. • The 50% number has been pulled out of thin air, but it’s a number that seems plausible to me. When I say plausible, I mean that given what I have read so far, it wouldn’t surprise me if that were the case. • There is indirect evidence based on the PCSK9 genetic mutation, but as you said, this doesn’t guarantee that you would achieve the same results by lowering LDL-C artificially. But it does make it more plausible. • Mechanistically, it also seems plausible to me, because plaque accumulation happens over the span of 4 decades. • Your meta-analysis doesn’t push back on this number, because it covers a relatively short treatment period: its selection criteria are a treatment period of at least one year and a follow-up of six months. As far as I’m aware, no trial with a 10-year period or longer exists.
What I’m discussing here, when I say the 50% number, is a 60-year treatment period. I wish I could give a number for lifetime treatment of atherosclerosis in a population with no obvious risk factors, because that is exactly what I’m looking for! • I really dislike the term Mass Good, but I like the speculation on what alternative names for EA there could be. Mass Good sounds very clunky, with lots of back-of-the-mouth vowels I dislike hearing. It also originally made me think of “mass” as in a Catholic mass, then “mass” as in matter (making me click on the article, thinking I’d see a funny post about a cause area devoted to maximizing the amount of mass in the universe or something). • In the past, I spoke to a Rethink Priorities board member about a project idea that nominally competed with RP (it was a certain kind of think tank). The board member and I then discussed the idea, which had actually been suggested by a respected Rethink staff member prior to this meeting. The above seems good and seems like what we want. I think it is possible because we believe we are aligned in goals, and share trust and values. As many know, board members are nominally legally obligated to maximize the interests of the non-profits they serve, not the interests of EA or of the EA cause area the organization works in. In theory, changing the board could bring in people who see things differently and act differently. It’s not impossible that a very impressive and strong board would change norms. I think what’s written above is already understood. In the case of RP, I don’t think this is a real danger. I wanted to say the above. Not everything in EA is unexceptional. • There’s an academic literature (and case law) concerning what interests a board can/should consider when making decisions, and this may vary to some extent with state law. It can get tricky, but the scope isn’t as narrow as some people may assume.
Don’t want to derail this thread, but I did feel I should flag the complexity of this topic for the benefit of those who serve as board members. Hopefully organizations are offering good training/onboarding for board members, but I don’t think that always happens. Maybe the community could think about funding webinars on nonprofit corporate governance and encouraging orgs to send new (or current) board members. This should be pretty cheap; the main cost would likely be the members’ time. • This seems great and I appreciate your contributions. It seems tough, but it would be useful if you or someone else shared information about this. This content could be academic legal material or, just as importantly, public articles or statements from senior people serving on non-profit boards that confirm it. Respectfully, I’ve talked with many lawyers of great quality, including on the board I was involved in. Overall, I’m worried about the substance and practicalities of this thread. I expect that most board members and officers will just “round down” and obey the instructions in the documents they signed, which seemed quite unequivocal to me about conflicts. On the object level, one complication is that, like many, my non-profit was registered as both a 501(c)(3) in the US and a charity in the UK. Layering on jurisdictions and other complications probably increases the issues in practice, and seems impractical to attend to. Other issues involve board dynamics (these increase with board activity, which you aim on principle to increase). Based on my experiences, I think principled but “de jure” violations can be weaponized by opposing factions on a board. The board member I spoke to at Rethink Priorities was not at all the most junior board member, and clearly expressed concern about the conflict themselves. As mentioned above, the issue with advising arose because the idea involved the creation of a new think tank, which seems like a clear conflict.
This could be under the umbrella of Rethink Priorities, or not. The considerations about which way to do this are immensely complicated, and probably only understood by a small group of people with an enormous amount of context. • I sense that you are referring to a specific past situation that I don’t think it is helpful to attempt to hash out here (although I have no idea what the backstory is). A brief discussion of the various constituencies a board needs to consider can be found at https://corpgov.law.harvard.edu/2012/04/15/nonprofit-corporate-governance-the-boards-role/, but I’m sure someone could do better with more Google searches. If an organization’s management is dictating to the board what its job is, or is controlling the onboarding process, I think that organization has a board problem. I don’t tell my supervisor at work how to do his job. • Hi Jason, I think my example gives good intuition for why conflict-of-interest and de jure constraints can be bad. There is no further subtext. As an aside, because of where we are, and given the ideas in your other comments, I want to say, on the subject of lawyers and other ideas about institutions, that I do not share anything like Habryka’s aesthetics, which you spent a lot of time pushing back on. I am not from California and I did not come through an EA club at some HYPS school. I’m grateful for your discussions on this and other legal matters. • Writing to onlookers: what Jason said about EA orgs adding to or changing their boards to have 2-3 non-EA board members, especially for an organization with 4 directors, is something you might do to an organization after a major crisis such as misconduct by management, a major pivot, or perhaps a massive funding change. To calibrate and give intuition: if we were talking about an employee rather than an organization, the magnitude of this change would be like being put on a PIP, being demoted, or being moved to another department involuntarily.
If you changed the board this way and it was done poorly (or sometimes even well), many executives or staff would consider leaving. The issue at hand is changing the board from the close network containing the CEO and, often, close friends. Yes, independent governance is often nil and the CEO often dominates decisions in the modal start-up (almost all of them, really) as well as in most small nonprofits. This happens everywhere, including smaller organizations in EA. I’m 80% sure this was how GiveWell was built. Nil governance could be bad or good, but the advice being discussed here is far too basic. If there actually were misconduct on the level of the FTX fraud, this advice could easily be co-opted. For example, Tyler Shultz was a relative of a Theranos board member and extensively explained the outright fraud to his board-member relative, and was ignored. SBF could have constructed a performative board to dominate as well. The level of discussion being given to this on the EA forum is low and risks cargo culting (wasting time working on processes that need true management ability to be effective) or creating systemic issues, e.g. “Matthew effects” (board members become a currency; orgs that can attract them win the game of funding). There is unlikely to be an unusually high base rate of fraud in current respected EA organizations. Ultimately, the limiting issue in EA is management and talent, and people have worked on this for a long time. Some reactions can be counterproductive. • To clarify, I said that “organizations” should “aim” for “at least one—preferably two or even three—board members who are not ‘full-time’ EAs.” That statement did not refer to RP, and was not intended to suggest that an organization with four directors should immediately jump to adding 2-3 board members in the category I indicated. I also didn’t specify a board size -- “even 3” makes more sense for a 9+ member board than for a smaller one.
• (Not necessarily on the above project.) I believe that, in theory, if it were net positive for impact, Peter, Marcus, and Abraham would agree, and would even use resources that don’t directly improve Rethink Priorities to help start a new think tank or other entity. In fact, as an EA, I would feel obligated to do so if on balance it made sense. I could easily see such actions being opposed by an outside, muscular board member who has other visions for Rethink Priorities and won’t understand or care about the considerations. This might not be absolutely wrong, but it would alter the EA landscape in complicated ways. • Out of curiosity, where does the money come from? • Thanks for asking! Manifold has received a grant to promote charitable prediction markets, which we can regrant from. But otherwise, we could also fund these donations via mana purchases (some of our users buy more mana if they run out, or to support Manifold.markets). • I just want to flag that I’m particularly excited about the paid board member positions. I think that having a designated board member or two be formally responsible for spending a solid chunk of time each month going through a list of check-ups, maintenance tasks, and other duties could be really promising. I look forward to working with whoever takes these roles to figure out how strong nonprofit boards should really work when supported by regular ongoing work. There is clearly a lot to figure out in making charity boards go very well. If we can make this happen, I’d feel more confident in the future of RP. We could also take some of the lessons learned and recommend them to other EA organizations. • Strong upvote. I’d add that organizations should aim to have at least one—preferably two or even three—board members who are not “full-time” EAs. Diversity of perspectives is important—for instance, a tech company’s board should include people outside tech.
It’s an important way to mitigate the risk of groupthink that is inevitable in any tightly-knit community. • I trust your intentions and your ideas seem extremely valuable. It would be good to get a description, with a deep understanding of the causal relationships, of how a larger board, board quality, or governance in general would have prevented the FTX collapse, especially in a deceptive environment like FTX, where low-quality efforts can be co-opted. Famously, such co-option probably happened at Theranos. I’d add that organizations should aim to have at least one—preferably two or even three—board members who are not “full-time” EAs. Just as importantly, it would be good to have a detailed understanding of how boards would be involved in improving EA org operations. Diversity of perspectives is important It’s an important way to mitigate the risk of groupthink that is inevitable in any tightly-knit community. Frankly, this recent governance thread on the forum has traits that, from the outside, seem to reflect local online trends rather than substance. This can produce “the wrong hill to climb”, wasting effort, disillusioning people, or being co-opted. To give a concrete sense of this issue, there is currently a post on the EA forum from an EA org looking for board members. One of the commenters does object-level work orthogonal to EA efforts, and seems to be one of the few people who understood to some degree the risks of FTX and did not seek FTX funding or associations. Another commenter is a longtime EA, associated with the org, who, like almost everyone else, did less to avoid FTX funding. The person who avoided FTX gave a detailed comment that added in-depth considerations to changing governance, and mentioned creating a novel new think tank. The person more associated with the organization, on the other hand, gave a fairly generic positive comment. This is the response to the two comments.
In the past, I was lucky to have had the chance to speak to Ozzie, who is one of the strongest and most principled people in EA, about his view of increasing board activity. If I understood and recall correctly, like him I imagined using boards as a device to seat and empower talent and to provide institutional governance. However, much of the response to these ideas about boards from other people, including senior people, was negative. The general view is that boards can be negative and easy to execute poorly. I think I now agree with both views. • Quick things: > It would be good to get a description with deep understanding or causal relationships for how a larger board, board quality, or governance in general would have prevented the FTX collapse, especially in a deceptive environment, like FTX, where low quality efforts can be coopted. I think it would have been tough for a non-FTX board to have fixed the issue. However, if the boards of EA orgs that heavily interacted with FTX had really been on their game, maybe they could have realized that EA should have been more cautious around FTX, and taken corresponding actions. I think FTX itself basically didn’t have a board, and if it did, it could have been much better too. > Just as importantly, it would be good to have a detailed understanding of how boards would be involved in improving EA org operation. I think of the board as the ED’s boss. If the org isn’t doing a great job, it’s kind of the ED’s responsibility. If the ED isn’t doing a good job, it’s sort of the board’s responsibility. Boards do have limited abilities in practice (it’s a huge pain to actually fire an ED), but they definitely have some power. I think good boards help prevent corruption, align incentives for EDs, and help choose new EDs when needed. > Frankly, this recent governance thread on the forum, has traits that, from the outside, seem to reflect local online trends instead of substance.
This can produce “the wrong hill to climb”, wasting effort, disillusioning people, or be coopted. I see it a bit more like a “window of opportunity/interest”. My hunch is that a lot of EA orgs have struggled a bit with middle/upper management (this is very common for orgs!), and the board seems like a good place to help improve things. • Thanks for the thoughts here. (And the kind words!) There’s a lot going on here, and I’m finding it a bit terse and subtle. Maybe it would help to discuss some of this privately? (That might help with directness a bit.) Feel free to send me a PM to chat there, or to have a call, if that could be useful. • Summary: How to improve shrimp welfare:
1. Don’t cut their eyes off
   1. Aka “eyestalk ablation”, this is routinely done to female shrimp to increase fertility
   2. “shrimps have a recoil reaction to ablation (Taylor et al., 2004)”
   3. “ablated shrimps were more likely to flick their tails and rub the area of the wound (Diarte-Plata et al., 2012)”
   4. “when the wound was covered or anaesthetic administered, the shrimps reduced these responses”
   5. Eyestalk ablation also negatively impacts mortality, biomarkers of stress, and weight loss
2. Combat disease with protocols covering disinfection, biosecurity, hygiene, and pond preparation
3. Slaughter via electrical stunning
   1. “Shrimps are typically slaughtered by either asphyxiation (suffocation) or immersion in ice slurry (chilling) (Weis 2022, at 2min,15sec)”
   2. Electric stunning seems like a more reliable way to quickly render shrimps unconscious
4. Only keep 6-15 shrimps per m2
   1. (Note that this number is tentative)
   2. Shrimps tend to distance themselves at high density (Da Costa et al. 2016)
   3. High stocking density is associated with: difficulty accessing feed; reduced water quality (dissolved oxygen, un-ionised ammonia, water hardness); increased disease; increased mortality; physical injury; cannibalism; increased serotonin (a stress biomarker); and more frequent movement (stress-related?)
   4. At densities below 6 shrimps/m2, “dominance hierarchies become more prominent, and feed consumption diminishes due to the absence of social cues (Bardera et al., 2020)”; fish studies have also shown negative effects of very low density
5. Enrich environments
   1. Including “feeding methods that mimic natural behaviours, hiding sites, different tank shapes and colours, plants, substrates, and sediments.”
   2. “In the wild, decapods spend much of their time sheltering in the dark, and should therefore be given access to dark environments (Birch et al., 2021, p.70).”
   3. For fish, environmental enrichment has led to “increased social interaction, less abnormal behaviour, and reduced captivity-related stress (Arechavala-Lopez et al., 2022)”
   4. “For L. vannamei maturation tanks, tanks with dark backgrounds and rounded shape are recommended (FAO, 2003, p.22).”
6. Improve handling practices
   1. Reduce injury from trawling (fishing by pulling a net behind a boat)
   2. Impose a maximum packing weight to prevent shrimps from being crushed or suffocated (hypoxia)
7. Improve nutrition
   1. Inadequate nutrition leads to soft shell syndrome, aggression, and cannibalism
   2. “Overfeeding, on the other hand, may lead to the build-up of toxic ammonia (Alune, 2020) and increased turbidity”
8. Improve water quality
   1. Dissolved oxygen between 5 and 8 mg/L
   2. Un-ionised ammonia <0.05 mg/L
   3. pH of 7.8 to 8.2
   4. Temperatures of 28-30°C
   5. Salinities between 0.5‰ and 45‰
The report focused on whiteleg shrimp “due to the scale and intensity of farming (~171-405 billion globally per annum) (Mood and Brooke, 2019)”. • I tried to extract the key data because I was wondering how anyone can know what shrimp like. They’re simple alien creatures.
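The water-quality targets listed in the summary above can be collected into a simple range checker. The numeric ranges come from the summary; the parameter names and the checker itself are my own illustration:

```python
# Recommended water-quality ranges for farmed whiteleg shrimp, as listed in
# the report summary above. Parameter naming and the checker are
# illustrative, not from the report.
RANGES = {
    "dissolved_oxygen_mg_l": (5.0, 8.0),
    "unionised_ammonia_mg_l": (0.0, 0.05),
    "ph": (7.8, 8.2),
    "temperature_c": (28.0, 30.0),
    "salinity_permille": (0.5, 45.0),
}

def out_of_range(readings: dict) -> list:
    """Names of parameters whose readings fall outside the recommended range."""
    return [
        name for name, value in readings.items()
        if name in RANGES and not (RANGES[name][0] <= value <= RANGES[name][1])
    ]

print(out_of_range({"ph": 8.5, "temperature_c": 29.0}))  # ['ph']
```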
It seems like the presence of these factors is used to indicate “this is bad for shrimps”: • Mortality • Disease • Stress biomarkers (e.g. serotonin) • Recoil reflex (i.e. to eyestalk ablation) • An injury-rubbing behaviour that diminishes when the wound is covered or anaesthetic is given • Cannibalism • Deviation from natural behaviour • Physical injury • More movement (?) Going through this was a useful exercise. At first I thought the report was making the silly claim of knowing the preferences of shrimps. Actually it’s mostly a much more commonsense “how do we prevent shrimps from dying, getting diseases, or being physically mutilated”. • Nice work and what a great line-up. Tuning in! • Hi Alene! 1. I like this! In line with some of the past discussions of the name ‘EA,’ I always felt a bit awkward leaning into the group name for most of the reasons you note above. I was also struck by a recent episode of Bad Takes (Laura McGann and Matt Yglesias’ new podcast) about SBF, in which she describes not wanting to like EA because of its ‘we—a bunch of nerds—have figured it out’ vibe. She was eventually positive after learning more, but it seems really bad to be screening out people like that. 2. Scale seems to be one of the most important drivers of the things EAs care about (factory farming, malaria prevention, future generations). +1 for ‘Mass’ capturing that in an intuitive way. Though, to be fair, I haven’t spent any time exploring other name ideas. 3. Minor point: ‘Mass Good’ struck me as having a religious undertone (maybe just because of the word ‘mass’). I actually kind of liked it for that reason! As much as some want to avoid it, EA really does feel like a secular religion to me—it’s a community with shared values, supporting one another in pursuit of living those values. What’s not to like? I’m not confident this is the right rebranding, but a community shake-up might be the right time to think seriously about one. So I’m glad you wrote this!
• Thanks for sharing, Alene! I agree that effective altruism isn’t a great name, but I disagree that Mass Good is a better one. I unfortunately don’t have time to discuss all my reasons. With that said, I really do appreciate you taking the time to write up and share your arguments, and I would welcome more thoughts about how best to communicate the ideas of EA. In addition to the above, I also want to use this comment to signal that I would like a way to agree- and disagree-vote on posts, rather than just comments. I don’t want to downvote well-intended posts that I disagree with, but I do want a way to signal my level of agreement with them, and to observe other people’s degree of agreement with the post. • 30 Nov 2022 0:48 UTC 4 points 2 ∶ 1 I don’t think the primary reason rich people are rich is frugal lifestyles. I would also be surprised if the primary reason once-rich people become poor were extravagant consumption (as opposed to bad investments).[1] The takeaway is that for normal, non-wealthy people, there are two ways to gain praise for altruism: donations and lifestyle. In contrast, the ultra-wealthy have only one avenue for praise: donations. This is very bad To the extent that you think shaming works, you should/could instead shame people for not donating enough to charities, or for donating to obviously-ineffective-but-feel-good causes. I continue to think that, in general, we care way too much about the personal virtues and excesses of powerful people, and not enough about their consequences on the world. 1. ^ Though I don’t have data on this; happy to be corrected otherwise. • I never claimed that the rich are rich because they are frugal. I made the normative claim that the rich ought to be much more frugal, and that in order for them to do that, we should praise their frugality as altruism (so long as they donate the money and don’t pass it down as generational wealth).
I also never claimed that the rich “become poor” through spending. The rich seldom truly become poor, even if they make terrible investments. I agree with your last point—that’s precisely the kind of argument I am trying to make with this piece: we care far too much about the ultra-wealthy’s virtue signaling and not enough about what they actually do with their money. • Hi Alene! Thanks a lot for sharing your thoughts! You might find the discussion under this post interesting and relevant: Some quick notes on “effective altruism” • What, if anything, does this imply about the hundreds of millions of insecticide-treated bednets we have helped distribute? • 29 Nov 2022 21:51 UTC 9 points 2 ∶ 0 Thanks for posting these! It’s helpful both as a nudge to participate and as a way of keeping up with human challenge trial research. • I love this. Living more simply does so much good for so many reasons. I would extend this as a challenge to the EA community, as well as something to be praised in the ultra-rich. Living more simply creates value on so many fronts, no matter how rich you are, including... • Minimises waste • Minimises carbon emissions • Minimises spending to maximise giving • Builds integrity in your EA position (more than just a bunch of rich tech bros who want to feel good about themselves), and forges a small degree of solidarity with the poor we claim to be supporting • (Perhaps most importantly) Creates curiosity from others as to why you live simply, allowing EA evangelistic opportunities ;). I would say, however, that living simply is so much more than doing nothing. The norm of modern society is to spend as much as (or more than) you earn, so it takes great thought, discipline, and even sacrifice to live more simply. It’s far, far harder than doing nothing. I have wondered why effective altruism doesn’t make a bigger deal of simplicity within our own community, especially from an evangelistic point of view, where I think it can work wonders.
• Thanks for the comment! I agree with all of your arguments for value creation—thanks for expanding the claims in the original post. Fair point that living simply is far from doing nothing—sort of a glib title, I suppose. Simple living is a key tenet of Singer’s ethics, so it was definitely emphasized in early EA, but I agree we have strayed from those roots. It’s worth thinking about our actions as individuals and as a community through this lens, too—maybe people earning to give should set spending thresholds, maybe EAG should be held virtually, etc. Interestingly, I think we’ve lost some of this frugality rhetoric because it is dangerous from an “evangelistic” POV. Telling people they should give more to charity is one thing, but telling them they need to buy less stuff and also give more is even harder... • Thanks for the post! Super cool to show why/how this matters more to smaller donors! • Great post—I think this is a really important meta-topic within EA that doesn’t get enough airtime. It might also be worth considering the “hidden zero problem” coined by Mark Budolfson and Dean Spears here. The thrust of their argument is that if a charity is funded by the ultra-rich or their foundations, small donations may have measurably zero impact. As an example: suppose NGO X wants $10M in funding for 2022, and Foundation X has been NGO X’s largest donor for a few years running. If small donors give NGO X $8M in 2022, Foundation X will give $2M to fully fund it to $10M; but if small donors give $9M, Foundation X will give $1M and still fully fund it to $10M. This means that some of the small donations had zero impact other than saving Foundation X some cash. Off the top of my head, there are a few obvious problems with the hidden-zero problem: 1. Foundations having more money isn’t necessarily a bad thing, especially if they give their assets away relatively quickly and effectively. 2.
How are we supposed to know how much a certain real foundation like Open Philanthropy plans to give certain organizations? 3. Many charities don’t have such cut-and-dried budgets and fundraising goals. E.g., if GiveDirectly gets more money in 2022, it will simply give away more money by expanding the number of recipients and/or its geographical operations. Regardless, Budolfson and Spears did a lot of fancy math to show the hidden zero problem is worth taking seriously in many cases, especially within EA. All that being said, it’s not clear to me how the hidden zero problem impacts your claim here. On one hand, if we intentionally diversify funding sources, charities might raise their budgets and demand the same amount from big foundations. However, if these foundations see that more money is coming in from more donors, they might decide the charity/cause is no longer “neglected” and choose to reduce the size of their grant. Would love to hear thoughts on this from people more deeply entrenched in the grant-making world... • Where in Cambridge will this take place (accommodation / venue)? • Is there compensation for both students and mentors? • Will you provide/subsidize access to GPUs? • Great post—and I agree that it’s too narrow to ever claim that one is simply doing ethics through a singular lens. Another interesting facet of this: as we make moral theories more complex by “Band-Aiding” their flaws, we end up incorporating aspects of different moral theories. Take rule consequentialism, which states that “an act is morally wrong if and only if it is forbidden by rules justified by their consequences” (SEP). In a way, this is a combination of the universal maxim idea from deontology and utilitarianism. Derek Parfit has been working on this stuff for decades. His three-part series On What Matters attempts to combine consequentialism, deontology, and virtue ethics into a single moral theory he calls “Triple Theory”.
Parfit’s Triple Theory is summarized as: An act is wrong if and only if, or just when, such acts are disallowed by some principle that is: (1) one of the principles whose being universal laws would make things go best, (2) one of the only principles whose being universal laws everyone could rationally will.... (3) a principle that no one could reasonably reject. In essence, Parfit argues that all moral theorists are “climbing the same mountain on different sides” in their search for objective moral truth. • Maybe the biggest thing is that I got much more worried about AI risk over the last year. Cliche in this crowd, but you guys got me, I wasn’t expecting it, and I’m not thrilled about it. I went into the year sort of assuming we had about a century and that Stuart Russell had plausibly solved the technical side (in theory at least). I left (not so much because of actual developments in AI as because Yudkowsky’s dramatizing motivated me to do my homework on the field in a way I hadn’t before) thinking we probably have less than 50 years, and Russell is probably wrong even on the broad strokes. I don’t know whether this will cause me to donate directly to AI work or not (I don’t have a good sense of where the best place to donate is, and much of the broader community work seems meta in ways I’m skeptical of), but it’s probably the biggest, most relevant update of my own views this year. • Also, more related to the content of this post, I’m looking at Strong Minds very seriously. I was aware of them and liked their work before, but this year I have been convinced that they are unusually underrated by major granters in the field. • I also worked with Markus during his pilot phase, and his work was extremely helpful: he figured out some technical bits very quickly that would have taken me hours (and that I maybe would never have been able to fix alone).
I was exceptionally grateful for his help and it just gave me so much faith in the EA community that such a service would even exist. Still feel really grateful for all his help. • I’ll speak to question 6, since I am on the community health team, and in particular was hired in large part to work on community epistemics, but am only speaking to the work I’ve done rather than the whole team since I’m newish to the team. (Haven’t done tons of work on this yet, and my initial experiments and forays have been pretty varied, since the epistemics space is really large.) Tl;dr: I think this matters; in and of itself it hasn’t been the top thing on my list, but adjacent/related things have been high priority. (Other CEA teams, including the online (forum), groups, and events teams, have all thought about this as well.) Whether people feel “able” to disagree itself might take some disambiguation—I tried to think a bunch about (1) the intellectual challenge of having an inside view in a world with tons of information and how to make that easier and (2) the emotional difficulty of believing in your own ideas, not falling prey to epistemic learned helplessness, noticing your own intuitions, etc.
When I thought about working on the latter at scale, I thought about: • Modelling thinking out loud, what it looks like when people try to figure things out and show all the messiness, that people others respect a lot have plenty of uncertainties, and trying to make figuring things out more accessible • Talking a lot about the mental and conversational motions I think are great, including those that solicit disagreement • Getting high status people to encourage disagreement Before the FTX situation happened, I had been updating more towards “doing things that don’t scale” and considering things like: • Epistemics coaching / “epistemics therapy” • A residence at a uni group to be a person who could focus on helping people shake up their thinking / get red-teaming on their current ideas / encouragement to think for themselves • Asking a lot of people what helped them think better and think about what social and physical contexts let people really think • E.g. the pros and cons of sharper and softer cultures for this, and whether EA should more explicitly think of itself as an archipelago, where there are different areas for different vibes, and your job is to figure out which one works best for you or move around as needed • I’ve definitely heard that some spaces feel like they privilege only a certain kind of thinking or set of conclusions, and that makes it hard for others to think straight, especially when access to funding / coworking spaces / etc. feels contingent on it. That sucks and is hard. My team has done some thinking about this—I think the current sense is that adding more support is a better move than trying to get people to change how they run their own things, but I am definitely not super sure. And more generally trying to give support to people like group leaders, anyone who is closer to the ground and has more leverage over the social environment.
My guess is a lot of the value of “feel viscerally like you have social support for disagreeing” happens in smaller contexts like that, and I’ve been in conversations with a handful about how they support their groups to think (like, I’m obsessed with this). E.g. my guess is that getting high status people to encourage disagreement is more useful here than it is at scale (but not sure whether it’s so much more useful that it out-does scale). In general people being excited about criticism, saying when they’ve updated and highlighting their favorites seems really great. When I thought about the problem of inside views, I was much more focused on people feeling afraid to even start thinking, and deferring too much /​ more than they endorsed, and trying to make figuring out what’s true easier. I suspect that kind of thing has valuable knock-on effects on “feeling like you’ll have social support to speak up”—personally when I know why I think what I think, I feel much more able to articulate it and fight for it than if I feel much more confused about the world. Maybe the direct “social support for disagreeing” should have been more my focus, I’m not sure. It was definitely on my radar. • I thought “the forum being scary” might end up being a real epistemics problem (though I wasn’t sure it was the top of my list of such problems, and the forum team have worked hard on this). • I think it’s very possible we should have more debates at EAGs and have bid for it. • I was at high school programs tracking in part how pressure-y we were being (and I’m so appreciative to others at those programs who have a lot more experience at it than me and were amazing influences). (In practice, I think people on average are overworried about this in high school contexts rather than under, but it definitely matters.) 
• I also taught a class that involved talking about how to actually make people feel like disagreeing was good (one feedback I got was that we’d done too much to make disagreeing feel like the thing to do and people felt a little pressured to come up with a disagreement!) Julia Wise has also written in part about how to get real feedback, in the context of power dynamics, and there’s a whole world of “how does funding affect epistemics” I haven’t delved into. One thing I don’t want to lose track of is that it can feel shitty to have people disagree that one’s ideas or critiques are valuable or true, and that alone is an emotional and often tracked-as-social or in-fact-social hit. But of course no one wants us to be in a position where as a community we can’t say “I don’t think your critique is any good” or “I want to hire that person less because I think their judgments of ideas have been systematically wrong.” Like, lots of criticism is bad. So it’s tricky. Really appreciative of the agree/​disagree voting system and all the people who say “Thanks so much for voicing your disagreement here” before they say why they don’t buy it. I think those things are great. (Really lovely example here and here). If I may name names, I think Rob Bensinger and Nathan Young are unusually good at this, and I appreciate them for it. I think this is important but hard, and there are a lot of important things in community epistemics. If you have thoughts on addressing this particular thing, I’d love to hear them (noting that in my role, I might decide there are things that are higher priority—but anyone can help community epistemics, I certainly can’t do it alone)! I have a form here. (Also, if people aren’t feeling able to disagree with community builders or anyone else, I’d really appreciate hearing about that—the form can be for that too). 
• 29 Nov 2022 16:40 UTC 19 points 0 ∶ 0 I fulfil my gwwc pledge by donating each month to the EA funds animal welfare fund for the fund managers to distribute as they see fit. I trust them to make a better decision than I will on the individual charities’ effectiveness since I don’t have that much time/​expertise to look into it. I think the long-run future is incredibly important, and I spend my labour mostly on that. But my guess (though I’m pretty unsure) is that my donations do more good in animal welfare than in longtermism-focused things. Perhaps the new landscape should change that but I haven’t made any updates yet. I also admit to donating a bit extra to The Humane League because they are close to my heart and also seem really effective for animals. I don’t think about this that often, and there was part of me that didn’t want to post this because it’s not very rigorous! But also maybe others also feel that way and it feels honest to post. (If other people have takes on where I should donate instead though I’m open to hearing them!) • I downvoted this because it is way too unspecific about what fields this advice applies to, and implies that people should do this for any field of study. But this is not the case. In economics, for example, it’s worth nothing and even frowned upon to reach out to supervisors before you apply. I’m not claiming that is the norm across fields, but the example highlights the danger with giving this advice without caveats. I think it’s very important to describe what fields the authors and people consulted are working in, and what fields they think it applies to. Edit: removed the downvote since it unfairly discounts the actual advice, but I still think the post needs to be much clearer about who the target audience is. • Thanks for the feedback Karthik! Agreed this is very general advice that isn’t applicable in all disciplines, departments, etc. and thank you for pointing out that we didn’t make this clear enough. 
We’ve added a caveat at the beginning that hopefully makes it clearer that this is intended to offer general guidance in contexts in which it is considered appropriate to reach out to supervisors independently, and that it’s important to check (e.g. with a university administrator, and/or people in the same discipline) that it’s appropriate to do so. Hopefully this addresses your main concern. We think your comment also rightly points out the importance of us communicating our confidence in how relevant this advice is for different disciplines, what evidence we’re basing that on, and making it easier for people to judge that for themselves as well. We’ll make some edits to the post asap to try to do this. • This stuff also varies a lot by country. I’m guessing this is UK focused (as the US tends to use “advisor” rather than “supervisor”). • Yes, I like their work! It is great that there are many complementary ways to learn these important topics. Although I have not yet found a good comprehensive playlist for those who want to learn by watching a summary of important concepts. • Thank you for donating 50% of your income, Henry. You could have gone on a holiday or bought something fancy for yourself but you have chosen to help others in need instead. It’s admirable. • 29 Nov 2022 14:48 UTC 30 points 0 ∶ 0 I’m really grateful for this post and the resulting discussion (and I’m curating the post). I’ve uncritically used neuron counts as a proxy in informal discussions (more than once), and have seen them used in this way a lot more. It helped me to draw out a diagram as I was reading this post (would appreciate corrections! although I probably won’t spend time trying to make the diagram nicer or cleaner).
My understanding is that the post sketched out the rough case for neuron counts as a proxy for moral weight as predictors of the grey properties below (information-processing capacity, intelligence, extent of valenced consciousness, and the number of morally relevant thresholds crossed by the organism), and then disputed the (predictive power of the) arrows I’ve greyed out and written on. • Wow, this is really cool, Lizka, thanks! I think it’s a really nice visualization of the post and report. I would say, in regards to the larger argument, that @lukeprog is right that hidden qualia/conscious subsystems is another key route people try to take between neuron count and moral weight, so the full picture of the overall debate would probably need to include that. (and again, RP’s report on that should be published next week). • Neuron count relative to body size (i.e. relative to the average ratio of brain size/neuron count to body size) matters, I think, for intelligence and other capabilities. I think these rather crucial qualifications are missing (and they are quite commonly used within biology). Is that correct? And perhaps that is related to this other key route you mention here. And consequently there is the link between higher intelligence or capabilities (such as self-consciousness) and suffering, for which I agree arguments can be made in either direction. And I agree bare neuron count is a bad proxy, and against that the proposed non-relation to body size can also, I think, be productively employed as a reductio ad absurdum. Cheers!! • 29 Nov 2022 13:38 UTC 2 points 0 ∶ 0 Is this closed off to people in the UK, or can anyone apply? • In light of FTX, I am updating a bit away from giving to meta stuff, as some media made clear that a (legitimate) concern is EA orgs donating to each other and keeping the money internal to them.
I don’t think EAs do this on purpose for any bad reason; in fact I think meta is high leverage. But the concern does give one pause to think about why we are doing this and also how this is perceived from the outside. • Is this subforum hidden from logged out users or no? • Gonna admit that more than giving to GiveWell/EA funds occasionally, I find giving matching pretty intimidating. Also I subscribe to longtermism, but sense that I should put some money into global health in case there isn’t an extinction to possibly prevent. Though perhaps that makes less sense since I can see community-level donations. I guess I’ll think about this at some point, but suggestions are welcome. • This year, I am giving $10K to Charity Entrepreneurship’s incubated charities at their discretion as they know where it will best be placed after all counterfactuals have been calculated. I am giving here for a lot of reasons (CoI: I like them so much I am on the board):

• I think there is a lot of counterfactual value in supporting new EA startups with higher risk profiles, especially within CE, where there is a good rate of growth to GW Top Charity status.

• I like to fund stuff that isn’t getting funded through the normal means to create more diversified funding in the EA space, which I believe is extremely important and more important than ever given the FTX situation.

• It is FUN to read project proposals and be a bit more involved early stage—feels more like venture capital than e.g. giving to AMF (although I wouldn’t begrudge anyone giving to AMF by any means!)

I also gave smaller sums to other organizations this year—numbers rough as I don’t have the donation receipt yet and am too lazy to look it up: $2,500 to a mix of the following charities: • Effektiv Spenden—they have a crazy multiplier on money in → money raised so I think this is a great, leveraged way to donate. • CATF • AMF I gave to CATF and AMF as well mostly to hedge on myself being too meta. I think there is a tough tradeoff between leverage in meta stuff and meta 1) being less clearly linked to actual impact and 2) the fear that donating to meta orgs, where I’ve been more at home over the past 6 years, is more giving to my friends and keeping the money “in the family” than doing actual good. I think meta is still worth it, as evidenced by my donations, but I think this is a concern to take seriously. Finally, I suppose I donate to my own org, High Impact Professionals, by taking a lower salary than I otherwise would as that makes more sense than taking a salary, getting taxed, and then donating back to my own org, at least if you think, as I do, that our org can do more good than the marginal dollar to the German/US government. I am a little bit biased on that one though. • Great post! Check whether the model works with Paul Christiano-type assumptions about how AGI will go. I had a similar thought reading through your article and my gut reaction is that your setup can be made to work as-is with a more gradual takeoff story with more precedents, warning shots and general transformative effects of AI before we get to takeover capability, but it’s a bit unnatural and some of the phrasing doesn’t quite fit. Background assumption: Deploying unaligned AGI means doom. If humanity builds and deploys unaligned AGI, it will almost certainly kill us all. We won’t be saved by being able to stop the unaligned AGI, or by it happening to converge on values that make it want to let us live, or by anything else. Paul says rather that e.g.
The notion of an AI-enabled “pivotal act” seems misguided. Aligned AI systems can reduce the period of risk of an unaligned AI by advancing alignment research, convincingly demonstrating the risk posed by unaligned AI, and consuming the “free energy” that an unaligned AI might have used to grow explosively. Or: Eliezer often equivocates between “you have to get alignment right on the first ‘critical’ try” and “you can’t learn anything about alignment from experimentation and failures before the critical try.” This distinction is very important, and I agree with the former but disagree with the latter. On his view (and this is somewhat similar to my view), the background assumption is more like, ‘deploying your first critical try (i.e. an AGI that is capable of taking over) implies doom’, which is saying that there is an eventual deadline where these issues need to be sorted out, but lots of transformation and interaction may happen first to buy time or raise the level of capability needed for takeover. So something like the following is needed: 1. Technical alignment research success by the time of the first critical try (possibly AI assisted) 2. Safety-conscious deployment decisions when we reach the critical point where dangerous AGI could take over (possibly assisted by e.g. convincing public demonstrations of misalignment) 3. Coordination between potential AI deployers by the critical try (possibly aided by e.g. warning shots) On the Paul view, your three pillars would still eventually have to be satisfied at some point, to reach a stable regime where unaligned AGI cannot pose a threat, but we would only need to get to those 100 points after a period where less capable AGIs are running around either helping or hindering, motivating us to respond better or causing damage that degrades our response, to varying extents depending on how we respond in the meantime, and exactly how long we spend during the AI takeoff period.
Also, crucially, the actions of pre-AGI AI may push this point where the problems become critical to higher AI capability levels as well as potentially assisting on each of the pillars directly, e.g. by making takeover harder in various ways. But Paul’s view isn’t that this is enough to actually postpone the need for a complete solution forever: e.g. that the effects of pre-AGI AI could ‘significantly (though not indefinitely) postpone the point when alignment difficulties could become fatal’. This adds another element of uncertainty and complexity to all of the takeover/success stories that makes a lot of predictions more difficult. Essentially, the time/level of AI capability at which we must reach 100 points to succeed also becomes a free variable in the model that can move up and down, and we also have to consider the shorter-term effects of transformative AI on each of the pillars as well. • A simple yet inspiring post, much like the good work that you have wrought. Good job Henry! • Hello Yonatan and EA community, I am currently exploring the idea of an EA fundraising videogame, with three objectives: (1) fundraising for EA charities, (2) advertising for EA charities, and (3) EA education. I see it as a new way of enlarging the EA community, but it may already have been done and I am simply unaware of it. Are there already EA video games designed by the community? Would someone in the community with experience in building video games be motivated to get in touch with me and start a project? • Hey! I don’t understand what you mean, could you help me visualize it? The closest I know of is Baba Is You (1m video), which advertises in the loading screen that the developers donate to effective charities. Is that what you mean? • “Belfort ended up paying 50% of his future income towards restitution.” No he didn’t!
From Wikipedia [my emphasis in bold]: Restitution Belfort’s restitution agreement required him to pay 50% of his income towards restitution to the 1,513 clients he defrauded until 2009, with a total of $110 million in restitution further mandated. About $10 million of the $110 million that had been recovered by Belfort’s victims as of 2013 was the result of the sale of forfeited properties.[32]

In October 2013, federal prosecutors filed a complaint against Belfort. Several days later, the U.S. government withdrew its motion to find Belfort in default of his payments, after his lawyers argued that he had only been responsible for paying 50% of his salary to restitution up until 2009, and not since. The restitution he paid during his parole period (after leaving prison) amounted to $382,910 in 2007, $148,799 in 2008, and $170,000 in 2009. Following this period, Belfort began negotiating a restitution payment plan with the U.S. government.[33] The final deal that Belfort made with the government was to pay a minimum of $10,000 per month for life towards the restitution, after a judge ruled that Belfort was not required to pay 50% of his income past the end of his parole. Belfort has claimed that he is additionally putting the profits from his U.S. public speaking engagements and media royalties towards the restitution, in addition to the $10,000 per month.[34] Prosecutors also said that he had fled to Australia to avoid taxes and conceal his assets from his victims,[35] but later recanted their statement, which had been given to The Wall Street Journal,[36] by issuing Belfort an official apology and requesting that The Wall Street Journal print a retraction.[37] Belfort also claimed on his website and elsewhere that he intended to request that “100% of the royalties” from his books and The Wolf of Wall Street film be turned over to victims. But in June 2014, spokesmen for the U.S.
attorney said that Belfort’s claim was “not factual”,[38] and that he had received money from the initial sale of the movie rights that was not entirely put towards his restitution repayment.[36] BusinessWeek reported that Belfort had paid only $21,000 toward his restitution obligations out of approximately $1.2 million paid to him in connection with the film before its release.[39] Belfort has stated that the government refused his offer to put 100% of his book deal money towards his restitution.[39][40] I think this is pretty outrageous (he’s paid only ~11% of what he owes, and will likely only ever pay ~12% max), and sends completely the wrong message. I.e. basically he’s still a rich celebrity making lots of money off the stories of his wrongdoing; he only served 22 months in prison; this could easily be interpreted by a lot of people in terms of “financial crime pays off”(!). Who knows, maybe FTX/Alameda were even inspired by it!? • A list I’m considering for end-of-year donations, in no special order: I’m also very interested in the best ways to help people affected by recent events, especially ways which are more scalable / accessible than supporting personal connections. • The issue is that there is no option, other than travel to the US, for the majority of willing UK adults to receive a bivalent vaccine. FWIW if we’re resorting to “vaccine tourism”, it may be possible to get it from Europe rather than the US. I think some people attending the Prague Fall Season were able to buy a vaccine there. • I just wanted to raise a short critique that came to me while reading this section: Investment funds regularly take a management fee (hedge funds, for example, typically take 1–4% of invested funds each year). Whereas the charitable funds we recommend don’t take any fees for their work. While I certainly understand the point, it seems a little bit more justification for why this is a good arrangement is desirable when viewed through an economics lens.
The reason that there are management fees is so that there is an economic incentive for the people running the fund to stay “alive”. In economic terms, ideally, the management fee would be conditional on the profits made using an investment so as to align interests between management team and investors. However, even in cases where there is a simpler arrangement, having economic incentives in place helps to align interests as long as the management teams depends on them. So, I guess my point is, what we are doing here in the donation space seems to be a very trust based arrangement, where we would need to justify the mechanisms that ensure that interests between management and investors remain aligned if the management team does not depend on the fund surviving. I am slightly worried about this after the whole SBF and FTX debacle. There is/​was a lot of good will towards people who seem to have a lot of money and claiming they want to do good with it. How do we make sure that not all of our eggs are in one basket and potential downsides in the case of betrayal or corruption are limited? • I think fees make sense for investment funds because it increases their incentive to make a profit for their customers. But I don’t think a straightforward fee for charitable funds would increase their incentive to have an impact (though perhaps it would increase their incentive to convince donors they are having an impact—but this is still a ‘trust based arrangement’). That said, I take your point about the problems with trust based arrangements! I feel in the ideal world, charitable funds are funded proportional to the quality of their grants. To some extent, this is what already happens (often these funds are themselves funded by a different funder after conducting some kind of evaluation), but it’s often not public. 
I’m hoping that Giving What We Can’s work evaluating the evaluators will help provide additional accountability and help donors make a more informed choice about which funds to trust. • 29 Nov 2022 12:07 UTC 14 points 2 ∶ 0 Sorry if I missed this in other comments, but one question I have is if there are ways for small donors to support projects or individuals in the short term who have been thrown into uncertainty by the FTX collapse (such as people who were planning on the assumption that they would be receiving a regrant). I suppose it would be possible to donate to Nonlinear’s emergency funding pot, or just to something like the EAIF / LTFF / SFF. But I’m imagining that a major bottleneck on supporting these affected projects is just having capacity to evaluate them all. So I wonder about some kind of initiative where affected projects can choose to put some details on a public register/spreadsheet (e.g. a description of the project, how they’ve been affected, what amount of funding they’re looking for, contact details). Then small donors can look through the register and evaluate projects which fit their areas of interest / experience, and reach out to them individually. It could be a living spreadsheet where entries are updated if their plans change or they receive funding. And maybe there could be some way for donors to coordinate around funding particular projects that each donor individually couldn’t afford to fund, and which wouldn’t run without some threshold amount. E.g. donors themselves could flag that they’d consider pitching in on some project if others were also interested. A more sophisticated version of this could involve small donors putting donations into some kind of escrow managed by a trusted party that donates on people’s behalf, and that trusted party shares information with donors about projects affected by FTX. That would help maintain some privacy / anonymity if some projects would prefer that, but at administrative cost.
I’d guess this idea is too much work given the time-sensitivity of everything. An 80-20 version is just to set up a form similar to Nonlinear’s, but which feeds into a database which everyone can see, for projects happy to publicly share that they are seeking shortish-term funding to stay afloat / make good on their plans. Then small donors can reach out at their discretion. If this worked, then it might be a way to help ‘funge’ not just the money but also the time of grant evaluators at grantmaking orgs (and similar) which is spent evaluating small projects. It could also be a chance to support projects that you feel especially strongly about (and suspect that major grant evaluators won’t share your level of interest). I’m not sure how to feel about this idea overall. In particular, I feel misgivings about the public and uncoordinated nature of the whole thing, and also about the fact that typically it’s a better division of labour for small donors to follow the recommendations of experienced grant investigators/evaluators. Decisions about who to fund, especially in times like these, are often very difficult and sensitive, and I worry about weird dynamics if they’re made public. Curious about people’s thoughts, and I’d be happy to make this a shortform or post in the effective giving sub-forum if that seems useful. • Great post, thank you for writing. Definitely makes me re-evaluate my own priors on fraud, and also think about structural risks inherent in companies where: • there is a trading house and fund owned by the same person (as in Bernie Madoff’s case) • the nature of the business and/or investments may not be a Ponzi scheme by nature or have multi-level marketing built in, but where its growth model in effect looks a bit like that; i.e. growth seems highly driven by enthusiasm, is activated through social networks / word of mouth, and the intrinsic value of the business / commodity is subject to a lot of debate (in contrast with e.g.
stock prices of minerals necessary in electronics manufacturing) • Cheers. For what it’s worth, I’m not sure how much one should update one’s priors based on this list, because it’s not clear how many people in finance there are in total (though maybe one could quickly do a back-of-the-envelope calculation here). So I think that this kind of thing is more useful when thinking about what happens once you know there is fraud. • Yeah, I had the same thought too. Though when I said priors I personally did not mean updating quantifiably (i.e. 0.05 --> 0.1); more in the folk sense of priors, or base rates. Also the examples I gave are more about certain features of a business / company that I should be more sceptical about. • Right, what I meant is that you probably shouldn’t update all that much about the frequency of fraud within finance, the same way that you shouldn’t update on how often redheads are evil after reading a list of evil redheads. • Thanks for the thoughtful post. I think you are onto something interesting here! I like the move of trying to frame ethics in pragmatic terms (i.e., focused on what we actually are and could/should be doing rather than on a priori assumptions) and would argue that your argument hits onto something really important in this regard. Imo, there is much to learn from pragmatic philosophy to further elaborate on your insight. Having said that, I am not sure that assigning new meanings to already heavily used terms like “virtue ethics” or “consequentialism” is the right way to go here. Imo, people are bound to be confused by this. Maybe it would make sense to frame it slightly differently by creating a “new model” for ethics that consists of the components you identify and then simply state that “component Y could be informed by prior work on X”, where Y is a useful term for the component, and X is one of the already used terms like virtue ethics. Hope this helps you flesh out this idea further!
Feel free to reach out to me if you want to discuss. • [ ] [deleted] • Thanks for the report! I must admit that I didn’t dig much into the debate, and am only offering personal intuition, but I always found something off with the argument that “more neurons = greater ability to feel pain”. The implication of this argument would be “children and babies have a lower neuron count than adults, so they should be given lower moral value”, as pointed out (i.e. it’s less problematic if they die). And I just don’t see many people defending that. Many people would say the opposite: that children tend to be happier than adults. So I kept wondering why people used that approximation for other species. • [ ] [deleted] • This is a short follow-up to my post on the optimal timing of spending on AGI safety work which, given exact values for the future real interest rate, diminishing returns and other factors, calculated the optimal spending schedule for AI risk interventions. This has also been added to the post’s appendix and assumes some familiarity with the post. Here I consider the most robust spending policies: I suppose uncertainty over nearly all parameters in the model[1], rather than finding the optimal solutions based on point estimates, and again find that the community’s current spending rate on AI risk interventions is too low. My distributions over the model parameters imply that • Of all fixed spending schedules (i.e. to spend X% of your capital per year[2]), the best strategy is to spend 4-6% per year. • Of all simple spending schedules that consider two regimes: now until 2030, 2030 onwards, the best strategy is to spend ~8% per year until 2030, and ~6% afterwards. I recommend entering your own distributions for the parameters in the Python notebook here[3].
Further, these preliminary results use few samples: more reliable results would be obtained with more samples (and more computing time). I allow for post-fire-alarm spending (i.e., if we become certain AGI is soon, we can spend some fraction of our capital). Without this feature, the optimal schedules would likely recommend a greater spending rate. Caption: Fixed spending rate. See here for the distributions of utility for each spending rate. Caption: Simple (two-regime) spending rate. Caption: The results from a simple optimiser[4], when allowing for four spending regimes: 2022-2027, 2027-2032, 2032-2037 and 2037 onwards. This result should not be taken too seriously: more samples should be used, the optimiser run for a greater number of steps, and more intervals used. As with other results, this is contingent on the distributions of parameters. Some notes • The system of equations—describing how a funder’s spending on AI risk interventions changes the probability of AGI going well—is unchanged from the main model in the post. • This version of the model randomly generates the real interest rate, based on user inputs. So, for example, one’s capital can go down. Caption: An example real interest rate function, cherry-picked to show how our capital can go down significantly. See here for 100 unbiased samples. Caption: Example probability-of-success functions. The filled circle indicates the current preparedness and probability of success. Caption: Example competition functions. They all pass through (2022, 1) since the competition function is the relative cost of one unit of influence compared to the current cost. This short extension started from a conversation with David Field and a comment from Vasco Grilo; I’m grateful to both for the suggestion. 1.
^ Inputs that are not considered include: historic spending on research and influence; the rate at which the real interest rate changes; and the post-fire-alarm returns, which are taken to be the same as the pre-fire-alarm returns. 2. ^ And supposing a 50:50 split between spending on research and influence. 3. ^ This notebook is less user-friendly than the notebook used in the main optimal spending result (though not un-user-friendly) - let me know if improvements to the notebook would be useful for you. 4. ^ The intermediate steps of the optimiser are here. • 29 Nov 2022 9:47 UTC 8 points 2 ∶ 0 Thanks for writing this! I’m inclined to agree with a lot of it. I am cautious about over-updating on the importance of earning to give. Naively speaking, (longtermist) EA’s NPV has crashed by ~50% (maybe more, since Open Phil’s investments went down), so (very crudely, assuming log returns to the overall portfolio) earning to give is looking roughly twice as valuable in money terms, maybe more. How many people are at the threshold where this flips the decision on whether ETG is the right move for them? My guess is actually not a ton, especially since I think the income where ETG makes sense is still pretty high (maybe more like $500k than $100k — though that’s a super rough guess). That said, there may be other reasons EA has been underrating (and continues to underrate) ETG, like the benefits of having a diversity of donors. Especially when supporting more public-facing or policy-oriented projects, this really does just seem like a big deal. A rough way of modeling this is that the legitimacy / diversity of a source of funding can act like a multiplier on the amount of money, where funding pooled from many small donors often does best. The Longtermism Fund is a cool example of this imo.
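The “roughly twice as valuable” step can be sketched quickly under the comment’s stated log-utility assumption. The portfolio figures below are hypothetical, purely for illustration:

```python
# Sketch of the log-returns reasoning above: with log utility over the
# community's total portfolio W, the marginal value of an extra dollar
# is d(log W)/dW = 1/W, so halving W doubles the marginal value of
# donations. The NPV figures are hypothetical, not actual estimates.

def marginal_value(wealth: float) -> float:
    """Marginal utility of one extra dollar under log utility."""
    return 1.0 / wealth

before_crash = 40e9  # hypothetical portfolio NPV before the collapse
after_crash = 20e9   # a ~50% crash

ratio = marginal_value(after_crash) / marginal_value(before_crash)
print(ratio)  # 2.0
```

Note the conclusion only holds to the extent log utility is a reasonable model of returns to the overall portfolio; with less sharply diminishing returns, the multiplier would be smaller.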
Another thing that has changed since the days when ETG was a much more widely applicable recommendation is that fundraising might be more feasible, because there are more impressive people / projects / track records to point to. So the potential audience of HNWIs interested in effective giving has plausibly grown quite a bit. • Living in Australia, I’ve always given to orgs that have tax deductibility here in Australia—even though I know there might be better donation opportunities out there, it’s been a bit of a mental blocker for me. But now I’ve managed to internalise the benefit of donating to the charities I think have the highest impact regardless of the tax benefit, so I’ll be donating to StrongMinds and GFI this Giving Season as well as some of the other global health charities I normally support. • Has anyone ever looked into the possibility of donation swapping for tax-favorability purposes? E.g., A and B are tax-deductible in the US, but only B is in Australia. Someone wants to give $1,000 to B, and you want to give $1,000 to A. Can y’all agree to switch so both people can get tax deductions? The parties have to trust each other, but there are potentially ways to facilitate that. I’m not in a position to give legal advice from a US perspective and haven’t researched, but I don’t see any obvious legal hurdles on thirty seconds of thinking about it. • (I realised after I wrote this that the metaphor between brains and epistemic communities is less fruitfwl than it seems like I think, but it’s still a helpfwl frame in order to understand the differences anyway, so I’m posting it here. ^^) TL;DR: I think people should consider searching for giving opportunities in their networks, because a community that efficiently capitalises on insider information may end up doing more efficient and more varied research. There are, as you would expect, both problems and advantages to this, but it definitely seems good to encourage on the margin.
Some reasons to prefer decentralised funding and insider trading I think people are too worried about making their donations appear justifiable to others. And what people expect will appear justifiable to others is based on the most visibly widespread evidence they can think of.[1] It just so happens that that is also the basket of information that everyone else bases their opinions on as well. The net effect is that a lot less information gets considered in total. Even so, there are very good reasons to defer to consensus among people who know more, not act unilaterally, and be epistemically humble. I’m not arguing that we shouldn’t take these considerations into account. What I’m trying to say is that even after you’ve given them adequate consideration, there are separate social reasons that could make it tempting to defer, and we should keep this distinction in mind so we don’t handicap ourselves just to fit in. Consider the community from a bird’s-eye perspective for a moment. Imagine zooming out, and seeing EA as a single organism. Information goes in, and causal consequences go out. Now, what happens when you make most of the little humanoid neurons mimic their neighbours in proportion to how many neighbours they have doing the same thing? What you end up with is a Matthew effect not only for ideas, but also for the bits of information that get promoted to public consciousness. Imagine ripples of information flowing in only to be suppressed at the periphery, way before they’ve had a chance to be adequately processed. Bits of information accumulate trust in proportion to how much trust they already have, and there are no well-coordinated checks that can reliably abort a cascade past a point. To be clear, this isn’t how the brain works. The brain is designed very meticulously to ensure that only the most surprising information gets promoted to universal recognition (“consciousness”).
The signals that can already be predicted by established paradigms are suppressed, and novel information gets passed along with priority.[2] While it doesn’t work perfectly for all things, consider just the fact that our entire perceptual field gets replaced instantly every time we turn our heads. And because neurons have been harshly optimised for their collective performance, they show a remarkable level of competitive coordination aimed at making sure there are no informational short-circuits or redundancies. Returning to the societal perspective again, what would it look like if the EA community were arranged in a similar fashion? I think it would be a community optimised for the early detection and transmission of market-moving information—which in a finance context refers to information that would cause any reasonable investor to immediately make a decision upon hearing it. In the case where, for example, someone invests in a company because they’re friends with the CEO and received private information, it’s called “insider trading” and is illegal in some countries. But it’s not illegal for altruistic giving! Funding decisions based on highly valuable information only you have access to is precisely the thing we’d want to see happening. If, say, you have a friend who’s trying to get time off from work in order to start a project, but no one’s willing to fund them because they’re a weird-but-brilliant dropout with no credentials, you may have insider information about their trustworthiness. That kind of information doesn’t transmit very readily, so if we insist on centralised funding mechanisms, we’re unknowingly losing out on all those insider trading opportunities. Where the architecture of the brain efficiently promotes the most novel information to consciousness for processing, EA has the problem where unusual information doesn’t even pass the first layer. 
(I should probably mention that there are obviously biases that come into play when evaluating people you’re close to, and that could easily interfere with good judgment. It’s a crucial consideration. I’m mainly presenting the case for decentralisation here, since centralisation is the default, so I urge you to keep some skepticism in mind.) There is no way around having to make trade-offs here. One reason to prefer a central team of highly experienced grant-makers to be doing most of the funding is that they’re likely to be better at evaluating impact opportunities. But this needn’t matter much if they’re bottlenecked by bandwidth—both in terms of having less information reach them and in terms of having less time available to analyse what does come through.[3] On the other hand, if you believe that most of the relevant market-moving information in EA is already being captured by relevant funding bodies, then their ability to separate the wheat from the chaff may be the dominating consideration. While I think the above considerations make a strong case for encouraging people to look for giving opportunities in their own networks, I think they apply with greater force to adopting a model like impact markets. They’re a sort of compromise between central and decentralised funding. The idea is that everyone has an incentive to fund individuals or projects where they believe they have insider information indicating that the project will show itself to be impactfwl later on. If the projects they opportunistically funded at an early stage do end up producing a lot of impact, a central funding body rewards the maverick funder by “purchasing the impact” second-hand. Once a system like that is up and running, people can reliably expect the retroactive funders to make it worth their while to search for promising projects.
And when people are incentivised to locate and fund projects at their earliest bottlenecks, the community could end up capitalising on a lot more (insider) information than would be possible if everything had to be evaluated centrally. (There are, of course, more complexities to this, and you can check out the previous discussions on the forum.) 1. ^ This doesn’t necessarily mean that people defer to the most popular beliefs, but rather that even if they do their own thinking, they’re still reluctant to use information that other people don’t have access to, so it amounts to nearly the same thing. 2. ^ This is sometimes called predictive processing. Sensory information comes in and gets passed along through increasingly conceptual layers. Higher-level layers are successively trying to anticipate the information coming in from below, and if they succeed, they just aren’t interested in passing it along. (Imagine if it were the other way around, and neurons were increasingly shy to pass along information in proportion to how confused or surprised they were. What a brain that would be!) 3. ^ As an extreme example of how bad this can get, an Australian study on medical research funding estimated the length of average grant proposals to be “between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information.” (Herbert et al., 2013) Luckily it’s nowhere near as bad for EA research, but consider the Australian case as a clear example of how a funding process can be undeniably and extremely misaligned with the goal of producing good research.
• FWIW: I would also find that info interesting, mainly to see if they got a jail sentence that seems justified, but it’s not essential indeed. • I was commenting because I’ve been curious about it and it seemed like info that would often be present alongside whether or not there was a prison sentence at all, so it seems like it wouldn’t have been much marginal work to collect it on your first pass (though obviously much more work now, unless you were going back through the list for some other reason). There do exist questions about how long people will be sentenced to prison around this (like this Metaculus one), but it also wasn’t obvious to me that you were going for exclusively decision-relevant info—how is jurisdiction of crimes decision-relevant? Though maybe I should have just said interesting rather than useful. • Hey! Don’t forget about “Colossus: The Forbin Project”. It predates “Terminator” by fourteen years and is certainly more realistic in its depiction of the existential risk of AI than is Terminator. Also, as a bonus, it doesn’t need time travel, and doesn’t have silly robot one-liners. • If someone is strongly considering donating to a charitable fund, I think they should usually instead participate in a donor lottery up to say 5-10% of the annual money moved by that fund. If they win, they can spend more time deciding how to give (whether that means giving to the fund that they were considering, giving to a different fund, changing cause areas, supporting a charity directly, participating in a larger lottery, saving in a donor-advised fund, or doing something altogether different). I’m curious how you feel about that advice. Obviously some donors won’t be comfortable with the idea of a donor lottery and they can continue to give directly. I personally remain very excited about the idea of donor lotteries and think it would be healthy for the EA community to use them more extensively.
For example, I think it would be healthy if funds were accountable to a smaller number of randomly selected donors who had the time to investigate more deeply, rather than spending <10% as much time and being more likely to pick based on a quick skim of fund materials and advertising/​social dynamics/​etc. And it seems like there’s no way to escape from that regress by having GWWC evaluate evaluators, since then the donor must evaluate GWWC’s evaluations. From this perspective a donor lottery is really like a “free lunch” that’s hard to get in other ways. Using a fund is similar to using an actively managed investment fund instead of trying to pick individual stocks to invest in: in both cases, you let experts decide what to do with your money. This analogy helps explain the structure of a charitable fund, but it likely understates its benefits. There is also one major way in which it overstates the benefits: for financial investments it is very valuable to diversify across at least dozens of firms and a few asset classes. Evaluating so many investments would take a huge amount of time, and so even if evaluating individual investments was easier than evaluating funds you’d still probably want to invest in a fund. In contrast, a charitable donor needs to find just one charity that they want to support, and so the case for evaluators really rests on it being easier to evaluate an evaluator than to evaluate a charity. That comparison is most favorable for organizations like GiveWell, whose main role is to produce reasoning that would clearly be valuable to an individual donor trying to evaluate a charity. But “evaluate funds” vs “evaluate charities” is more apples-to-apples when you are primarily relying on funder judgment, since you could just as well rely on the judgment of people who run the charities they support. (However the point about charities preferring to engage with fewer big funders is still very relevant and suggests using either a fund or a lottery.) 
• Thanks for the thoughtful comment. I think there’s a strong theoretical case in favour of donation lotteries — Giving What We Can just announced our 2022/2023 lottery is open! I see the case in favour of donation lotteries as relying on some premises that are often, but not always, true: • Spending more time researching a donation opportunity increases the expected value of a donation. • Spending time researching a donation opportunity is costly, and a donation lottery allows you to only need to spend this time if you win. • Therefore, all else equal, it’s more impactful (in expectation) to have a 1% chance of spending 100 hours to decide where $100,000 should go than it is to have a 100% chance of spending 1 hour to decide where $1,000 goes. • And donation lotteries provide a mechanism to do the more impactful thing. Some of these don’t hold for many donors, and there are some additional considerations which undermine the value of lotteries: • Some donors may not feel confident that they can do much better with more time invested. They may even feel averse about the amount of money they’d affect if they won (even if ex ante they influenced $X either way). They stand less to gain from donation lotteries because of this.

• Choosing to donate to a donation lottery is not costless. For example, it may take a similar amount of time/​resources to evaluate which fund they think is highest impact, as it would to understand and trust donation lotteries. This takes away some of the advantage of a donor lottery.

• For some donors, there may be more advocacy potential in giving to a fund supported by a reputable evaluator than in a donation lottery.

• I’d like to flag that I’m a little more reticent about putting too much weight on this consideration. Leaning too much into ‘advocacy potential’ (rather than just doing what’s straightforwardly effective) seems slippery. But I think it’d be a mistake to ignore this consideration.

• A substantial amount of our traffic comes from people who are completely unfamiliar with effective altruism (e.g., people who just googled “Best charities” or just used our “How Rich Am I?” calculator) and I think funds are a better option for most of this audience (though perhaps for EA Forum users it’s a different story, so I really appreciate pushback here!).

Overall, I think if Giving What We Can changed its default recommendation from funds to donation lotteries, we’d be having less impact.

Though we see funds as the best default option, we would like to provide additional guidance on when it makes sense to choose other options. I’ve made a small edit to the version of this post on our website to acknowledge that donor lotteries could be a compelling alternative. My sense is that donor lotteries would be a better option than funds for someone who:

• Understands the arguments in favour of a donor lottery, and also the mechanisms for how it works.

• Would be able to donate cost-effectively if they spent more time on their decision.

• Would be able to spend that time in the event of winning.
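The expected-value comparison in the premises above (a 1% chance of directing $100,000 with 100 hours of research, versus a 100% chance of directing $1,000 with 1 hour) can be sketched in a few lines. All numbers here are illustrative; the 1.5x research multiplier is a made-up assumption, not a GWWC estimate:

```python
# Illustrative sketch of the donor-lottery argument. A lottery ticket is
# expected-value-neutral in money terms, so any improvement in decision
# quality from the extra research time is a free gain in expectation.

def expected_impact(p_win: float, pot: float, effectiveness: float) -> float:
    """Expected impact = P(win) * pot size * effectiveness of the decision."""
    return p_win * pot * effectiveness

# Direct giving: $1,000 decided with 1 hour of research (baseline quality 1.0).
direct = expected_impact(1.0, 1_000, 1.0)

# Lottery: 1% chance of directing $100,000, researched for 100 hours.
# Hypothetically, suppose the extra research makes the grant 1.5x as effective.
lottery = expected_impact(0.01, 100_000, 1.5)

print(direct, lottery)  # same expected money moved; lottery wins in expectation
```

The whole case turns on the effectiveness multiplier exceeding 1 for the winning donor, which is exactly the first premise in the list above.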

For example, I think it would be healthy if funds were accountable to a smaller number of randomly selected donors who had the time to investigate more deeply, rather than spending <10% as much time and being more likely to pick based on a quick skim of fund materials and advertising/​social dynamics/​etc. And it seems like there’s no way to escape from that regress by having GWWC evaluate evaluators, since then the donor must evaluate GWWC’s evaluations. From this perspective a donor lottery is really like a “free lunch” that’s hard to get in other ways.

Speaking personally, I’d also prefer fewer donors conducting deeper investigations of funds than a larger number conducting more shallow investigations. I think this is a very good consideration in favour of donation lotteries.

Speaking on behalf of Giving What We Can: though our work “evaluating the evaluators” will inform our recommended funds and charities (to provide a stronger basis for our recommendations) we are also motivated to make it easier for donors to choose which evaluators and funds they rely on by providing resources on the values implicit in their methodology + pointing to some potential strengths/​weaknesses of their methodology.

Put another way, our vision for next year is to help:

• Provide strong default options for donors, with a reasonable justification for those defaults. (i.e., they’re supported by a trusted evaluator who we investigated).

• Provide the tools for donors to choose the best fund or charity given their values and worldview.

• Thanks for this great response.

I’m really curious about your work to give donors the tools to choose between evaluators based on their values.

One big difference between investment funds and charitable funds is that laypeople can at least evaluate investment funds on some basic metrics, such as market returns versus a benchmark. Both for accountability and for the purpose of aligning charity fund / evaluator choices with values, some further tooling seems valuable.

Do you have any comments on the accountability piece?

Finally, I would add another downside of lotteries, which is that donors need to trust that most participants will have similar values to them, and the knowledge /​ skills to do research. This trust seems easier to grant to evaluators or funds.

• Really interesting post. Not to hijack it, but I didn’t know about the EA Forecasting & Epistemics Slack. Can you point me to info on it or how to join?

• Would any of the funds be able to deploy additional donations quickly to projects that recently lost funding? Or is this an exceptional time where it might be more effective to donate directly to impacted groups, so they can receive donations faster?

• More sympathetic to biosecurity issues than at the start of the year. Pretty convinced there are clear things that would be useful to do and help a lot of people. Plus, the FTX situation cut out a lot of money that went to the general area, such as SBF’s brother’s group, Guarding Against Pandemics.

• How soon will ALLFED be able to actually deploy emergency nutrition?

I don’t know how many additional people died from a lack of nutrition following the invasion of Ukraine, but I’d be a little surprised if it were fewer than direct combat deaths which are probably over 100,000 this year.

Aside from the direct value of saving lives, it’s important to demonstrate that emergency food supplies can be deployed at a substantial scale before a global catastrophe actually happens.

• 28 Nov 2022 23:58 UTC
4 points
0 ∶ 0

Contribute to Wikiciv.org—A wiki for rebuilding civilization’s technology

Ways you can help:

-Write and edit articles
-Research and collect content
-Work on a port of Entitree to make a tech tree visualization

No coding experience needed! Wikiciv has a “What You See Is What You Get” editor: if you can edit a Google Doc, you can edit Wikiciv.

• Just wanted to say that I thought this post was very interesting and I was grateful to read it.

• This is pretty tame compared to the average article that gives EA more than a passing mention.

• I begin by splitting my donations between cause areas: currently, 60% to longtermism and 40% to animal welfare. And then I decide which funds/orgs to give to from there.

This month, that is:

30% Long-term future fund

30% Longtermism Fund (Longview Philanthropy)

30% Animal Welfare Fund

5% Faunalytics

5% Good Food Institute

I’m becoming more comfortable with ‘diversifying’ my donations; 2 months ago I was just giving to the Long-term Future Fund and the Animal Welfare Fund.

For me, I think a big reason I’m starting to diversify is that while I trust that the folks at all these orgs know how to spend money more effectively than I do, it perhaps makes sense to trust multiple teams of experts in case there’s a more general failure at one of them. Hope that makes sense.

• Hi!

I have donated 10 % of my net income since I started working, and lately have been donating to the Long-Term Future Fund (LTFF). To better plan the timing of my giving and investments, I was wondering about how much one should expect the (marginal) cost-effectiveness of the LTFF to vary in the future. Do you have any guesses for the annual variation?

Update: I have just realised Bruce posted a very related question below!

• Hi Vasco, great question :).

There are a few considerations that might be relevant here:

• A lot here hinges on the extent to which donations to the LTFF are fungible with large funders (like Open Philanthropy). To the extent it does funge, then your donation might end up being as cost-effective as their last dollar, regardless of which year you give it.

• Another point: the LTFF at all points likely funds everything above a certain ‘bar’ of cost-effectiveness. But that bar should change based on the best information at the time (i.e., the bar might lower when there is a lot of funding available; it might increase when there’s not; it may also change depending on how ‘on fire’ the world appears to be). I’m much less confident about this point, but it makes me think that, to the extent you trust the grantmakers to be well-informed, you shouldn’t worry too much about the timing of your donation. They always have the option of saving it—I don’t believe they have a requirement to disburse all their grants each year.

• Hi Michael,

Great points, thanks!

A lot here hinges on the extent to which donations to the LTFF are fungible with large funders (like Open Philanthropy). To the extent it does funge, then your donation might end up being as cost-effective as their last dollar, regardless of which year you give it.

I agree donations to the LTFF are fungible with the grants of large funders. That being said, I am not sure we can assume the marginal cost-effectiveness (i.e. the benefits of the last dollar) is uniform across the areas supported by a given large funder. For example, I suspect the marginal cost-effectiveness of the global health and wellbeing side of Open Philanthropy (OP) is lower than that of its longtermist side. If this is true, I think it would be possible for the donations of the LTFF to be fungible with those of the longtermist side of OP, while still being more effective than the last dollar of OP (whose benefits would be lower than those of the last dollar of its longtermist side).

Another point: the LTFF at all points likely funds everything above a certain ‘bar’ of cost-effectiveness. But that bar should change based on the best information at the time

I think it would be nice to have more information about that part of grantmaking process on the LTFF website.

I’m much less confident about this point, but it makes me think that, to the extent you trust the grantmakers to be well-informed, you shouldn’t worry too much about the timing of your donation.

I tend to agree. Basically, we can maybe say the donations made this year to the LTFF are fungible to some extent with the donations made to it next year.

They always have the option of saving it—I don’t believe they have a requirement to disburse all their grants each year.

Yes, I think that would be fine as long as the funds do not increase a lot. For longer saving times, there is also the option of donating to the Patient Philanthropy Fund (which is maybe fungible with the LTFF too). Maybe one relevant question is its annualised return, so that I can compare it to my investment options? I guess it is at least as high as that of the stock market (the inflation-adjusted return of the S&P 500 between 1871 and 2022 was 2.56 % (= (4848.17/106.19)^(1/(2022-1871)) - 1)), but I wonder whether one should also have in mind considerations around mission-correlated investing. I guess I can also beat the market, but I may well be wrong!
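For what it’s worth, the annualised-return figure above can be checked with a few lines of Python. This is just a sketch of the comment’s own arithmetic; the index levels (106.19 for 1871, 4848.17 for 2022, inflation-adjusted) are the ones quoted in the comment, not independently verified data.

```python
# Sketch of the annualised real return calculation in the comment above.
# Inputs are the inflation-adjusted S&P 500 levels quoted in the comment.
start_level = 106.19   # 1871, inflation-adjusted
end_level = 4848.17    # 2022, inflation-adjusted
years = 2022 - 1871    # 151 years

annualised_return = (end_level / start_level) ** (1 / years) - 1
print(f"{annualised_return:.2%}")  # about 2.56%
```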

• Donate 10 % of net income until reaching the desired level of savings, which could be defined as a multiple of the global real GDP per capita.

• Define a flexible level of consumption such that the marginal consumption is as effective as the marginal donations.

• You might want to check out https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ

I pretty much agree that it doesn’t seem optimal to have people trying to drum up hype with a blog post when they think there is an opportunity for high impact. It would be nice to have a site that has thousands of very modular forecasts/impact estimates on things that you can paste together so that people can see the numbers clearly and quickly.

I think this is sorta trying to do that on a less ambitious level.

• 2022 donations in USD:

5k to Maternal Health Initiative (new CE org, family planning in Africa)
5k to Vida Plena (new CE org, mental health in LATAM)
2k to Giving What We Can fund (short and longtermism causes)
2k to Cellular Agriculture Australia (developing cell ag industry)

Donated to own EA projects:

22k to GoalsWon (accountability coaching app, pro-bono for EAs)
1k to EA for Kids (EA storybook creation started)

Overall happy with the year, the best yet for contributions both directly and investing in own EA projects. Also thankful for all the collaboration and advice from the community.

• 28 Nov 2022 21:39 UTC
13 points
1 ∶ 0

My guess is that this would be considered akin to an anticipatory assignment of income and would be charged against the wage earner as income, but I didn’t look at it for more than three minutes. So that is something you would want to run by a tax lawyer before actually doing (standard disclaimer that I can’t give legal advice).

I can think of two other ways you might be able to pull something like this off, although they involve additional complications:

• If everyone in Organization X already donated at least $5,000 per year to charity, Organization X could potentially cut salaries by $3,750 and announce a 3:1 employee charitable matching program (up to $1,250 in employee giving).

• If you (an employee of Organization X) want to contribute $5,000 to Organization Y, and an employee of Organization Y wants to contribute $5,000 to Organization X, you might be able to agree to each petition your employers for a $5,000 pay cut. In theory, one could develop an algorithm to match people who wanted to do this across organizations in any number of combinations.
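The numbers in the first scheme above hang together, which a quick sketch can confirm. All figures are the hypothetical ones from the comment (this is illustrative arithmetic, not tax advice):

```python
# Check the arithmetic of the hypothetical 3:1 matching scheme above.
# Figures come from the comment itself and are purely illustrative.
salary_cut = 3750          # per-employee salary reduction
match_ratio = 3            # employer matches employee giving 3:1
employee_gift_cap = 1250   # employee giving eligible for the match

employer_match = match_ratio * employee_gift_cap
total_to_charity = employee_gift_cap + employer_match

assert employer_match == salary_cut   # the salary cut exactly funds the match
assert total_to_charity == 5000       # matches the $5,000 baseline giving
```

The design point is that the employer is cash-neutral (the cut funds the match) while the same $5,000 reaches charity, now partly as pre-tax dollars.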

• I haven’t given much thought to either of these, but they don’t strike me as assignments of income in the same way as the initial suggestion. Definitely do not try without obtaining actual legal advice from a tax lawyer!

• The common method to mitigate effects of losing the standard deduction—which I use—is to donate nothing in half of the years (drawing the money into a savings account instead), and donate twice as much in the other half. Yes, I mail a number of checks in December and January of odd-number years. Yes, I take a video of myself putting the December ones in a USPS mailbox that is uploaded to the cloud. :)

• You might be interested in:

As I see it, EA is not a single consensus: different individuals reach very different conclusions about resource allocation, as you can see e.g. in the current “Where are you donating this year, and why?” thread, or by comparing Founders Pledge’s Global Health and Development Fund grants with GiveWell’s All Grants Fund.
Also, it seems to me that there are many ideas that people are passionate about, but are often bottlenecked by a lack of implementers (i.e. people willing and able to turn those ideas into concrete projects).
When I see a successful new EA project, it never seems to happen because “EA” reached the conclusion that the project was important and allocated resources to it, but because some individuals developed a theory of change and worked to make it happen.

• This sounds like a great idea! I like the idea for the structure.

• Check out Tom Barnes’ post on Air Pollution, a neglected problem.

• 28 Nov 2022 20:34 UTC
21 points
3 ∶ 0

I really appreciate this post and the conclusion here seems very reasonable. As someone who is personally guilty of using neuron counts as a sole proxy for moral weight, I would love to include additional metrics that more closely proxy something like capacity for suffering and pleasure. However, my problem is that while the metrics mentioned (mirror-test, trace conditioning, unlimited associative learning, reversal learning) might be more accurate proxies, they are (as far as I can tell) not available for a wide variety of species. For me, the main goal of employing these moral weights is to get a framework that decisionmakers can use for evaluating the impact of any project. I am particularly interested in government cost-benefit analyses, where the ideal use case would be to have a spreadsheet where government economists could just plug in available proxies for moral weight and get an estimated valuation for suffering reduction for an individual of a particular species. Neuron counts are nice for this because you can pretty easily find an estimated neuron count for almost any species. With this issue in mind,

1. Are you aware of any papers/​databases that have a list of species for which any of the four recommended factors have been tested and the results? It seems, for example, that scientists make headlines when they find a species that passes the mirror test but I can’t tell which species have “failed” it versus which have not been tested.

2. Other factors that are widely available for many species include brain mass, body mass, brain-to-body mass ratio, cortical neurons, whether the animal has any particular brain/​anatomical structure, class/​order, etc. It sounds like maybe the ratio of cortical neurons to brain size might be a reasonable proxy based on the section on processing speed—would you agree that would be an improvement over just neurons? Do any of these other characteristics stand out as plausible proxies?

• [ ]
[deleted]
• Hi Monica! We hear you about wanting a table with those results. We’ve tried to provide one here for 11 farmed species: https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table

We tend to think that if the goal is to find a single proxy, something like encephalization quotient might be the best bet. It’s imperfect in various ways, but at least it corrects for differences in body size, which means that it doesn’t discount many animals nearly as aggressively as neuron counts do. (While we don’t have EQs for every species of interest, they’re calculable in principle.)
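For readers unfamiliar with the metric: one common formulation of the encephalization quotient is Jerison’s, which compares actual brain mass to the brain mass expected for a mammal of that body size. The sketch below uses Jerison’s classic expected-brain-mass formula and rough illustrative species figures; these are my own assumptions for illustration, not Rethink Priorities’ data or method.

```python
# Minimal sketch of an encephalization quotient (EQ) calculation, using
# Jerison's expected-brain-mass formula for mammals:
#   expected brain mass ≈ 0.12 * body_mass^(2/3)   (masses in grams).
# Species values below are rough illustrative figures.
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

human_eq = encephalization_quotient(brain_g=1350, body_g=65_000)  # roughly 7
mouse_eq = encephalization_quotient(brain_g=0.4, body_g=20)       # roughly 0.5

# Unlike raw neuron counts, EQ corrects for body size, so small-bodied
# animals are not discounted nearly as aggressively.
```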

Finally, we’ve also developed some models to generate values that can be plugged into cost-benefit analyses. We’ll post those in January. Hope they’re useful!

• Thank you, this is very helpful and I definitely agree that EQs are available/​practical enough to use in most cases. Really looking forward to seeing the new models in January!

• FYI: people usually say “modest proposal” for things they think are actually bad ideas, in the tradition of Swift’s satirical https://en.wikipedia.org/wiki/A_Modest_Proposal

• Would these relinquishments be seen by governments as actually income?

I’m not an expert on this, but when I’ve looked into this before I’ve been told it probably would count as income as long as (a) the amount to relinquish was up to the employee and (b) the employee had influence on where the money was donated.

• Re: more neurons = more valenced consciousness, does the full report address the hidden qualia possibility? (I didn’t notice it at a quick glance.) My sense was that people who argue for more neurons = more valenced consciousness are typically assuming hidden qualia, but your objections involving empirical studies are presumably assuming no hidden qualia.

• We have a report on conscious subsystems coming out I believe next week, which considers the possibility of non-reportable valenced conscious states.

Also (speaking only about my own impressions), I’d say that while some people who talk about neuron counts might be thinking of hidden qualia (eg Brian Tomasik), it’s not clear to me that that is the assumption of most. I don’t think the hidden qualia assumption, for example, is an explicit assumption of Budolfson and Spears or of MacAskill’s discussion in his book (though of course I can’t speak to what they believe privately).

• it’s not clear to me that that is the assumption of most

Thinking that much about anthropics will be common within the movement, at least.

• [ ]
[deleted]
• 28 Nov 2022 20:12 UTC
6 points
0 ∶ 0

Thanks for posting this!

At a skim, this looks related to Passing Up Pay by Jeff Kaufman, and Should effective altruism have a norm against donating to employers? by Owen Cotton-Barratt. I don’t remember what was in those posts, exactly, but imagine that readers who find this interesting might also find the discussion on those posts useful.

• I’m pretty sympathetic to patient philanthropy for longtermist causes that aren’t to do with nearterm Xrisks, because my view is that as long as we preserve option value for the future, they will likely be better placed to use the resources than we are, so we should just save the pool for them to use as they see fit.

The example I usually give when explaining my position is thinking about polio and the iron lung. Say someone in the 1910s wanted to invest in significant iron lung production facilities to make sure polio would never be a problem in the future. 20 years later, the polio vaccine is created and all this investment is obsolete. If that money was saved it could perhaps be used to speed up the distribution of polio vaccines and help eradicate polio etc.

One uncertainty I have about this though, is that I don’t know how to implement this in practice (what % to give later vs give now? How do I know when I should use this pool?). Curious about any takes!

• Hey Bruce, these are some great considerations!

The Patient Philanthropy Fund (PPF) is a fantastic option if you find the arguments behind patient philanthropy compelling. In my view, one of the biggest arguments against patient philanthropy is the idea that, in practice, you may fail to donate the money after all. I like that the PPF removes you from the equation here. I also like that there are (what seem to me to be) reasonable governance mechanisms to ensure that the money will end up being donated.

That said, I don’t have a strong view about the merits of patient philanthropy compared to giving now. You can read some of the arguments here. I (very tentatively) take the view that on the margin, philanthropists are already saving too much, and are failing to sufficiently scale up their giving. This makes me think that marginal patient philanthropy is less cost-effective than marginal donations. But… I’m not sure this is the right way to think about this. There could be something different about the PPF (which is saving intentionally, and with an attempt to do so wisely) compared to most philanthropists who are saving more haphazardly.

You mentioned something else—whether to save some % and give some % now. I think that’s a good question. My hunch here is that it’s exceedingly unlikely that a mixed portfolio is maximising expected value. Happy to say more about this if you’re interested, but this has been a long comment already :) thanks for the great points.

• 28 Nov 2022 20:06 UTC
14 points
0 ∶ 0

My husband and I are planning to donate to Wild Animal Initiative and Animal Charity Evaluators; we’ve also supported a number of political candidates this year (not tax deductible) who share our values.

We’ve been donating to WAI for a while, as we think they have a thoughtful, skilled team tackling a problem with a sweeping scale and scant attention.

We also support ACE’s work to evaluate and support effective ways to help animals. I’m on the board there, and we’re excited about ACE’s new approach to evaluations and trajectory for the coming years.

• Hey team!
I’d love it if someone could give me a TL;DR on donation matching—it’s something I always get a bit confused about, in terms of “how much more should I donate because of this?”. And someone asked in a Slack I was in about counterfactuals, which I realised I didn’t know about either—how else is the money usually used?

Also, does anyone know what the optimal % split between donating to a matching pool vs donating to the charity (am I basically trading off between how much a matching pool actually increases the pie VS the money not being donated?), and how does this change if the org is fully EA funded vs partially vs not at all etc?

1. Most matches are of the free-for-all variety, meaning the funds will definitely go to some charity; it’s just a question of who gets there first (e.g. Facebook & Every.org). While this might sound like a significant qualifier, it’s almost as good as a pure counterfactual unless you believe that all nonprofits are ~equally effective.

2. The ‘worst case’ is a matching pool restricted to one specific org, where presumably the funds will go there regardless, so the match doesn’t really add anything to your donation.

3. Conversely, as Lizka noted, even the best counterfactual match only makes sense in theory if the recipient org is at least half as effective as the best charity you know of.

4. I’m not sure I fully understand the last question. It sounds like you’re referring to a matching pool specific to one charity, in which case there’s no downside, but it could be quite different if the pool covers a wider array of nonprofits.

• Mostly agree but I think this overstates it a bit:

it’s almost as good as a pure counterfactual unless you believe that all nonprofits are ~equally effective

It would only be ‘almost as good as a pure counterfactual’ if you think the charity the other people (other than you) would be choosing is likely to be far less effective than the one you are choosing.

My rough belief is that this is usually the case when the other people exploiting this match would be donating to a ‘wealthy country’ charity (e.g., US poverty), to a ‘pets’ charity (cat rescue etc), or (especially) to a ‘luxury/​cultural’ charity (like a university, opera house, etc.)

If this is in a context in which the other people are likely to donate to a mainstream global health and development charity like UNICEF, this case is less clear. We really don’t have good metrics to judge the effectiveness of charities like that.

For mainstream research charities (cancer research, etc.), I’m even less sure, but I would lean towards ‘these charities are probably far less effective than a GiveWell/​ACE/​etc charity’

• Yeah I think we’re on the same page, my point is just that it only takes a single-digit multiple to swamp that consideration, and my model is that charities aren’t usually that close. For example, GiveWell thinks its top charities are ~8x GiveDirectly, so taken at face value a match that displaces 1:1 from GiveDirectly would be 88% as good as a ‘pure counterfactual’.
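The 88% figure follows directly from the stated 8x multiplier. A minimal sketch, assuming (as the comment does) that the top charity is 8x as effective as GiveDirectly and that the match displaces money from GiveDirectly one-for-one:

```python
# Sketch of the ~88% figure: a match that pulls $1 away from GiveDirectly
# and sends it to a top charity assumed to be ~8x as effective.
top_charity_multiple = 8  # effectiveness relative to GiveDirectly

pure_counterfactual_value = top_charity_multiple       # +$1 of genuinely new money
displacing_match_value = top_charity_multiple - 1      # GiveDirectly loses $1

fraction = displacing_match_value / pure_counterfactual_value
print(fraction)  # 0.875, i.e. ~88% as good as a pure counterfactual
```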

• I’m definitely not an expert, but I’ll chime in:

• Here’s the Forum wiki page on donation matching, which collects lots of related posts.

• Presumably the best case for a donation match is a match such that:

• The “matcher” would not use the funds altruistically if they don’t spend them on the match (see also What should “counterfactual donation” mean?)

• The match’s limit won’t run out (at least, without your involvement), or others would use the match less effectively (e.g. if there’s $100 in matching funds going to any charity, and you think others might direct the funds to PlayPumps) • Where the donations are not restricted to a particular project that’s less than half as effective as where you’d be donating otherwise (e.g. you were going to donate$100 in total, and you’d go for GiveWell by default, and someone says they’ll match you for $100 if you give your$100 to PlayPumps — you should probably not do this).

• [There are probably other criteria that I’m not thinking about right now!]

• [ ]
[deleted]
• I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.

I’m still in the process of understanding what happened, and processing the new information that comes in every day. I’m also still working through my views on how I and the EA community could and should respond.

I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the right call, and will ultimately lead to a better and more helpful response.

• 28 Nov 2022 18:55 UTC
5 points
0 ∶ 0

2 small donations through Effektiv Spenden.

• Their climate change fund—according to their description, this adds money to the organizations recommended by Giving Green and Founders Pledge. I don’t prioritize climate change as a cause area, but I give a fixed amount per year to climate charities* and Effektiv Spenden supports this one. Why? I do believe climate change is a big problem. Many people feel helpless about climate change, and by donating to a climate charity I can signal that there is a way to actually help—beyond consumption choices. This is also a donation I might be able to talk openly about.

• Their animal welfare fund—mostly ACE-recommended charities. The animal welfare movement is quite funding constrained (I’ve heard from people at ACE that recommended charities rarely (or never?) get their funding gap** filled completely) and evidence-based animal welfare is a new and growing field.

Unfortunately I will not move a lot of money this year, nor will I spend a lot of time thinking about my donations. But I am happy that I can do at least this little bit.

* I thought that, if everyone with an income similar to mine did this, the climate would be in a better state, but I was wrong. I quickly fact-checked this. This article on nature.com says “The UN’s Intergovernmental Panel on Climate Change (IPCC) says that an annual investment of $2.4 trillion is needed in the energy system alone until 2035 to limit temperature rise to below 1.5 °C from pre-industrial levels.” I understand from the article this includes funding from governments and companies. I am not going to disclose my income and my donation budget here, but I can say that my donation is much less than a fair share of this $2.4 trillion. (It may be enough, if my donation is unusually cost-effective.) Apparently it’s damn hard to fix climate change.

** There may be a difference between the funding gap that the org believes it has itself, and the funding gap that ACE thinks the org has. I mean the latter.

• Interesting to think about! But for this kind of bargain to work, wouldn’t you need confidence that the you in other worlds would uphold their end of the bargain? E.g., if it looks like I’m in videogame-world, it’s probably pretty easy to spend lots of time playing videogames. But can I be confident that my counterpart in altruism-world will actually allocate enough of their time towards altruism? (Note I don’t know anything about Nash bargains and only read the non-maths parts of this post, so let me know if this is a basic misunderstanding!)

• Great question—you absolutely need to take that into account! You can only bargain with people who you expect to uphold the bargain. This probably means that when you’re bargaining, you should weight “you in other worlds” in proportion to how likely they are to uphold the bargain. This seems really hard to think about and probably ties in with a bunch of complicated questions around decision theory.

• Very small note that Faunalytics is spelled two different ways in the article. The correct spelling is with the “y.”

• Thanks.

• Social Change Lab (the non-profit I run) has a matched-funding opportunity where the next $50,000 we receive will be matched by an anonymous donor. It’s looking very likely we won’t meet this in the allotted time period (until January), so this will likely be a counterfactual donating opportunity. You can donate via Giving What We Can if you’re interested!

• In the last 12 months, James Ozden’s work on social change appears almost entirely designed to lobby and obtain EA resources for his “social movement work”.

Ozden’s “non-profit” Social Change Lab is a website with his content, and has no registration and little activity besides this meta EA work.

Ozden took 40K of EA infrastructure funding, which he then used to produce extremely long articles, whose length conceals that these are self-reviews on the promisingness of his own project. He also used this as a springboard to network and publicize his associations with EA funders.

Ozden does a variety of activity online, which invariably aggrandizes or promotes his own projects, and his work is often low effort or quality, and competes with others.

Ozden is not an EA, because he set his goals at the outset, does not update or acknowledge new information, and his activity has not been demonstrated to be promising as a cost-effective intervention. If successful, the resources and status he would obtain from EA would give him great status and power in his external community, which is almost certainly his main goal. This would also give him further incentives for his meta EA work.

This was extremely disappointing to some of us who supported him and looked forward to interesting and promising work on social change. This is not only bad because it is not truth-seeking, but also because it takes up the space for deep analysis of social change.

Ozden has instincts for managing appearance and navigating social movements. His attempts to position himself in “meta” EA or meta animal welfare is problematic.

• I think you need to provide a lot more substance behind the claims you’ve made here.

If you think the problem is the quality of research, my suggestion is to give James some feedback, or to publish some substantive pushbacks of his work.

If you think his intentions are in question, then I’d appreciate documentation of this or at least some indication of the strength of evidence you have but can’t share so third parties know this isn’t just some vibe you’re getting.

I didn’t downvote, but if someone thought your claims were far too strong for the evidence provided and thought it represented a degradation of epistemic norms on the forum, or was unnecessarily unkind, it could be a good enough reason for a downvote. (At time of writing it has −4 karma over 4 votes, but +1 agreement vote over 3 votes)

• *sigh*

This comment is pretty disheartening to see. A lot of this is inaccurate, but I’ll only reply to a few key things as I’m not sure this will be the healthiest or best use of my time:

1. I’m not sure why you think Social Change Lab is “just a website”. We’re an org with 3 employees (2 of whom started today, hence not on the website yet), and we’re also a registered non-profit in the UK via Companies House.

2. The point about me not being an EA is particularly bizarre, given that I literally just wrote a piece on my personal blog defending EA.

3. Producing extremely long articles... yes, that is the culmination of various research projects! We also have short pieces, and pretty clear summaries. Would you rather we take donations and not do any work? Other orgs produce long reports, so I’m not sure why ours are so much worse!

4. The other claims are mainly ad-hominems about my supposed bad intentions, so I’ll leave those be. For those that know me, or have interacted with me in various ways, I’m sure it’s pretty clear that “earning great status and power” etc is not my main goal.

FWIW there’s a reason I left doing social movement organising—it’s because I was sceptical of its effectiveness! If I was so sold it was the right thing to do, I would be trying to get funding for that, rather than funding for research to figure out if it’s actually overall helpful or harmful.

• (Bumping so this is seen).

I point out that, in addition to the challenge of opposing this content as a volunteer, against people whose full-time job is to be influencers who have colonized the space and take EA money to do so, it is wrong to downvote simply because you do not like the content.

The issue is that this behavior will entrench itself, and is one critical root cause of why online EA space is low quality.

• [reconsidered and figured this is not the best place for this post but I can’t delete!]

• Can you give information on the “source” of these matching funds? Is this a grantmaker, external funder or arms-length donor to you?

• Cool to see EA orgs investing in digital tools :)

Should be great for fundraising but even better for advocacy and spreading awareness 🔸

• Really excited to see progress on subforums. They could be pretty key for scaling the EA Forum.

This might be obvious, but I wonder if all of the FTX discussion could be pushed into a certain subforum.

• While my work is focused mostly on longtermist interventions, most of my donations for the first half of the year were to Givewell as unrestricted donations. I also did a smaller amount of political giving to EA-aligned candidates, which was partially from my 10% of income dedicated to EA giving. (I split those political donations 50-50 between my bucketed EA spending and my personal spending.) I also gave a small amount to MIRI via every.org.

I have not yet donated all of my 2nd-half-of-2022 donations.

• Hi Jobst,

As far as I can grok, this seems to work (but I can’t grok very far). Maybe you could have a chat with Frederik van de Putte, who’s also working on collective decision making, and who even tried his own hand at formalizing the veil of ignorance.

• This year I’m planning to meet with a friend to make a shared donation decision.

Before meeting up I want to create a longlist of potential places to donate to. I’m open to charities in any of the main EA cause areas, as well as evaluators /​ funds, and smaller projects that aren’t yet established.

What are good places to crowdsource a list from?

• [ ]
[deleted]

Thanks for the links, this is interesting. I am not sure I will dig into this topic right now but I might at some point when looking into ecological collapse, so this might provide a good start.

I was aware of the huge impact of discarded fishing gear on plastic, but probably neglected other impacts you mentioned.

But that would mean doing everything right. Not the usual for us humans.

Well put, exactly the problem. Every time there is an issue, whether social or ecological, someone says “yeah but we can do that to solve the problem”. But we don’t. That’s the issue.

• +1 to the Effective Giving subforum!

Suggestion: Add an “Effective Jobs” subforum—where people can discuss the impact of working at different orgs.

For example, I considered working at Zzapp Malaria, it would be nice if others could comment on this idea in some way that would be lower friction than writing a detailed post.

Or maybe someone’s considering working at Tesla, or Solar Edge.

I think there are lots of advantages to having the discussion public and inviting for anyone to ask (or answer), including the org itself, as opposed to having every person do their own research alone. It’s especially bad when people new-to-EA ask where they can work to help and all we have to say is “here are materials on how to evaluate impact” but “we have almost no suggestions for concrete jobs in your area”.

I have lots of thoughts on why this could be great (like these), but I’ll leave it here for now.

wdyt?

• Plan is

The total will be about 10% of my annual income.

the ^ is my approximate probability distribution about what the “right” cause is

• Someone who donates to DxE, very cool! I spent some time with DxE folks during the AVA summit in DC and know a few fairly well—they’re a really committed and great bunch :) plus amazing win with Wayne & Paul getting acquitted if you saw it!

• Very cool! I did see that and I’d say that was one of the most important wins for reducing animal suffering this past year, which is pretty strong evidence to me that they are an effective charity in the sense of ‘gets stuff done’

• 28 Nov 2022 13:53 UTC
7 points
0 ∶ 0

This seems really cool — thanks for setting it up!

• Hi OP,

Really interesting post! As your comment on my own post about funding an SMC replication suggests, we’re substantively on the same page here about the desirability of furnishing new evidence, relative to reading the tea leaves of existing murky evidence, when the stakes are high enough.

Has your thinking on this changed in light of the recent preprint from Walker et al. showing strong long-term results?

• FAQ: How to find a really good software job? (high pay, fun, skill building, and maybe optimizing for something else too)

This continues Plan B.

Things I will not answer in this comment:

1. How to get accepted to 1-2 specific companies (like “I want to work at Google or Microsoft”)

2. How to get a high impact job (Plan A)

If you’d be searching for a job and you’d sit with me as a friend and ask me “hey, you spoke to over 100 developers and saw some of how their job search went, what’s the secret element?”, then I would answer you, as a friend, that I can guess how well a certain job search will go according to a single KPI: “How many jobs does the person apply to”. I’d quickly add “I know it’s annoying to apply to lots of jobs, please don’t run away”, and if our conversation would be one of those conversations-that-go-well, then we would speak about all the bad things about applying to lots of jobs and find solutions to them, or work around them, or something.

If, alternatively, you’d say “oh interesting sounds right” and not give me any of the many pushbacks you’re experiencing, then I’d know this is one of those conversations that goes badly, and I’d try to think what I could do differently next time. Maybe I can go meta.

How does applying to lots of places help salary?

1. You can check how much the market is willing to pay you. Experiment. If an org says yes—ask for more next time. If they say no—ask for less. Roughly.

2. Negotiating is really hard when you have no alternative (or don’t know what your alternatives are), but really easy when there are good options lined up.

How does applying to lots of places help building skill?

You can check* how much mentorship (or other relevant skill-building properties) many workplaces have, and pick the best (or one of the best). (*It’s hard to know how good this metric is without talking to the hiring manager.)

How does applying to lots of places help [some other unusual property I’m looking for]?

For almost everyone who asks me this, the unusual-property is something that is easier to find out when talking to the hiring manager.

Yeah, but applying to lots of jobs is stressful /​ time-consuming /​ my-employer-will-know /​ how-can-I-know-the-next-job-will-be-perfect /​ what-if-I-am-rejected

If your response is something like that, please let me know. It’s easier for me to write things if I know it’s helping someone specific.

• 28 Nov 2022 13:15 UTC
2 points
1 ∶ 0

Thanks for taking the time to share your thoughts and feelings on this important issue. I would agree that some posts have displayed some self-interested reasoning in an attempt to justify keeping all the money. However, I don’t think it’s fair to characterize those posts as representing some sort of community consensus.

My reaction to your post is, I think, fundamentally the same as my reaction to certain posts veering toward self-interested reasoning, just in a different direction. What is your broader theory of which individuals and organizations (not just in EA) need to return what money, and when? Does FTX’s janitor need to figure out how much they were paid and return it all because it was “dirty money”? If not, why are grantees in a different moral position? Does your theory rest on certain factual assumptions, such as a belief that FTX was rotten from day one and that everything it touched was “dirty”?

Although I think a lot of the money should be returned, I think it is also quite a bit more complex than “everyone is ethically required to return everything.”

• Again with the janitors. It raises eyebrows to see the efforts to compare FTX grant recipients with janitors. It’s an attempt at misdirection and diffusion of responsibility: “If I can say that everyone who got FTX money is the same, then I can point to a sympathetic person and say I’m the same—how about a janitor? How about a janitor who used the money to pay for their mom’s life-saving operation?”

The answer is that grant recipients are not janitors. Grant recipients received gifts of money from FTX; they did not scrub toilets or empty trash cans. The janitors can figure out what they want to do, and what they do or don’t do has no bearing on what the grant recipients should do, which is to return the money (at least the unspent portion).

• I think this is a potentially stronger argument than the one in the original post, which decried all FTX money as “dirty” and said everyone who had it should return it. However, you’re making an assumption that the grantees received gifts, rather than advance compensation for work to be performed. I don’t think that assumption is correct for all or even most grants. There were grant contracts; I don’t think FTX gave Joe Smith $50K free and clear to do whatever he wanted with the money. As I understand it, most of the contracts obliged Joe Smith to provide $50K worth of research labor for that $50K grant. If Joe Smith spent time conducting the research (when he could have been working for someone else), I think his status is similar to that of the janitor.

As I’ve said in several posts, I generally agree that unearned/unspent monies should be returned—I was responding to an original post claiming that 100% of the funds should be returned in every case. I wrote about a hypothetical janitor for two reasons. First, I didn’t feel the original comment explained why it was OK for anyone to keep any money ever received from FTX. It is fair to push someone’s stated policy position to its full logical extent. In response, you’ve qualified the original position and apparently clarified that it’s OK for someone to retain money paid for work they had already done. The question served its purpose of helping clarify the argument.

Second, one could argue that certain senior FTX employees should give back already-earned funds because they were negligent in not detecting that a fraud was afoot. I think that argument is likely wrong, but using a blue-collar employee as the comparator avoids wading into that possibility.

• You make some persuasive points. Imagine person A steals your car. Person A then gives the car to Person B, who does not know at that point that the car is stolen.
B is going to use the car to deliver food for a food bank or some other good purpose—A nods and gives B the car. The police then find B has your car, and tell B that the car is stolen and belongs to you. B starts looking at his shoes and shuffling his feet. He mumbles that he filled up the gas tank, and it was only half full when he got the car. Maybe he even put some new wiper blades on it. He keeps repeating that he didn’t know the car was stolen when A gave it to him, and that he was supposed to use it for good purposes. It dawns on you that B is not going to return your car. B is straining to think up excuses to keep your car. That’s how the FTX grant situation looks from the outside.

Grants are gifts. They are, as you say, up-front payments, and in return you say you’re going to use them for some purpose, but it’s not an economic benefit to the grantee. It’s like a rich donor giving a university money to build a sports stadium—it’s a gift even though the university does have to use it to build the stadium.

In the scenario, B should return the car to you. If he doesn’t, he may be committing the crime of refusing to return known stolen property.

• I think there are some grantee situations in which your car metaphor generally makes sense. But I think there are others that look more like this: Thief steals $3000. Thief gets in a car crash. Thief takes the car to Innocent Mechanic, who spends significant time and resources repairing the car pursuant to a contract with Thief (without knowledge that the money was stolen). Thief pays Innocent Mechanic with the $3000 and picks up the car. Thief is caught, has a heart attack (crashing the car, which is now worthless), and dies without a penny to his name. Innocent Victim comes in and demands the $3000 back. Innocent Mechanic asserts the right to be compensated for the work he performed in good faith and without knowledge that his fee was stolen.

Either Innocent Mechanic or Innocent Victim is going to get unfairly screwed here. Assuming he is actually innocent, I think it is OK for Innocent Mechanic to keep the money. Although I feel bad for Innocent Victim, it’s necessary for the smooth functioning of society that workers are confident they will be able to keep fair wages for the work they performed. That’s why mechanics’ liens exist, for instance, and why unpaid wages get priority treatment in bankruptcy. So if you (1) told me this story, (2) told me there was a 1/10 chance I was Innocent Mechanic, a 1/10 chance I was Innocent Victim, and an 8/10 chance I was a random member of society, and (3) made me decide who should suffer the loss—I would have said Innocent Victim. That has nothing to do with what I think of the merits of the grants at issue here.

Just saw your clarification—I don’t think it matters that there was no economic benefit to the grantor; the detriment to the grantee is sufficient to establish the grantee’s legitimate interest in retaining the money (to the extent of that detriment). Charities serve important social functions. While I do not generally think charities should get privileged status compared to other transferees, I generally don’t think they should get inferior status either. Hence my inclination to treat them like other vendors here.

• Meant “not an economic benefit to the grantor”

A:

1. How I see a typical EA software career (inspired by 80k, and I agree): it is vaguely split into two halves.

In the first “half”, gain skill.

In the second “half”, have direct impact.

The subtext is: Most of your impact will come from the second “half”.

2. If you agree, then—in the first half I wouldn’t optimize for as-high-impact-as-possible, I’d optimize for building skill, and I’d assume that building skill is the most effective way to have long-term-impact (by reaching the second “half” early). If that seems true for you, see Plan B. Building skill, having fun, and making money—can often go together in software, and I think this is what many people should aim for.

3. When to move to work directly?

It’s hard for most of us to estimate our own skill, so what I recommend is “apply to high-impact jobs sometimes (every 6–12 months?)”. See if you’re accepted. If you are, it’s time for Plan A. If not, keep having fun with Plan B. The point of applying isn’t “getting accepted”, it’s more about “having the habit of applying sometimes so you don’t ‘waste’ years of having direct impact by building skill that you already have” (this is a common mistake EA devs make, and this habit-of-applying-sometimes is my current best idea for how to solve it).

Part of the reason I think this is a good idea: If you know you’re not “missing your chance” of doing something really high impact (because you’re applying sometimes) - then I think it’s easier to go “all in” to a job that is focused on skill building (and fun and probably money).

• “Applying” to dozens of jobs in less than an hour (combined!)

TL;DR: Post in social media that you’re looking for a job.

What I’d write:

• 1-2 lines on what you’re looking for

• 1-2 lines on what you’re able to do (not to be confused with what you’re looking for)

• Optionally: 1-2 lines on what you don’t want people to contact you about at all. For example, “I only want to start working in 2 months, so please only invite me to interview if that’s ok”

• Ask to share with relevant people/​orgs that might want to hire you

This trick doesn’t work for everyone, but it seems super cost-effective if it does fit your situation.

• TL;DR:

Reasons to think that “neuron count” correlates with “moral weight”:

1. Neuron counts correlate with our intuitions of moral weights

2. “Pains, for example, would seem to minimally require at least some representation of the body in space, some ability to quantify intensity, and some connections to behavioral responses, all of which require a certain degree of processing power.”

3. “There are studies that show increased volume of brain regions correlated with valenced experience, such as a study showing that cortical thickness in a particular region increased along with pain sensitivity.” (But the opposite is also true. See 6. below.)

Reasons to think that “neuron count” does NOT correlate with “moral weight”:

1. There’s more to information processing capacity than neuron count. There’s also:

1. Number of neural connections (synapses)

2. Distance between neurons (more distance → more latency)

3. Conduction velocity of neurons

4. Neuron refractory period (“rest time” between neuron activations)

2. “There’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence”

3. “It seems conceptually possible to increase intelligence without increasing the intensity of experience”

4. Within humans, we don’t think that more intelligence implies more moral weight. We don’t generally give less moral weight to children, the elderly, or the cognitively impaired.

5. The top-down cognitive influences on pain suggest that maybe intelligence actually mitigates suffering.

6. There are “studies showing that increased pain is correlated with decreased brain volume in areas associated with pain”

7. Hundreds of brain imaging experiments haven’t uncovered any simple relationship between quantity of neurons firing and “amount of pain”

8. Bees have small brains, but have “cognitive flexibility, cross-modal recognition of objects, and play behavior”.

9. There are competing ideas for correlates of moral weight/​consciousness/​self-awareness:

• Thanks for doing this. Post is too long, could have been dot points. I want to see more TL;DRs like this

• Not sure I agree with the “TL” part haha, but this is a pretty good summary. However, I’d also add that there’s no consensus among people who study general intelligence across species that neuron counts correlate with intelligence (I guess this would go between 1d and 2) and also that I think the idea that more neurons are active during welfare-relevant experiences is a separate but related point to the idea that more brain volume is correlated with welfare-relevant experiences.

I’d also note that your TL;DR is a summary of the summary, but there are some additional arguments in the report that aren’t included in the summary. For example, here’s a more general argument against using neuron counts in the longer report: https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit#bookmark=id.3mp7v7dyd88i

• Thanks for feedback.

Not sure I agree with the “TL” part haha

Well, yeah. Maybe. It’s also about making the structure more legible.

there are some additional arguments in the report that aren’t included in the summary.

Anything specific I should look at?

• Anything specific I should look at?

My link above was to a bookmark in the report, which includes an additional argument.

• What is a bad guy? It is not well defined. You can take this as a naive question, or as a philosophical one. In the eyes of children, the world is black and white; in the eyes of sages, it is different. “Everyone can become Yao and Shun” is a famous saying of Mencius, but Xunzi held that human nature is inherently evil. Whether it is inherently good or inherently evil, we don’t have to get too entangled; let’s start from the simplest place, from the perspective of human ethics and morality, and see what a bad person is.

I still think you should put a summary at the start of your articles, using bullet points and the like. Some people won’t go past the summary and will still criticize, but they weren’t going to read the full thing in the first place. I think many people find a summary helpful because it lets them know whether the article is worth checking out and has interesting themes and insights.

Plus I find summaries helpful to take notes in a faster way.

It also allows me to highlight some sentences that I find really worth reading, like “People commonly think things are wrong without identifying any specific, important error” or “Why point out three errors and then give up? Because it gives you 3 chances to change your mind.”

• 28 Nov 2022 11:12 UTC
26 points
1 ∶ 0

We have now also published a post about our impact, our strategy and our funding needs for 2023.
[I work as RP’s director of development.]

• I think this is a somewhat useful post to prompt discussion and reflection, and agree there is probably some motivated reasoning going on. That being said, some pushbacks:

The EA forum expresses almost zero empathy for the individuals who put money/​crypto on the FTX exchange and then were criminally defrauded by FTX.

I think this is demonstrably false—from some of the most upvoted posts on this issue alone:

We must be very clear: fraud in the service of effective altruism is unacceptable
“It is clear that many people—customers and employees—have been deeply hurt by FTX’s collapse. People’s life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important.”

First sentence:
“This is a thread of my thoughts and feelings about the actions that led to FTX’s bankruptcy, and the enormous harm that was caused as a result, involving the likely loss of many thousands of innocent people’s savings.”

My reaction to FTX: appalled

“One or more leaders at FTX have betrayed the trust of everyone who was counting on them.

Most importantly FTX’s depositors, who didn’t stand to gain on the upside but were unwittingly exposed to a massive downside and may lose savings they and their families were relying on.

FTX leaders also betrayed investors, staff, collaborators, and the groups working to reduce suffering and the risk of future tragedies that they committed to help.

No plausible ethics permits one to lose money trading then take other people’s money to make yet more risky bets in the hope that doing so will help you make it back.”

(edit: see Pranay’s comment for more examples)

It is dirty money, the proceeds of FTX’s criminal fraud. Real people have been victimized by FTX.

Keeping FTX grant money is ratifying the criminal fraud.

Where do you draw the line for “clean money”? Is it due to the alleged criminal behavior? Or is it due to real people being victimized?

E.g. if your justification is “real people being victimized”, then real people have been victimized by Facebook; should we not accept that money either? If your grandparents get their pension funds, they probably invest in and profit from companies that directly cause harm to real people; should you tell them to decline that? Crypto assets globally use more electricity than entire countries like Argentina or Australia; does that count as real harm? Does spending that money count as ratifying those harmful actions? What about fossil fuels?

While I do think it’s worth having a conversation about what the EA community’s or individual grantees’ obligations are, and I think you’re directionally correct about the motivated-reasoning claims (though we might disagree about extent), I would be interested in a bit more nuance around what you are proposing, unless you literally mean “all money received from FTX should be returned, no matter what”, in which case I’d be interested in stronger justification. FTX grantees aren’t a monolith, the feasibility of returning the money will vary significantly, and while most of the retail investors were not at fault, neither were most of the FTX grantees. I’d consider a claim like “All FTX grantees should return money regardless of their personal situation” to be supererogatory, and not a moral requirement (you may have scenarios where a grantee is just taking the place of another victim, through no fault of their own, and it’s unclear how this is actually net positive or fair, because neither victim is more ‘blameworthy’). I’d be interested if you think otherwise. Again, I agree there was harm done, and I agree there’s motivated reasoning going on, but the existence of motivated reasoning and harm done doesn’t seem sufficient to suggest all FTX grantees should return all their money.

1. You are correct that there have been expressions of sympathy for the ripped-off FTX investors. I overstated the lack of empathy expressed by the forum.

What appears absent, however, is anyone saying, “I received a grant from FTX and I am returning the money.” So verbal expressions of empathy exist, but actions appear lacking.

2. Raising the issue of where one abstractly draws the line between clean money and dirty money—this is the kind of misdirection and casting around for rationalizations to keep the grant money that raises eyebrows among non-EA people like me who read this forum. Wherever that line is drawn in the abstract, FTX’s criminal fraud crossed it, and FTX’s money is dirty, tainted proceeds of theft. Saying “what about fossil fuels?” does not change FTX’s criminal fraud. That’s the concrete issue at hand.

3. Nice try at implying that grant recipients are equal victims to FTX’s depositors. Grant recipients are not victims at all. Grant recipients received gifts of money from FTX—money they now know was stolen.

4. Many of the grant recipients appear to be making a lack-of-knowledge argument to justify keeping the money. They didn’t know it was stolen when they received it, so now they want to feel justified in retaining it.

At the same time they don’t want the cognitive dissonance of recognizing that by keeping the grant money they just gave up their pretense of EA ideals. So they search around for rationalizations, and this lack of knowledge argument is one.

It’s not persuasive. Say you are walking down the street and you see person A rob person B at gunpoint. You then ask person A for money and he tosses you B’s wallet. Hopefully everyone would agree you should give the wallet back to B.

Now imagine you are walking down the street and A runs around the corner. You ask him for money and he tosses you a wallet and runs off. Two seconds later a crowd of people, including B, come around the corner. They explain that A robbed B of his wallet 30 seconds ago, and that you are holding the wallet. Do you start arguing that you too are the victim because when A gave you the wallet you didn’t know it was stolen? Well you know now and you should return it to B. Or, if you decide to keep the wallet, you should not pretend you are doing so out of anything other than pure self-interest.

• You are correct that there have been expressions of sympathy for the ripped-off FTX investors. I overstated the lack of empathy expressed by the forum.

What appears absent, however, is anyone saying, “I received a grant from FTX and I am returning the money.” So verbal expressions of empathy exist, but actions appear lacking.

This seems wrong to me. E.g. Paul Christiano has written:

Earlier this year ARC received a grant for $1.25M from the FTX Foundation. We now believe that this money morally (if not legally) belongs to FTX customers or creditors, so we intend to return $1.25M to them.

• 1.
Fair, I haven’t spent enough time reading the comments to push back on this. Not sure if you consider something like this as reasonable or a rationalization.

2.
I’m not making a case that FTX grantees should keep the money where possible; I’m just pointing out that if your principle is that people shouldn’t spend money they know to have an unclean source, then it actually isn’t as obvious as you are making it out to be. If your argument is about harm done, then clearly there are also considerations around what harms might occur as a result of returning this money (and I hope you would also want to push strongly for those involved with returning money to prioritize harm mitigation; one consideration is that there might be a lot more money available to be redirected from that pool than the EA one). If your argument is about legal proceedings, then I’m hopeful that EAs will follow the law here, and I would condemn EAs who tried to keep the money if it was against the law to do so. If your argument is about something like “doing right by retail investors”, then I think you still have to make a case that the retail investors are now the strongest focus of moral obligation for all grantees for that pool of money (though I think that could easily be true for some or even most grantees).

I’m also not casting around for rationalizations to keep the grant money because I’ve never received an FTX grant.

3.
I didn’t claim that grant recipients are equal victims to FTX’s depositors, I said there could be scenarios where a grantee takes the place of another victim, through no fault of their own. You can reasonably interpret that as “sometimes grant recipients are also victims”. It sounds like you disagree because you say “Grant recipients are not victims at all.” but I strongly disagree with this.

Suppose someone quits their job to do independent research funded by the FTX Future Fund. Then they lose the “gift” of money that they made major life changes around, and have no job to return to, meaning they quit their job for nothing.

Say an organization with 10 researchers that was running in part on FTX money now loses funding, but also spent millions of FTX money last year. Should it fire everyone and go bankrupt trying to return this money?

I don’t understand how you have so much (and rightly so!) empathy for the retail investors, but at the same time don’t see how it’s possible for grant recipients to also be victims.

4.
I think this is not a great analogy, because most of the time you don’t expect people to throw you a wallet and run off, and grant recipients don’t really receive money like that. A better analogy might be: you apply for a scholarship to college. You successfully get this scholarship, and then 2 weeks into college, you find out that Gaddafi funded your scholarship. Should you now decline your scholarship? Now you find out that half of all the college entrance scholarships in your country were funded by Gaddafi, but you’re not actually required to pay the money back legally—though a third party is starting a process that allows you to pay it back, should you choose to. Should all those people be morally obliged to return their scholarships, regardless of the situation? What if you can’t go to college without this scholarship? What if 75% of the money wasn’t going to go back to the people harmed by Gaddafi if you declined the scholarship, but was kept by the third party? What if you thought you could do better than 75% if you didn’t voluntarily return the money? I mean, this is also not a great analogy, but I’m mainly trying to illustrate why recipients of grants can also be victims, as well as why modelling grants as wallets that a stranger throws you while running away isn’t great.

FWIW, I think if you’re not going to face major financial issues, then I do think it’s likely morally preferable to return the money than not, all else equal. And I think if you would be in a financially stable situation if you return the money and choose not to, then you should be very careful about the reasoning for your justification, because it would be very easy for there to be some degree of motivated reasoning. But again, I think you haven’t made a case for why it is a moral requirement for all grant recipients to return the money regardless of circumstance, as opposed to something supererogatory.

• Hi all. I’ve known about effective altruism for quite a while and it’s helped me a lot. I mostly found out about it after a breakup, and it improved how I view the world, making me more objective and rational. I found this video and was wondering if anyone else had seen it. I think it raises some good points, but it also has a lot of flaws which shouldn’t be overlooked. I commented with my criticisms. I know there are definitely more academic and sophisticated responses to this topic, but I think it’s approachable and also gives insight into how people who are very much outside the community think.

• 28 Nov 2022 10:42 UTC
14 points
7 ∶ 2

I don’t think anyone should be required to share detailed personal financial information in order to donate

• I usually agree, but Moskovitz isn’t just any donor; he makes up the great majority of EA funding. Insofar as this rule has any limits in exceptional cases, Moskovitz’s money seems to rise to the level where it’s worth considering. I should also add, in case the wording made it seem otherwise, that I’m not necessarily suggesting this should be a super burdensome audit of the sort the IRS might conduct; if that seems like too much, even something much lighter would be useful.

• Hi OP,

Welcome to the EA Forum! I appreciate you sharing your thoughts and using this forum to engage with the EA community.

I’ll use my own number system separate from yours so it doesn’t get too confusing if you want to respond.

1. To summarize my personal thoughts, it’s plausible that people should return unspent money (I’m mostly unsure but lean towards disagree). But if what you’re saying is that people should return FTX Future Fund money that they’ve spent (and therefore go into personal debt), I disagree.

2. Regarding your #5, could you give examples of people giving “zero empathy for the individuals who put money/​crypto on the FTX exchange and then were criminally defrauded by FTX”? And could you give examples of “Those people do not seem to count to the EA community”? I feel like most posts I’ve read about this have mentioned people feeling terrible about the effects on innocent people who lost money to FTX. I agree that the harm suffered by them is far greater than any harm to the EA community.

For some examples off the top of my head, there’s the Future Fund team’s post (“Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.”), Michel’s post (“people’s lives got ruined...”), Will’s post (”… that may cost many thousands of people their savings...”), evhub’s post (“People’s life savings and careers...”), Rob Wiblin’s post (“Most importantly FTX’s depositors… may lose savings they and their families were relying on...”), and Rethink Priorities’ Leadership Statement (“many customers are unable to retrieve funds held by FTX”).

3. It would help if you could spell out your exact logic here. The next few questions are sub-questions/​specifics I fail to understand. Feel free to answer them specifically or just spell out your logic cohesively if you think that would answer all my questions.

3a. I (and every EA I’m aware of) agree that depositors in FTX who’ve lost their money are victims here. If FTX Future Fund recipients paid back all their money (including spent money), most would face financial problems and be victims as well. What good would that do? And why should the recipients pay back the money, and not the organizations that they paid their money to?

3b. How does this apply to money that had been spent before the FTX news came out? Many individuals or organizations would need to pay back thousands (or tens of thousands) of dollars that they don’t have. FTX grantees who would face financial ruin as a result would also be victims here.

3c. Would you then agree that all organizations and celebrities that were paid by FTX should pay back whatever money they got? Including sponsorships like the Miami Heat, TSM (the e-sports team), and Stephen Curry? And the electrical company and janitors in the Bahamas paid by FTX? And other employees of FTX (uninvolved and unaware of the scandal) who received a salary?

3d. If your answer to 3c is that some parties shouldn’t have to pay back money, then why? I really encourage you to think about why this should apply to the people and organizations who used or are using the money they received to fund charitable causes. Why is the financial burden on them?

3e. Is your logic affected by how the amount of money FTX grantees received compares to the amount of money owed to depositors? By my best findings, it seems like over $8 billion is owed back (New York Times, “The run on FTX...”), and the FTX Foundation gave away $140 million (New York Times, “as recently as last month...”). $140 million divided by $8 billion is just under 2%. If all of that $140 million is given back, 98% of the money owed will still be outstanding. The vast majority of depositors will still be financially affected, and now suddenly hundreds (if not more) of recipients of FTX Foundation grants will be financially affected too.

Thank you!

• I don’t think 3e is convincing, both because it doesn’t account for other potential revenue streams for the bankruptcy estate and because partial recompense is still valuable. Also, as to much of the funds in question, the grantees are still in a position to avoid or at least manage financial loss. A grantee’s continued interest in working on a cause that is important to them just isn’t in the same category as a depositor’s interest in recovering monies stolen from them. Nor is a larger organization’s desire not to cancel initiatives it was planning due to the FTX money. Collapsing the effect of the fraud into a binary of “financially affected” / “not financially affected” and counting noses doesn’t make a whole lot of sense to me.

Would you be asking 3e—and would your own answer be the same—if FTX had made the grants to opera organizations and opera singers in the United States? I think there have been at least a few posts (not by you, to my recollection) that strongly suggest the author is applying a different standard to EA grantees than they would to opera grantees. I think it problematic if the answer to whether money should be returned changes much based on the nature of the charity.
Such a stance will generally imply that we have some sort of right to tell crime victims that they will have to be involuntary donors because the cause is so important in our eyes as to justify them making that sacrifice. 1. I think you are correct that, as a practical matter, there is a difference between FTX grant money that has already been spent and grant money that is unspent. Unspent money should be returned. It would be asking too much for grant recipients to also return money already spent. That would be ideal but it is unrealistic. 3c. Janitors? Please. That is a false equivalency equating grant recipients with janitors. Grant recipients didn’t scrub any toilets or empty any trash cans at FTX. Instead, grant recipients were given a gift of money from FTX. This is another example of misdirection and searching around for a rationalization to keep the tainted grant money, it is an unseemly form of what-aboutism. “But, what about the janitors?” Let the janitors figure out what they want to do. What are the grant recipients going to do? As for the celebrity endorsers, of course they should return all the money they were paid. They affirmatively helped lure more depositors into the scheme. But again, that’s a separate issue from the EA grant recipients. 3e. Is someone seriously arguing that because the amount of FTX grants was ‘only’$140 million the money should not be returned because it’s only a fraction of the stolen $8 billion? That is an unworthy and unseemly argument. “Hey, I’m only going to keep a portion of the stolen money, so it’s okay.” If that argument is indeed advanced then the moral compass has been tossed overboard and the ship is being intentionally run onto the reef. • Grant recipients didn’t scrub any toilets or empty any trash cans at FTX. Instead, grant recipients were given a gift of money from FTX. 
This is another example of misdirection and searching around for a rationalization to keep the tainted grant money; it is an unseemly form of what-aboutism.

I would push back against this. Grant recipients were given money to carry out a job. They were not given money unconditionally, which is what a gift is. So I think you still need to spell out the underlying principle here.

• Well, if doing the right thing isn’t enough in itself to convince grant recipients (see ARC’s commendable statement that they are returning their $1.25 million grant), then how about wanting to stay out of prison.

Right now FTX grant recipients are relying on the fig leaf that FTX and/or its former executives have not yet been charged with or convicted of criminally defrauding (embezzling from) FTX’s depositors.

But even at this point grant recipients are on notice that their grant money likely was stolen funds. If criminal convictions are obtained, this will be cemented—the money is stolen property.

Therefore, grant recipients who possess stolen funds (grants from FTX made with stolen money) are on the cusp of potentially committing a crime themselves if they refuse to return it—the crime of retaining known stolen property. It matters not that they did not know at the time they received it that it was stolen. They know now and yet are retaining the money rather than returning the money to its rightful owners.

In many jurisdictions, retaining known stolen property is a crime (not talking about receiving stolen property—there you do have to know at the time you received it that it was stolen; talking about retaining known stolen property once you know it is stolen—that is an independent crime). Look, for example, at Model Penal Code 223.6(1): “A person is guilty of theft if he purposely … retains or disposes of … property of another knowing that it has been stolen, or believing that it probably has been stolen, unless the property is … retained, or disposed with purpose to restore it to the owner.”

Imagine you are a grant recipient, and in two years are in court trying to explain why you kept the grant money and then spent it, after you were on notice that it was likely stolen funds.

Right now there is a window where people can freely choose to do the right thing, or not. That window will likely close, and then the discussion will reduce to return the money or potentially commit a crime and go to jail.

(Again, I have never owned or speculated in any cryptocurrency, and I have no connections whatsoever with FTX or any crypto business—I do not have a dog in this fight.)

• 28 Nov 2022 10:12 UTC
2 points
0 ∶ 0

The bottom line (as it were): natives of places that got more schools in the 1970s would exhibit a smaller young-old pay gap in 1995. That is the correlation that the Duflo study looks for…

I think I had to reread the Methods and results several times before understanding what seemed very unintuitive -- I initially would’ve guessed that the young-old wage gap would’ve been higher among the more educated population, since the more educated youngsters would have made more money.

I think, upon rereading and brief reflection, that the young-old wage gap is supposed to be a situation where older men make more money? And that the gap is lower if younger men had more education. This makes sense, but I got tripped up a few times (and am still not certain my reading is correct). Sorry if this question is really naive for people who understand the labor econ literature!

• Thanks for the feedback. I can see why that is confusing. You figured it out. I inserted a couple of sentences before the first table to clarify. And I changed “young-old pay gap” to “old-young pay gap” because I think the hyphen reads, at least subliminally, like a minus sign.

• FTX seems to have turned definitively fraudulent sometime around the middle of this year, and to have been a legitimate business before then. I think it is likely that grants given before then are NOT the result of fraud. You don’t address this in your post, but I think it’s one of the strongest arguments.

• I think you flipped something in here. Did you mean you think grants after this are likely the result of fraud?

• Why do you think FTX was legit before then? Any sort of presumption of legitimate operations has been shattered by subsequent events + worse-than-Enron internal controls. I’m remaining open to possibilities of what FTX was like prior to mid-2022, but I see very little reliable evidence to support a no-fraud conclusion. Because the answer may impact both the legal and ethical obligations, I think it’s important not to make assumptions unsupported by independent evidence (i.e., evidence not controlled by SBF or pre-bankruptcy FTX).

• Hi NunoSempre, grants given before when are the result of fraud? And is this an argument for or against OP’s argument?

• Sorry, had a typo, grants given before mid-year probably are NOT the result of fraud. I’m less certain about grants afterwards. One particularly deciding moment seems to have been the crash of Luna, around May 7th, but I don’t recall the fraud starting just then. I imagine the bankruptcy proceedings may surface more information.

• Thanks for saying this. One complexity is that there are people who will have already rearranged their lives to work on Future-Fund-funded projects, so handing back the money would leave them worse off than when they started. I can understand, then, why they’d be unwilling to give the money back.

But I share your distaste for the dialogue and posts I’ve seen around the issue. People obviously have a heavy vested interest in keeping this money and this will bias some people’s moral reasoning. I think it’s a bad look for the community. Disappointing.

• Pre-commitment: I will reply on this thread with where I decide to donate by New Year’s Day.

I’m planning to meet up with a friend and decide where to give my donations on New Year’s Eve. I often find I put my giving off, so I’m using this post as a commitment device.

1. If anyone wants to join me, feel free to comment with the date you plan to donate by.
2. Does anyone have any suggestions about how to structure your thinking on where to donate? I’m planning to spend a couple of hours on this with a friend.

• Hi Meghan,

Do you have any guesses about whether the lives of wild terrestrial arthropods are positive or negative? Knowing this would be important for assessing changes in their population size. If they are negative, as predicted by the Weighted Animal Welfare Index of Charity Entrepreneurship, decreasing (increasing) the population would be better (worse), everything else equal. Of course, everything else is never equal, and this question is quite complex (e.g. due to trophic cascades).

I understand the question is not explicitly covered by your study, but any thoughts would be welcome. Feel free to pass the question too (I read your note here).

• Thanks for pushing the frontier of interspecies comparisons!

But we limited our time on these reports due to finding that, historically, within our CEAs, factors like these did not end up carrying the most weight or being the source of highest variability. For example, the cost of an intervention can vary by several orders of magnitude, and more logistical factors were more often the deciding factor when deciding between the most promising looking interventions.

I understand there is not much variation in the total welfare score, but this may not apply to the moral weight (which varies a lot based on the number of neurons, for instance). So species can potentially be a major factor for prioritisation.

Some questions:

• How resilient are the signs (positive or negative) of your estimates for the total welfare score?

• Have there been any other efforts to quantify the welfare of wild animals?

I am particularly interested in the answers for terrestrial arthropods (i.e. the “wild bug”).

• My monthly donations go to GiveWell charities via One for the World, but if I’m able to make discretionary donations this year, they will go to Generation Pledge and High Impact Professionals. In light of recent events, efforts to support more HNW and effective giving outreach (respectively) are doubly important, to diversify EA’s funding base.

• I am planning to donate to Mission #NoChildHungry in 2022. Children are the future of India, and we need to safeguard that future. I am starting my donation with the online fundraising platform Give, India’s most trusted and secure online donation and fundraising platform. Apart from donating, there are various causes for which you can start a donation or fundraiser too.

• 28 Nov 2022 4:55 UTC
9 points
4 ∶ 0

Meta: looks like this is Skye’s first post on the EA Forum. Welcome, Skye! Thanks for your courage in posting this!

• To echo this, I’m grateful to Skye for raising the topic here and providing an opening for the discussion between harfe, Lauren, and Linch. I hope that Skye wasn’t dissuaded by the criticism, because I think there is a strong case that certain aspects of children’s advocacy are (currently) more tractable in developed countries. We have lots of examples of changes to the law in favour of children happening via established institutions.

Differences between regional legal systems need to be taken into account, but to provide an interesting example from the UK: section 58 of the Children Act 2004 specifies that hitting a child can be justified by a parent or guardian as long as it is “reasonable punishment” and doesn’t amount to “actual bodily harm” (long-term injury). This was revoked by the Children Act 2019 in Scotland and in 2020 in Wales, with each taking a couple of years to come into effect. Now children effectively have the same legal protection from assault and battery as adults in these countries, including from their parents. Any EAs based in the UK with any inclination towards national-scale advocacy would be well placed to push for similar changes in England and Northern Ireland.

How these acts came about might also make an interesting case study for possible replication in other places—and to determine if these problems are “neglected” enough for EAs. I haven’t read the history, but I suspect national charities like Barnardo’s and the NSPCC along with international organisations like UNICEF were involved to varying degrees.

I also agree with the broader thrust of Skye’s post that children almost universally lack the legal and political framework to represent their own interests, so it is up to adults to advocate for them. Even if we can show that conditions are worse for children along most metrics in developing countries (as Lauren puts forward well), I still think children would be worth advocating for in developed countries for the right EAs.

• 28 Nov 2022 4:49 UTC
18 points
6 ∶ 0

The idea that EA needs to be more aware of potential major risks relating to its most pivotal funder is a valid one. However, there are a range of donor-related risks, and potential fraud is rather low on the threat list here. It is just more salient given recent events. Better to focus on contingency planning that would cover any number of much more likely reasons Moskovitz’s money could largely or entirely dry up—a massive stock market crash, a decision that his foundation should spend its money on other causes, whatever.

To the extent that discovering Moskovitz is SBF 0.5 would be survivable, I think the plan would be largely the same as for other scenarios in which his money is no longer available.

• If there were a consequence-free way to do it, it seems like a good idea. One difference is that Moskovitz’s funding and fortune have been reliable for years. Just by the Lindy principle, it seems on its face more likely than SBF’s to continue without scandal or disappearance.