Classic posts (from the Forum Digest)

This is a sequence of posts that were shared as “Classics” in the EA Forum Digest (sent via Mailchimp and Substack).

Use resilience, instead of imprecision, to communicate uncertainty

The Possibility of an Ongoing Moral Catastrophe (Summary)

‘Ugh Fields’, or why you can’t even bear to think about that task

The world is much better. The world is awful. The world can be much better.

Differential progress / intellectual progress / technological development

Keeping Absolutes in Mind

Growth and the case against randomista development

Effective Altruism is a Question (not an ideology)

500 Million, But Not A Single One More

Beware surprising and suspicious convergence

On Caring

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Reducing long-term risks from malevolent actors

Not yet gods

You have more than one goal, and that’s fine

Integrity for consequentialists

Mentorship, Management, and Mysterious Old Wizards

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

The motivated reasoning critique of effective altruism

Aim high, even if you fall short

[Question] Why “cause area” as the unit of analysis?

No, it’s not the incentives — it’s you

List of ways in which cost-effectiveness estimates can be misleading

Epistemic Legibility

Potential downsides of using explicit probabilities

Learning By Writing

Reasoning Transparency

Care and demandingness

Comparing charities: How big is the difference?

Moral circles: Degrees, dimensions, visuals

Shapley values: Better than counterfactuals

Amanda Askell: The moral value of information

What gives me hope

The extraordinary value of ordinary norms

Minimal-trust investigations

Existential risk as common cause

What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves?

Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness

On everyday altruism and the local circle

Six Ways To Get Along With People Who Are Totally Wrong*

Key Lessons From Social Movement History

Aiming for the minimum of self-care is dangerous

Crucial questions for longtermists

Hard-to-reverse decisions destroy option value

Reality is often underpowered

The Unweaving of a Beautiful Thing

Three intuitions about EA: responsibility, scale, self-improvement

Independent impressions

Distillation and research debt

Failing with abandon

On “fringe” ideas

The Value of a Life

You Don’t Need To Justify Everything

Biosecurity needs engineers and materials scientists

Supportive scepticism in practice

Managing ‘Imposters’

Contra Ari Ne’eman On Effective Altruism

When in doubt, apply*

Virtues for Real-World Utilitarians

Summary: When should an effective altruist donate? (William MacAskill)

We are in triage every second of every day

Wholehearted choices and “morality as taxes”

Radical Empathy

Why the triviality objection to EA is beside the point

Invisible impact loss (and why we can be too error-averse)

Certificates of impact

Are we living at the most influential time in history?

All Possible Views About Humanity’s Future Are Wild

Comparative advantage does not mean doing the thing you’re best at

What happens on the average day?

A Long-run perspective on strategic cause selection and philanthropy

9/26 is Petrov Day

On the limits of idealized values

Don’t think, just apply! (usually)

The Application is not the Applicant

What the EA community can learn from the rise of the neoliberals

Cheerfully

How we can make it easier to change your mind about cause areas

Important Between-Cause Considerations: things every EA should know about

How likely is World War III?

Killing the ants

Most problems fall within a 100x tractability range (under certain assumptions)

Useful Vices for Wicked Problems

EA should taboo “EA should”

Framing Effective Altruism as Overcoming Indifference

How moral progress happens: the decline of footbinding as a case study

$1.25/day—What does that mean?

Empirical data on value drift

Butterfly Ideas

Terminate deliberation based on resilience, not certainty

Recovering from Rejection

Counterarguments to the basic AI risk case

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover

Collection of good 2012-2017 EA forum posts

Let’s think about slowing down AI

An embarrassment of riches

Corporate campaigns affect 9 to 120 years of chicken life per dollar spent

Seven things that surprised us in our first year working in policy—Lead Exposure Elimination Project

The case of the missing cause prioritisation research

The Copenhagen Interpretation of Ethics

An Argument for Why the Future May Be Good

Effective altruists love systemic change

Hits-based Giving

AGI and Lock-In

Most* small probabilities aren’t pascalian

How I Formed My Own Views About AI Safety

Good news on climate change

Open Phil Should Allocate Most Neartermist Funding to Animal Welfare

Effectiveness is a Conjunction of Multipliers

Act utilitarianism: criterion of rightness vs. decision procedure

How effective are prizes at spurring innovation?

Some readings & notes on how to do high-quality, efficient research

Winners of the EA Criticism and Red Teaming Contest

Detecting Genetically Engineered Viruses With Metagenomic Sequencing

Five Ways to Handle Flow-Through Effects

The Importance of Committing to Causes

Room for Other Things: How to adjust if EA seems overwhelming

Finding more effective causes

Good things that happened in EA this year

An Evaluation of Animal Charity Evaluators

Cause prioritization for downside-focused value systems

Lessons learned from Tyve, an effective giving startup

You have a set amount of “weirdness points”. Spend them wisely.

[Question] What are some quick, easy, repeatable ways to do good?

EA & “The correct response to uncertainty is *not* half-speed”

Seeing more whole

A cause can be too neglected

If you don’t have good evidence one thing is better than another, don’t pretend you do

No injuries were reported

Why Anima International suspended the campaign to end live fish sales in Poland

No Silver Bullet Solutions for the Werewolf Crisis

Efficient charity: do unto others...

Some unfun lessons I learned as a junior grantmaker

You should write about your job

Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight

Some observations from an EA-adjacent (?) charitable effort

Want to make a difference on policy and governance? Become an expert in something specific and boring

I saved a kid’s life today

Snapshot of a career choice 10 years ago

Reasons and Persons: Watch theories eat themselves

Why should ethical anti-realists do ethics?

Concrete Biosecurity Projects (some of which could be big)

“Long-Termism” vs. “Existential Risk”

Big List of Cause Candidates

A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare

Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)

Megaprojects for animals

10 years of Earning to Give

EA Wins 2023