Classic posts (from the Forum Digest)

This is a sequence of posts that were shared as “Classics” in the EA Forum Digest.

Use resilience, instead of imprecision, to communicate uncertainty

The Possibility of an Ongoing Moral Catastrophe (Summary)

‘Ugh Fields’, or why you can’t even bear to think about that task

The world is much better. The world is awful. The world can be much better.

Differential progress / intellectual progress / technological development

Keeping Absolutes in Mind

Growth and the case against randomista development

Effective Altruism is a Question (not an ideology)

500 Million, But Not A Single One More

Beware surprising and suspicious convergence

On Caring

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Reducing long-term risks from malevolent actors

An unofficial Replacing Guilt tier list

A summary of every Replacing Guilt post

Not yet gods

You have more than one goal, and that’s fine

Integrity for consequentialists

Mentorship, Management, and Mysterious Old Wizards

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

The motivated reasoning critique of effective altruism

Aim high, even if you fall short

[Question] Why “cause area” as the unit of analysis?

No, it’s not the incentives — it’s you

List of ways in which cost-effectiveness estimates can be misleading

Epistemic Legibility

Potential downsides of using explicit probabilities

Learning By Writing

Reasoning Transparency

Care and demandingness

Comparing charities: How big is the difference?

Moral circles: Degrees, dimensions, visuals

Shapley values: Better than counterfactuals

Amanda Askell: The moral value of information

What gives me hope

The extraordinary value of ordinary norms

Minimal-trust investigations

Existential risk as common cause

What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves?

Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness

On everyday altruism and the local circle

Six Ways To Get Along With People Who Are Totally Wrong*

Key Lessons From Social Movement History

Aiming for the minimum of self-care is dangerous

Crucial questions for longtermists

Hard-to-reverse decisions destroy option value

Reality is often underpowered

The Unweaving of a Beautiful Thing

Three intuitions about EA: responsibility, scale, self-improvement

Independent impressions

Efficient charity: do unto others...

Distillation and research debt

Failing with abandon

If you don’t have good evidence one thing is better than another, don’t pretend you do

On “fringe” ideas

The Value of a Life

You Don’t Need To Justify Everything

Biosecurity needs engineers and materials scientists

Supportive scepticism in practice

Managing ‘Imposters’

Contra Ari Ne’eman On Effective Altruism

When in doubt, apply*

Virtues for Real-World Utilitarians

Summary: When should an effective altruist donate? (William MacAskill)

We are in triage every second of every day

Wholehearted choices and “morality as taxes”

Radical Empathy

Why the triviality objection to EA is beside the point

Invisible impact loss (and why we can be too error-averse)

Certificates of impact

Are we living at the most influential time in history?

All Possible Views About Humanity’s Future Are Wild

Comparative advantage does not mean doing the thing you’re best at

What happens on the average day?

A Long-run perspective on strategic cause selection and philanthropy

9/26 is Petrov Day

On the limits of idealized values

Don’t think, just apply! (usually)

The Application is not the Applicant

What the EA community can learn from the rise of the neoliberals

Cheerfully

How we can make it easier to change your mind about cause areas

Important Between-Cause Considerations: things every EA should know about

How likely is World War III?

Killing the ants

Most problems fall within a 100x tractability range (under certain assumptions)

Useful Vices for Wicked Problems

EA should taboo “EA should”

Framing Effective Altruism as Overcoming Indifference

How moral progress happens: the decline of footbinding as a case study

$1.25/day—What does that mean?

Empirical data on value drift

Butterfly Ideas

Terminate deliberation based on resilience, not certainty

Recovering from Rejection (written for the In-Depth EA Program)

Counterarguments to the basic AI risk case

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover