
Anthropics

Last edit: Jan 6, 2022, 4:12 PM by Leo

Anthropics is the study of observation selection effects.

Further reading

LessWrong (2012) "Observation selection effect", LessWrong Wiki, June 26.

SIA > SSA, part 1: Learning from the fact that you exist

Joe_Carlsmith · Oct 1, 2021, 6:58 AM
16 points · 2 comments · 16 min read · EA link

Quantifying anthropic effects on the Fermi paradox

Lukas Finnveden · Feb 15, 2019, 10:47 AM
72 points · 4 comments · 41 min read · EA link

Replicating and extending the grabby aliens model

Tristan Cook · Apr 23, 2022, 12:36 AM
137 points · 27 comments · 51 min read · EA link

Does the Self-Sampling Assumption Imply Clairvoyance?

Matthew Barber · Sep 18, 2024, 10:09 PM
9 points · 1 comment · 1 min read · EA link

SIA > SSA, part 4: In defense of the presumptuous philosopher

Joe_Carlsmith · Oct 1, 2021, 7:00 AM
8 points · 0 comments · 22 min read · EA link

In favor of more anthropics research

Eric Neyman · Aug 15, 2021, 5:33 PM
21 points · 7 comments · 1 min read · EA link

SIA > SSA, part 3: An aside on betting in anthropics

Joe_Carlsmith · Oct 1, 2021, 6:59 AM
10 points · 3 comments · 8 min read · EA link

Anthropics and the Universal Distribution

Joe_Carlsmith · Nov 28, 2021, 8:46 PM
18 points · 0 comments · 46 min read · EA link

[Question] What is the reasoning behind the “anthropic shadow” effect?

tobycrisford 🔸 · Sep 3, 2019, 1:21 PM
4 points · 2 comments · 2 min read · EA link

Nuclear Fine-Tuning: How Many Worlds Have Been Destroyed?

Ember · Aug 17, 2022, 1:13 PM
18 points · 28 comments · 23 min read · EA link

SIA > SSA, part 2: Telekinesis, reference classes, and other scandals

Joe_Carlsmith · Oct 1, 2021, 6:58 AM
10 points · 0 comments · 35 min read · EA link

EA reading list: population ethics, infinite ethics, anthropic ethics

richard_ngo · Aug 3, 2020, 9:22 AM
25 points · 10 comments · 1 min read · EA link

On longtermism, Bayesianism, and the doomsday argument

iporphyry · Sep 1, 2022, 12:27 AM
30 points · 5 comments · 13 min read · EA link

A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming

turchin · Sep 11, 2022, 10:22 AM
33 points · 25 comments · 52 min read · EA link

Don’t Be Comforted by Failed Apocalypses

ColdButtonIssues · May 17, 2022, 11:20 AM
20 points · 13 comments · 1 min read · EA link

AI Risk and Survivorship Bias: How Andreessen and LeCun got it wrong

stepanlos · Jul 14, 2023, 5:10 PM
5 points · 1 comment · 6 min read · EA link

AI things that are perhaps as important as human-controlled AI

Chi · Mar 3, 2024, 6:07 PM
113 points · 9 comments · 21 min read · EA link

New eBook: Essays on UFOs and Related Conjectures

Magnus Vinding · Aug 4, 2024, 7:34 AM
27 points · 3 comments · 7 min read · EA link

Dispelling the Anthropic Shadow

Eli Rose · Sep 8, 2024, 6:45 PM
107 points · 23 comments · 1 min read · EA link
(globalprioritiesinstitute.org)

Silent cosmic rulers

Magnus Vinding · Jul 15, 2024, 5:07 PM
41 points · 16 comments · 9 min read · EA link

Point-by-point reply to Yudkowsky on UFOs

Magnus Vinding · Dec 19, 2024, 9:24 PM
4 points · 0 comments · 9 min read · EA link

Cosmic AI safety

Magnus Vinding · Dec 6, 2024, 10:32 PM
23 points · 5 comments · 6 min read · EA link

Is Our Universe A Newcomb’s Paradox Simulation?

Jordan Arel · May 15, 2022, 7:28 AM
16 points · 8 comments · 2 min read · EA link

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky · Jul 13, 2021, 4:57 PM
217 points · 47 comments · 8 min read · EA link
(www.cold-takes.com)

2020 PhilPapers Survey Results

RobBensinger · Nov 2, 2021, 5:06 AM
40 points · 0 comments · 12 min read · EA link

Doomsday and objective chance

Global Priorities Institute · Jun 30, 2021, 1:14 PM
3 points · 0 comments · 2 min read · EA link
(globalprioritiesinstitute.org)

Saving Average Utilitarianism from Tarsney: Self-Indication Assumption cancels solipsistic swamping

wuschel · May 16, 2021, 1:47 PM
10 points · 13 comments · 5 min read · EA link

X-Risk, Anthropics, & Peter Thiel’s Investment Thesis

Jackson Wagner · Oct 26, 2021, 6:38 PM
50 points · 1 comment · 19 min read · EA link

Are we probably in the middle of humanity? (Five anthropics thought experiments)

carter allen🔸 · Mar 14, 2024, 6:45 PM
9 points · 2 comments · 10 min read · EA link

Quantum Immortality: A Perspective if AI Doomers are Probably Right

turchin · Nov 7, 2024, 4:06 PM
7 points · 0 comments · 1 min read · EA link

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 10:09 PM
15 points · 12 comments · 4 min read · EA link

Against Anthropic Shadow

tobycrisford 🔸 · Jun 3, 2022, 5:49 PM
49 points · 13 comments · 13 min read · EA link

[Optional] All possible views about humanity’s future are wild (in Italian)

EA Italy · Jan 17, 2023, 2:59 PM
1 point · 0 comments · 8 min read · EA link