
Anthropics

Last edit: 6 Jan 2022 16:12 UTC by Leo

Anthropics is the study of observation selection effects.

Further reading

LessWrong (2012) 'Observation selection effect', LessWrong Wiki, June 26.

SIA > SSA, part 1: Learning from the fact that you exist
Joe_Carlsmith, 1 Oct 2021 6:58 UTC · 16 points · 2 comments · 16 min read

Quantifying anthropic effects on the Fermi paradox
Lukas Finnveden, 15 Feb 2019 10:47 UTC · 72 points · 4 comments · 41 min read

Replicating and extending the grabby aliens model
Tristan Cook, 23 Apr 2022 0:36 UTC · 137 points · 27 comments · 51 min read

Does the Self-Sampling Assumption Imply Clairvoyance?
Matthew Barber, 18 Sep 2024 22:09 UTC · 9 points · 1 comment · 1 min read

SIA > SSA, part 4: In defense of the presumptuous philosopher
Joe_Carlsmith, 1 Oct 2021 7:00 UTC · 8 points · 0 comments · 22 min read

In favor of more anthropics research
Eric Neyman, 15 Aug 2021 17:33 UTC · 21 points · 7 comments · 1 min read

SIA > SSA, part 3: An aside on betting in anthropics
Joe_Carlsmith, 1 Oct 2021 6:59 UTC · 10 points · 3 comments · 8 min read

Anthropics and the Universal Distribution
Joe_Carlsmith, 28 Nov 2021 20:46 UTC · 18 points · 0 comments · 46 min read

[Question] What is the reasoning behind the “anthropic shadow” effect?
tobycrisford 🔸, 3 Sep 2019 13:21 UTC · 4 points · 2 comments · 2 min read

Nuclear Fine-Tuning: How Many Worlds Have Been Destroyed?
Ember, 17 Aug 2022 13:13 UTC · 18 points · 28 comments · 23 min read

SIA > SSA, part 2: Telekinesis, reference classes, and other scandals
Joe_Carlsmith, 1 Oct 2021 6:58 UTC · 10 points · 0 comments · 35 min read

Is Our Universe A Newcomb’s Paradox Simulation?
Jordan Arel, 15 May 2022 7:28 UTC · 16 points · 8 comments · 2 min read

EA reading list: population ethics, infinite ethics, anthropic ethics
richard_ngo, 3 Aug 2020 9:22 UTC · 25 points · 10 comments · 1 min read

On longtermism, Bayesianism, and the doomsday argument
iporphyry, 1 Sep 2022 0:27 UTC · 30 points · 5 comments · 13 min read

A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming
turchin, 11 Sep 2022 10:22 UTC · 33 points · 25 comments · 52 min read

All Possible Views About Humanity’s Future Are Wild
Holden Karnofsky, 13 Jul 2021 16:57 UTC · 217 points · 47 comments · 8 min read
(www.cold-takes.com)

2020 PhilPapers Survey Results
RobBensinger, 2 Nov 2021 5:06 UTC · 40 points · 0 comments · 12 min read

Don’t Be Comforted by Failed Apocalypses
ColdButtonIssues, 17 May 2022 11:20 UTC · 20 points · 13 comments · 1 min read

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong
stepanlos, 14 Jul 2023 17:10 UTC · 5 points · 1 comment · 6 min read

AI things that are perhaps as important as human-controlled AI
Chi, 3 Mar 2024 18:07 UTC · 113 points · 9 comments · 21 min read

New eBook: Essays on UFOs and Related Conjectures
Magnus Vinding, 4 Aug 2024 7:34 UTC · 23 points · 3 comments · 7 min read

Dispelling the Anthropic Shadow
Eli Rose, 8 Sep 2024 18:45 UTC · 107 points · 23 comments · 1 min read
(globalprioritiesinstitute.org)

Silent cosmic rulers
Magnus Vinding, 15 Jul 2024 17:07 UTC · 37 points · 16 comments · 9 min read

[Optional] All possible views about humanity’s future are wild (Italian)
EA Italy, 17 Jan 2023 14:59 UTC · 1 point · 0 comments · 8 min read

Doomsday and objective chance
Global Priorities Institute, 30 Jun 2021 13:14 UTC · 3 points · 0 comments · 2 min read
(globalprioritiesinstitute.org)

Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping
wuschel, 16 May 2021 13:47 UTC · 10 points · 13 comments · 5 min read

X-Risk, Anthropics, & Peter Thiel’s Investment Thesis
Jackson Wagner, 26 Oct 2021 18:38 UTC · 50 points · 1 comment · 19 min read

Are we probably in the middle of humanity? (Five anthropics thought experiments)
carter allen🔸, 14 Mar 2024 18:45 UTC · 9 points · 2 comments · 10 min read

Quantum Immortality: A Perspective if AI Doomers are Probably Right
turchin, 7 Nov 2024 16:06 UTC · 7 points · 0 comments · 1 min read

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?
Jordan Arel, 17 Aug 2024 22:09 UTC · 15 points · 12 comments · 4 min read

Against Anthropic Shadow
tobycrisford 🔸, 3 Jun 2022 17:49 UTC · 49 points · 13 comments · 13 min read