
Fanaticism

Last edit: 11 May 2021 19:37 UTC by EA Wiki assistant

Fanaticism can be described as the position that it’s morally better to reject “a certainty of a moderately good outcome, such as one additional life saved” in favour of “a lottery which probably gives a worse outcome, but has a tiny probability of some vastly better outcome (perhaps trillions of additional blissful lives created)” (Wilkinson 2020). Some have argued that fanaticism should be rejected and that this might undermine the case for certain philosophical positions, such as longtermism.
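The tension described above can be made concrete with a small expected-value calculation. The numbers below are hypothetical, chosen only to illustrate the structure of Wilkinson's comparison; they do not come from the source:

```python
def expected_value(outcomes):
    """Expected value of a lottery given (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Option A: a certainty of a moderately good outcome (one life saved).
certain_option = [(1.0, 1.0)]

# Option B: a lottery that almost certainly yields nothing, but has a
# tiny probability (here 1 in 10 billion) of a vastly better outcome
# (here a trillion blissful lives). Both figures are illustrative.
long_shot = [(1e-10, 1e12), (1 - 1e-10, 0.0)]

ev_certain = expected_value(certain_option)  # 1.0
ev_lottery = expected_value(long_shot)       # 1e-10 * 1e12 = 100.0

# Expected value theory ranks the lottery higher, even though it almost
# certainly delivers a worse outcome -- this is the "fanatical" verdict.
assert ev_lottery > ev_certain
```

On these numbers, expected value theory endorses the long shot whenever the tiny probability times the vast payoff exceeds the certain benefit; rejecting that verdict requires departing from expected value theory in some way, which is what motivates several of the alternatives listed under "Related entries" below.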

See also the concept of “Pascal’s mugging” (LessWrong 2020).

Bibliography

LessWrong (2020) Pascal’s mugging, LessWrong Wiki, August 3 (updated 23 September 2020).

Wiblin, Robert & Keiran Harris (2021) Christian Tarsney on future bias and a possible solution to moral fanaticism, 80,000 Hours, May 5.

Wilkinson, Hayden (2020) In defence of fanaticism, GPI Working Paper No. 4-2020 (updated January 2021).

Related entries

alternatives to expected value theory | altruistic wager | decision theory | decision-theoretic uncertainty | expected value | moral uncertainty | naive consequentialism vs. sophisticated consequentialism | risk aversion

Expected value theory is fanatical, but that’s a good thing

HaydenW · 21 Sep 2020 8:48 UTC · 52 points · 20 comments · 5 min read · EA link

Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue)

Lukas_Gloor · 17 Jun 2020 12:33 UTC · 22 points · 10 comments · 15 min read · EA link

Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

kokotajlod · 8 Nov 2018 10:09 UTC · 23 points · 17 comments · EA link

Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

kokotajlod · 10 Nov 2018 9:12 UTC · 24 points · 6 comments · EA link

Tiny Probabilities of Vast Utilities: Solutions

kokotajlod · 14 Nov 2018 16:04 UTC · 20 points · 5 comments · EA link

Tiny Probabilities of Vast Utilities: Concluding Arguments

kokotajlod · 15 Nov 2018 21:47 UTC · 21 points · 5 comments · 10 min read · EA link

Christian Tarsney on future bias and a possible solution to moral fanaticism

Pablo · 6 May 2021 10:39 UTC · 26 points · 3 comments · 1 min read · EA link (80000hours.org)

The Epistemic Challenge to Longtermism (Tarsney, 2020)

MichaelA · 4 Apr 2021 3:09 UTC · 59 points · 28 comments · 2 min read · EA link (globalprioritiesinstitute.org)

Possible misconceptions about (strong) longtermism

jackmalde · 9 Mar 2021 17:58 UTC · 77 points · 43 comments · 19 min read · EA link

A full syllabus on longtermism

jtm · 5 Mar 2021 22:57 UTC · 100 points · 9 comments · 8 min read · EA link

Ajeya Cotra on worldview diversification and how big the future could be

80000_Hours · 18 Jan 2021 8:35 UTC · 10 points · 1 comment · 124 min read · EA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · 21 Feb 2021 13:41 UTC · 16 points · 1 comment · 7 min read · EA link

New article from Oren Etzioni

iarwain · 25 Feb 2020 15:38 UTC · 23 points · 3 comments · 2 min read · EA link

Tiny Probabilities of Vast Utilities: Bibliography and Appendix

kokotajlod · 20 Nov 2018 17:34 UTC · 9 points · 0 comments · 24 min read · EA link