
Epistemology


Epistemology is the study of how people should form beliefs and credences about the nature of the world.

Beliefs and credences are purely descriptive attitudes: they are about the way we think the world is, not the way we want it to be. A person might believe that it will rain, for example, even though they hope that it will not.

Beliefs are all-or-nothing attitudes: we either believe that it will rain or we do not. Credences, on the other hand, reflect how likely we think it is that something is true, and are expressed as a real number between 0 and 1. For example, if we think there is an 80% chance that it will rain, we have a credence of 0.8 that it will rain.

It is widely held that beliefs are rational if they are supported by our evidence, and that credences are rational if they obey the probability axioms (for example, no credence should ever be greater than 1) and are revised in accordance with Bayes’ rule.
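As a rough illustration of these two conditions, the sketch below (with made-up numbers, not drawn from this entry) shows a credence being revised by Bayes’ rule after a piece of evidence is observed; the result always stays within the 0 to 1 range required by the probability axioms.

```python
# Minimal sketch of Bayesian updating on a credence; all numbers are hypothetical.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior credence in H after observing evidence E:
    P(H|E) = P(E|H) * P(H) / P(E), with P(E) computed by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior_rain = 0.8  # credence of 0.8 that it will rain
# Suppose dark clouds appear 90% of the time when it rains and 30% of the time when it does not.
posterior_rain = bayes_update(prior_rain, 0.9, 0.3)
print(round(posterior_rain, 3))  # 0.923 -- the credence rises, and remains between 0 and 1
```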

Improving the accuracy of beliefs

One way to improve a person’s capacity to do good is to increase the accuracy of their beliefs. Since people’s actions are determined by their desires and their beliefs, a person aiming to do good will generally do more good the more accurate their beliefs are.

Examples of belief-improving work include reading books, crafting arguments in moral philosophy, writing articles about important problems, and making scientific discoveries.

Two distinctions are relevant in this context. First, a person can build capacity by improving either their factual or their normative beliefs. Second, a person can build capacity by improving either particular beliefs or general processes of belief-formation.

Further reading

Steup, Matthias & Ram Neta (2005) Epistemology, The Stanford Encyclopedia of Philosophy, December 14 (updated 11 April 2020).

Related entries

Bayesian epistemology | decision theory | epistemic deference | rationality

Ten Commandments for Aspiring Superforecasters
Vasco Grilo · 20 Feb 2024 13:01 UTC · 13 points · 2 comments · 1 min read (goodjudgment.com)

Epistemic Hell
rogersbacon1 · 27 Jan 2024 17:17 UTC · 10 points · 1 comment · 14 min read (www.secretorum.life)

How did you update on AI Safety in 2023?
Chris Leong · 23 Jan 2024 2:21 UTC · 30 points · 5 comments · 1 min read

Deep atheism and AI risk
Joe_Carlsmith · 4 Jan 2024 18:58 UTC · 64 points · 4 comments · 1 min read

Project ideas: Epistemics
Lukas Finnveden · 4 Jan 2024 7:26 UTC · 34 points · 1 comment · 17 min read (lukasfinnveden.substack.com)

Say how much, not more or less versus someone else
Gregory Lewis · 28 Dec 2023 22:24 UTC · 99 points · 10 comments · 5 min read

Solutions to problems with Bayesianism
Bob Jacobs · 4 Nov 2023 12:15 UTC · 27 points · 2 comments · 21 min read

Uncertainty over time and Bayesian updating
David Rhys Bernard · 25 Oct 2023 15:51 UTC · 63 points · 2 comments · 28 min read

AI Safety is Dropping the Ball on Clown Attacks
trevor1 · 21 Oct 2023 23:15 UTC · −17 points · 0 comments · 1 min read

Controlling for a thinker’s big idea
Vasco Grilo · 21 Oct 2023 7:56 UTC · 60 points · 11 comments · 8 min read (magnusvinding.com)

EA is underestimating intelligence agencies and this is dangerous
trevor1 · 26 Aug 2023 16:52 UTC · 28 points · 4 comments · 10 min read

Using Points to Rate Different Kinds of Evidence
Ozzie Gooen · 25 Aug 2023 19:26 UTC · 33 points · 6 comments · 6 min read

Expert trap: Ways out (Part 3 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge
Pawel Sysiak · 22 Jul 2023 13:05 UTC · 2 points · 0 comments · 9 min read

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong
stepanlos · 14 Jul 2023 17:10 UTC · 4 points · 1 comment · 6 min read

Expert trap (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge
Pawel Sysiak · 9 Jun 2023 22:53 UTC · 3 points · 0 comments · 7 min read

Expert trap: What is it? (Part 1 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge
Pawel Sysiak · 6 Jun 2023 15:05 UTC · 3 points · 0 comments · 8 min read

AI Doom and David Hume: A Defence of Empiricism in AI Safety
Matt Beard · 30 May 2023 20:45 UTC · 33 points · 6 comments · 12 min read

Epistemics (Part 2: Examples) | Reflective Altruism
BrownHairedEevee · 19 May 2023 21:28 UTC · 34 points · 0 comments · 2 min read (ineffectivealtruismblog.com)

On missing moods and tradeoffs
Lizka · 9 May 2023 10:06 UTC · 50 points · 3 comments · 2 min read

Predictable updating about AI risk
Joe_Carlsmith · 8 May 2023 22:05 UTC · 129 points · 12 comments · 36 min read

How much do you believe your results?
Eric Neyman · 5 May 2023 19:51 UTC · 208 points · 14 comments · 1 min read

Thinking of Convenience as an Economic Term
Ozzie Gooen · 5 May 2023 19:09 UTC · 28 points · 5 comments · 12 min read

In favor of steelmanning
JP Addison · 1 May 2023 15:33 UTC · 27 points · 3 comments · 3 min read

[Question] What concrete question would you like to see debated? I might organise some debates.
Nathan Young · 18 Apr 2023 16:13 UTC · 35 points · 50 comments · 1 min read

Probabilities, Prioritization, and ‘Bayesian Mindset’
Violet Hour · 4 Apr 2023 10:16 UTC · 55 points · 6 comments · 24 min read

Newcomb’s Paradox Explained
Alex Vellins · 31 Mar 2023 21:26 UTC · 2 points · 11 comments · 2 min read

Desensitizing Deepfakes
Phib · 29 Mar 2023 1:20 UTC · 18 points · 8 comments · 1 min read

The space of systems and the space of maps
Jan_Kulveit · 22 Mar 2023 16:05 UTC · 12 points · 0 comments · 5 min read (www.lesswrong.com)

Estimation for sanity checks
NunoSempere · 21 Mar 2023 0:13 UTC · 58 points · 7 comments · 4 min read (nunosempere.com)

Let’s make the truth easier to find
DPiepgrass · 20 Mar 2023 4:28 UTC · 36 points · 8 comments · 24 min read

DiscourseDrome on Tradeoffs
JP Addison · 18 Feb 2023 20:50 UTC · 12 points · 0 comments · 1 min read (www.tumblr.com)

[Linkpost] Michael Huemer on the case for Bayesian statistics
John G. Halstead · 7 Feb 2023 17:52 UTC · 20 points · 2 comments · 1 min read

How to be a good agnostic (and some good reasons to be dogmatic)
peterhartree · 4 Feb 2023 11:08 UTC · 11 points · 1 comment · 3 min read

Epistemic health is a community issue
ConcernedEAs · 2 Feb 2023 15:59 UTC · 11 points · 11 comments · 9 min read

[Question] What improvements should be made to improve EA discussion on heated topics?
Ozzie Gooen · 16 Jan 2023 20:11 UTC · 54 points · 35 comments · 1 min read

A new Heuristic to Update on the Credences of Others
aaron_mai · 16 Jan 2023 11:35 UTC · 22 points · 4 comments · 20 min read

We interviewed 15 China-focused researchers on how to do good research
gabriel_wagner · 19 Dec 2022 19:08 UTC · 46 points · 3 comments · 23 min read

Be less trusting of intuitive arguments about social phenomena
Nathan_Barnard · 18 Dec 2022 1:11 UTC · 43 points · 20 comments · 4 min read

On Epistemics and Communities
JP Addison · 16 Dec 2022 19:11 UTC · 36 points · 0 comments · 3 min read

Update your beliefs with a “Yes/No debate”
jorges · 3 Nov 2022 0:05 UTC · 18 points · 4 comments · 4 min read

Multi-Factor Decision Making Math
Elliot Temple · 30 Oct 2022 16:46 UTC · 1 point · 13 comments · 37 min read (criticalfallibilism.com)

Effective Altruism’s Implicit Epistemology
Violet Hour · 18 Oct 2022 13:38 UTC · 123 points · 12 comments · 28 min read

Paper summary: The Epistemic Challenge to Longtermism (Christian Tarsney)
Global Priorities Institute · 11 Oct 2022 11:29 UTC · 39 points · 5 comments · 4 min read (globalprioritiesinstitute.org)

There is no royal road to alignment
Eleni_A · 17 Sep 2022 13:24 UTC · 18 points · 2 comments · 3 min read

When should you trust your intuition/gut check/hunch over explicit reasoning?
Sharmake · 5 Sep 2022 22:18 UTC · 9 points · 0 comments · 1 min read (80000hours.org)

An Epistemological Account of Intuitions in Science
Eleni_A · 3 Sep 2022 23:21 UTC · 5 points · 0 comments · 17 min read

The Role of “Economism” in the Belief-Formation Systems of Effective Altruism
Thomas Aitken · 1 Sep 2022 7:33 UTC · 28 points · 3 comments · 10 min read (drive.google.com)

Prefer beliefs to credence probabilities
Noah Scales · 1 Sep 2022 2:04 UTC · 3 points · 1 comment · 4 min read

Longtermism and Computational Complexity
David Kinney · 31 Aug 2022 21:51 UTC · 41 points · 46 comments · 27 min read

Who ordered alignment’s apple?
Eleni_A · 28 Aug 2022 14:24 UTC · 5 points · 0 comments · 3 min read

Capitalism, power and epistemology: a critique of EA
Matthew_Doran · 22 Aug 2022 14:20 UTC · 4 points · 19 comments · 23 min read

The Wages of North-Atlantic Bias
Sach Wry · 19 Aug 2022 12:34 UTC · 8 points · 2 comments · 17 min read

LW4EA: Epistemic Legibility
Jeremy · 16 Aug 2022 15:55 UTC · 5 points · 2 comments · 3 min read (www.lesswrong.com)

The Parable of the Boy Who Cried 5% Chance of Wolf
Kat Woods · 15 Aug 2022 14:22 UTC · 76 points · 8 comments · 2 min read

The most important lesson I learned after ten years in EA
Kat Woods · 3 Aug 2022 12:28 UTC · 188 points · 8 comments · 6 min read

[Question] I’m collecting content that showcases epistemic virtues, any suggestions?
Alejandro Acelas · 31 Jul 2022 12:15 UTC · 10 points · 2 comments · 1 min read

Red teaming a model for estimating the value of longtermist interventions—A critique of Tarsney’s “The Epistemic Challenge to Longtermism”
Anjay F · 16 Jul 2022 19:05 UTC · 21 points · 0 comments · 25 min read

Limits to Legibility
Jan_Kulveit · 29 Jun 2022 17:45 UTC · 103 points · 3 comments · 5 min read (www.lesswrong.com)

Street Outreach
Barracuda · 6 Jun 2022 13:40 UTC · 4 points · 3 comments · 2 min read

Global health is important for the epistemic foundations of EA, even for longtermists
Owen Cotton-Barratt · 2 Jun 2022 13:57 UTC · 163 points · 15 comments · 2 min read

[Question] Where can I find good criticisms of EA made by non-EAs?
oh54321 · 2 Jun 2022 0:03 UTC · 26 points · 5 comments · 1 min read

[Question] What important truth do very few people agree with you on?
Tomer_Goloboy · 1 Jun 2022 17:18 UTC · 3 points · 3 comments · 1 min read

My notes on: Sequence thinking vs. cluster thinking
Vasco Grilo · 25 May 2022 15:03 UTC · 24 points · 0 comments · 5 min read

Impact is very complicated
Justis · 22 May 2022 4:24 UTC · 98 points · 12 comments · 6 min read

Flimsy Pet Theories, Enormous Initiatives
Ozzie Gooen · 9 Dec 2021 15:10 UTC · 210 points · 57 comments · 4 min read

Disagreeables and Assessors: Two Intellectual Archetypes
Ozzie Gooen · 5 Nov 2021 9:01 UTC · 91 points · 20 comments · 3 min read

Truthful AI
Owen Cotton-Barratt · 20 Oct 2021 15:11 UTC · 55 points · 14 comments · 10 min read

On Solving Problems Before They Appear: The Weird Epistemologies of Alignment
adamShimi · 11 Oct 2021 8:21 UTC · 28 points · 0 comments · 15 min read

The motivated reasoning critique of effective altruism
Linch · 14 Sep 2021 20:43 UTC · 277 points · 59 comments · 23 min read

[Question] “Epistemic maps” for AI Debates? (or for other issues)
Harrison Durland · 30 Aug 2021 4:59 UTC · 14 points · 8 comments · 5 min read

Epistemic trespassing, or epistemic squatting? | Noahpinion
BrownHairedEevee · 25 Aug 2021 1:50 UTC · 11 points · 1 comment · 1 min read (noahpinion.substack.com)

[Question] Matsés—Are languages providing epistemic certainty of statements not of the interest of the EA community?
mikbp · 8 Jun 2021 19:25 UTC · 15 points · 12 comments · 1 min read

Epistemic Trade: A quick proof sketch with one example
Linch · 11 May 2021 9:05 UTC · 19 points · 2 comments · 8 min read

[Question] Is the current definition of EA not representative of hits-based giving?
Venkatesh · 26 Apr 2021 4:37 UTC · 44 points · 14 comments · 1 min read

Deference for Bayesians
John G. Halstead · 13 Feb 2021 12:33 UTC · 101 points · 29 comments · 7 min read

How modest should you be?
John G. Halstead · 28 Dec 2020 17:47 UTC · 26 points · 10 comments · 7 min read

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”
stefan.torges · 17 Jan 2020 13:28 UTC · 64 points · 0 comments · 1 min read

[Link] The Optimizer’s Curse & Wrong-Way Reductions
Chris Smith · 4 Apr 2019 13:28 UTC · 94 points · 61 comments · 1 min read

Sentientism: Improving the human epistemology and ethics baseline
JamieWoodhouse · 27 Mar 2019 15:34 UTC · 8 points · 6 comments · 1 min read

Against Modest Epistemology
EliezerYudkowsky · 14 Nov 2017 21:26 UTC · 17 points · 11 comments · 15 min read

Sequence thinking vs. cluster thinking
GiveWell · 25 Jul 2016 10:43 UTC · 17 points · 0 comments · 28 min read (blog.givewell.org)

An epistemology for effective altruism?
Benjamin_Todd · 21 Sep 2014 21:46 UTC · 22 points · 19 comments · 3 min read

Navigating the epistemologies of effective altruism
Ozzie_Gooen · 23 Sep 2013 19:50 UTC · 0 points · 1 comment · 5 min read