
Epistemology

Last edit: 2 Jun 2022 19:35 UTC by Leo

Epistemology is the study of how people should form credences about the nature of the world.

Beliefs and credences are purely cognitive attitudes: they simply represent how we take the world to be, not how we want it to be. A person might believe that it will rain, for example, even though they hope that it will not.

Beliefs are all-or-nothing attitudes: we either believe that it will rain or we do not. Credences, by contrast, reflect how likely we think it is that something is true, expressed as a real number between 0 and 1. For example, we might think that there is an 80% chance that it will rain, and therefore have a credence of 0.8 that it will rain.

It is widely held that beliefs are rational if they are supported by our evidence, and that credences are rational if they satisfy the probability axioms (for example, no credence should exceed 1) and are revised in accordance with Bayes’ rule.
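To make the updating step concrete, here is a minimal sketch of Bayes’ rule in Python. The function name and the numbers are hypothetical illustrations, not part of the entry: we start from the credence of 0.8 that it will rain and update on some evidence (say, dark clouds) that is more likely if rain is coming.

```python
def bayes_update(prior, likelihood, likelihood_if_not):
    """Return the posterior credence P(H|E) via Bayes' rule.

    prior: P(H), the credence in the hypothesis before seeing evidence E
    likelihood: P(E|H), the probability of the evidence if H is true
    likelihood_if_not: P(E|not-H), the probability of the evidence if H is false
    """
    # P(E) by the law of total probability
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: credence 0.8 that it will rain; dark clouds appear
# on 90% of rainy mornings but only 30% of dry ones.
posterior = bayes_update(prior=0.8, likelihood=0.9, likelihood_if_not=0.3)
# posterior = 0.72 / (0.72 + 0.06) ≈ 0.923
```

Note that the posterior stays between 0 and 1 whenever the inputs are probabilities, in keeping with the probability axioms mentioned above.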

Improving the accuracy of beliefs

One way to improve a person’s capacity to do good is to increase the accuracy of their beliefs. Since people’s actions are determined by their desires and their beliefs, a person aiming to do good will generally do more good the more accurate their beliefs are.

Examples of belief-improving work include reading books, crafting arguments in moral philosophy, writing articles about important problems, and making scientific discoveries.

Two distinctions are relevant in this context. First, a person can build capacity by improving either their factual or their normative beliefs. Second, a person can build capacity by improving either particular beliefs or general processes of belief-formation.

Further reading

Steup, Matthias & Ram Neta (2005) Epistemology, The Stanford Encyclopedia of Philosophy, December 14 (updated 11 April 2020).

Related entries

Bayesian epistemology | decision theory | epistemic deference | rationality

Epistemic Hell

rogersbacon1 · 27 Jan 2024 17:17 UTC
10 points · 1 comment · 14 min read · EA link
(www.secretorum.life)

Probabilities, Prioritization, and ‘Bayesian Mindset’

Violet Hour · 4 Apr 2023 10:16 UTC
66 points · 6 comments · 24 min read · EA link

Project ideas: Epistemics

Lukas Finnveden · 4 Jan 2024 7:26 UTC
43 points · 1 comment · 17 min read · EA link
(lukasfinnveden.substack.com)

In favour of exploring nagging doubts about x-risk

Owen Cotton-Barratt · 25 Jun 2024 23:52 UTC
89 points · 15 comments · 2 min read · EA link

Flimsy Pet Theories, Enormous Initiatives

Ozzie Gooen · 9 Dec 2021 15:10 UTC
212 points · 57 comments · 4 min read · EA link

Distancing EA from rationality is foolish

Jan_Kulveit · 25 Jun 2024 21:02 UTC
136 points · 32 comments · 2 min read · EA link

Thinking of Convenience as an Economic Term

Ozzie Gooen · 5 May 2023 19:09 UTC
28 points · 5 comments · 12 min read · EA link

On missing moods and tradeoffs

Lizka · 9 May 2023 10:06 UTC
50 points · 3 comments · 2 min read · EA link

Longtermism and Computational Complexity

David Kinney · 31 Aug 2022 21:51 UTC
41 points · 46 comments · 25 min read · EA link

[Link] The Optimizer’s Curse & Wrong-Way Reductions

Chris Smith · 4 Apr 2019 13:28 UTC
94 points · 61 comments · 1 min read · EA link

[Question] What concrete question would you like to see debated? I might organise some debates.

Nathan Young · 18 Apr 2023 16:13 UTC
35 points · 50 comments · 1 min read · EA link

Disagreeables and Assessors: Two Intellectual Archetypes

Ozzie Gooen · 5 Nov 2021 9:01 UTC
91 points · 20 comments · 3 min read · EA link

In favor of steelmanning

JP Addison🔸 · 1 May 2023 15:33 UTC
27 points · 3 comments · 3 min read · EA link

Deference for Bayesians

John G. Halstead · 13 Feb 2021 12:33 UTC
101 points · 30 comments · 7 min read · EA link

The motivated reasoning critique of effective altruism

Linch · 14 Sep 2021 20:43 UTC
285 points · 59 comments · 23 min read · EA link

Desensitizing Deepfakes

Phib · 29 Mar 2023 1:20 UTC
22 points · 10 comments · 1 min read · EA link

The most important lesson I learned after ten years in EA

Kat Woods · 3 Aug 2022 12:28 UTC
193 points · 8 comments · 6 min read · EA link

How much do you believe your results?

Eric Neyman · 5 May 2023 19:51 UTC
211 points · 14 comments · 1 min read · EA link

Solutions to problems with Bayesianism

Bob Jacobs 🔸 · 4 Nov 2023 12:15 UTC
27 points · 2 comments · 21 min read · EA link

My Current Claims and Cruxes on LLM Forecasting & Epistemics

Ozzie Gooen · 26 Jun 2024 0:40 UTC
44 points · 7 comments · 24 min read · EA link

Say how much, not more or less versus someone else

Gregory Lewis🔸 · 28 Dec 2023 22:24 UTC
100 points · 10 comments · 5 min read · EA link

Ten Commandments for Aspiring Superforecasters

Vasco Grilo🔸 · 20 Feb 2024 13:01 UTC
13 points · 2 comments · 1 min read · EA link
(goodjudgment.com)

A trilogy on anti-philanthropic misdirection

Richard Y Chappell🔸 · 5 Apr 2024 16:39 UTC
103 points · 12 comments · 1 min read · EA link
(rychappell.substack.com)

Deep atheism and AI risk

Joe_Carlsmith · 4 Jan 2024 18:58 UTC
64 points · 4 comments · 1 min read · EA link

Epistemic Trade: A quick proof sketch with one example

Linch · 11 May 2021 9:05 UTC
19 points · 3 comments · 8 min read · EA link

Why is it so hard to know if you’re helping?

Vasco Grilo🔸 · 16 Oct 2024 16:08 UTC
58 points · 6 comments · 12 min read · EA link
(www.theintrinsicperspective.com)

[Question] Where can I find good criticisms of EA made by non-EAs?

oh54321 · 2 Jun 2022 0:03 UTC
26 points · 5 comments · 1 min read · EA link

Truthseeking is the ground in which other principles grow

Elizabeth · 27 May 2024 1:11 UTC
104 points · 17 comments · 1 min read · EA link

Predictable updating about AI risk

Joe_Carlsmith · 8 May 2023 22:05 UTC
130 points · 12 comments · 36 min read · EA link

[Question] Matsés—Are languages providing epistemic certainty of statements not of the interest of the EA community?

Miquel Banchs-Piqué (prev. mikbp) · 8 Jun 2021 19:25 UTC
15 points · 12 comments · 1 min read · EA link

Limits to Legibility

Jan_Kulveit · 29 Jun 2022 17:45 UTC
103 points · 3 comments · 5 min read · EA link
(www.lesswrong.com)

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
64 points · 0 comments · 1 min read · EA link

[Question] What improvements should be made to improve EA discussion on heated topics?

Ozzie Gooen · 16 Jan 2023 20:11 UTC
54 points · 35 comments · 1 min read · EA link

Street Outreach

Barracuda · 6 Jun 2022 13:40 UTC
4 points · 2 comments · 2 min read · EA link

Global health is important for the epistemic foundations of EA, even for longtermists

Owen Cotton-Barratt · 2 Jun 2022 13:57 UTC
163 points · 15 comments · 2 min read · EA link

How to be a good agnostic (and some good reasons to be dogmatic)

peterhartree · 4 Feb 2023 11:08 UTC
11 points · 1 comment · 3 min read · EA link

DiscourseDrome on Tradeoffs

JP Addison🔸 · 18 Feb 2023 20:50 UTC
12 points · 0 comments · 1 min read · EA link
(www.tumblr.com)

Impact is very complicated

Justis · 22 May 2022 4:24 UTC
98 points · 12 comments · 7 min read · EA link

[Question] “Epistemic maps” for AI Debates? (or for other issues)

Marcel D · 30 Aug 2021 4:59 UTC
14 points · 9 comments · 5 min read · EA link

Controlling for a thinker’s big idea

Vasco Grilo🔸 · 21 Oct 2023 7:56 UTC
60 points · 11 comments · 8 min read · EA link
(magnusvinding.com)

Truthful AI

Owen Cotton-Barratt · 20 Oct 2021 15:11 UTC
55 points · 14 comments · 10 min read · EA link

How modest should you be?

John G. Halstead · 28 Dec 2020 17:47 UTC
26 points · 10 comments · 7 min read · EA link

Sequence thinking vs. cluster thinking

GiveWell · 25 Jul 2016 10:43 UTC
17 points · 0 comments · 28 min read · EA link
(blog.givewell.org)

The Parable of the Boy Who Cried 5% Chance of Wolf

Kat Woods · 15 Aug 2022 14:22 UTC
77 points · 8 comments · 2 min read · EA link

[Question] Is the current definition of EA not representative of hits-based giving?

Venkatesh · 26 Apr 2021 4:37 UTC
44 points · 14 comments · 1 min read · EA link

On Epistemics and Communities

JP Addison🔸 · 16 Dec 2022 19:11 UTC
36 points · 0 comments · 3 min read · EA link

[Question] What important truth do very few people agree with you on?

Tomer_Goloboy · 1 Jun 2022 17:18 UTC
3 points · 3 comments · 1 min read · EA link

Uncertainty over time and Bayesian updating

David Rhys Bernard · 25 Oct 2023 15:51 UTC
63 points · 2 comments · 28 min read · EA link

Epistemics (Part 2: Examples) | Reflective Altruism

Eevee🔹 · 19 May 2023 21:28 UTC
34 points · 0 comments · 2 min read · EA link
(ineffectivealtruismblog.com)

Using Points to Rate Different Kinds of Evidence

Ozzie Gooen · 25 Aug 2023 19:26 UTC
33 points · 6 comments · 6 min read · EA link

Dispelling the Anthropic Shadow (Teruji Thomas)

Global Priorities Institute · 16 Oct 2024 13:25 UTC
11 points · 1 comment · 3 min read · EA link
(globalprioritiesinstitute.org)

Epistemic trespassing, or epistemic squatting? | Noahpinion

Eevee🔹 · 25 Aug 2021 1:50 UTC
11 points · 1 comment · 1 min read · EA link
(noahpinion.substack.com)

The most basic rationality techniques are often neglected

Vasco Grilo🔸 · 26 Aug 2024 15:45 UTC
49 points · 4 comments · 2 min read · EA link
(stefanschubert.substack.com)

Navigating the epistemologies of effective altruism

Ozzie_Gooen · 23 Sep 2013 19:50 UTC
0 points · 1 comment · 5 min read · EA link

Winning isn’t enough

Anthony DiGiovanni · 5 Nov 2024 11:43 UTC
27 points · 3 comments · 1 min read · EA link

The Wages of North-Atlantic Bias

Sach Wry · 19 Aug 2022 12:34 UTC
8 points · 2 comments · 17 min read · EA link

Capitalism, power and epistemology: a critique of EA

Matthew_Doran · 22 Aug 2022 14:20 UTC
13 points · 19 comments · 23 min read · EA link

Paper summary: The Epistemic Challenge to Longtermism (Christian Tarsney)

Global Priorities Institute · 11 Oct 2022 11:29 UTC
39 points · 5 comments · 4 min read · EA link
(globalprioritiesinstitute.org)

An epistemology for effective altruism?

Benjamin_Todd · 21 Sep 2014 21:46 UTC
22 points · 19 comments · 3 min read · EA link

Be less trusting of intuitive arguments about social phenomena

Nathan_Barnard · 18 Dec 2022 1:11 UTC
43 points · 20 comments · 4 min read · EA link

When should you trust your intuition/gut check/hunch over explicit reasoning?

Sharmake · 5 Sep 2022 22:18 UTC
9 points · 0 comments · 1 min read · EA link
(80000hours.org)

Red teaming a model for estimating the value of longtermist interventions—A critique of Tarsney’s “The Epistemic Challenge to Longtermism”

Anjay F · 16 Jul 2022 19:05 UTC
21 points · 0 comments · 30 min read · EA link

The Role of “Economism” in the Belief-Formation Systems of Effective Altruism

Thomas Aitken · 1 Sep 2022 7:33 UTC
27 points · 3 comments · 10 min read · EA link
(drive.google.com)

Multi-Factor Decision Making Math

Elliot Temple · 30 Oct 2022 16:46 UTC
2 points · 13 comments · 37 min read · EA link
(criticalfallibilism.com)

Against Modest Epistemology

EliezerYudkowsky · 14 Nov 2017 21:26 UTC
18 points · 11 comments · 15 min read · EA link

[Question] I’m collecting content that showcases epistemic virtues, any suggestions?

Alejandro Acelas · 31 Jul 2022 12:15 UTC
10 points · 2 comments · 1 min read · EA link

Effective Altruism’s Implicit Epistemology

Violet Hour · 18 Oct 2022 13:38 UTC
129 points · 12 comments · 28 min read · EA link

An Epistemological Account of Intuitions in Science

Eleni_A · 3 Sep 2022 23:21 UTC
5 points · 0 comments · 17 min read · EA link

Prefer beliefs to credence probabilities

Noah Scales · 1 Sep 2022 2:04 UTC
3 points · 1 comment · 4 min read · EA link

LW4EA: Epistemic Legibility

Jeremy · 16 Aug 2022 15:55 UTC
5 points · 2 comments · 3 min read · EA link
(www.lesswrong.com)

Update your beliefs with a “Yes/No debate”

jorges · 3 Nov 2022 0:05 UTC
18 points · 4 comments · 4 min read · EA link

We interviewed 15 China-focused researchers on how to do good research

gabriel_wagner · 19 Dec 2022 19:08 UTC
47 points · 3 comments · 23 min read · EA link

On Solving Problems Before They Appear: The Weird Epistemologies of Alignment

adamShimi · 11 Oct 2021 8:21 UTC
28 points · 0 comments · 15 min read · EA link

My notes on: Sequence thinking vs. cluster thinking

Vasco Grilo🔸 · 25 May 2022 15:03 UTC
24 points · 0 comments · 5 min read · EA link

There is no royal road to alignment

Eleni_A · 17 Sep 2022 13:24 UTC
18 points · 2 comments · 3 min read · EA link

Who ordered alignment’s apple?

Eleni_A · 28 Aug 2022 14:24 UTC
5 points · 0 comments · 3 min read · EA link

A new Heuristic to Update on the Credences of Others

aaron_mai · 16 Jan 2023 11:35 UTC
22 points · 4 comments · 20 min read · EA link

Epistemic health is a community issue

ConcernedEAs · 2 Feb 2023 15:59 UTC
12 points · 11 comments · 9 min read · EA link

[Linkpost] Michael Huemer on the case for Bayesian statistics

John G. Halstead · 7 Feb 2023 17:52 UTC
20 points · 2 comments · 1 min read · EA link

Let’s make the truth easier to find

DPiepgrass · 20 Mar 2023 4:28 UTC
36 points · 8 comments · 24 min read · EA link

AI Safety is Dropping the Ball on Clown Attacks

trevor1 · 21 Oct 2023 23:15 UTC
−17 points · 0 comments · 1 min read · EA link

Estimation for sanity checks

NunoSempere · 21 Mar 2023 0:13 UTC
64 points · 7 comments · 4 min read · EA link
(nunosempere.com)

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos · 14 Jul 2023 17:10 UTC
5 points · 1 comment · 6 min read · EA link

Expert trap: Ways out (Part 3 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Pawel Sysiak · 22 Jul 2023 13:05 UTC
2 points · 0 comments · 9 min read · EA link

The space of systems and the space of maps

Jan_Kulveit · 22 Mar 2023 16:05 UTC
12 points · 0 comments · 5 min read · EA link
(www.lesswrong.com)

Newcomb’s Paradox Explained

Alex Vellins · 31 Mar 2023 21:26 UTC
2 points · 11 comments · 2 min read · EA link

EA is underestimating intelligence agencies and this is dangerous

trevor1 · 26 Aug 2023 16:52 UTC
28 points · 4 comments · 10 min read · EA link

Expert trap: What is it? (Part 1 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Pawel Sysiak · 6 Jun 2023 15:05 UTC
3 points · 0 comments · 8 min read · EA link

Expert trap (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Pawel Sysiak · 9 Jun 2023 22:53 UTC
3 points · 0 comments · 7 min read · EA link

Announcing the First Workshop on the Economics of Pandemic Preparedness

matteo · 3 Apr 2024 11:18 UTC
3 points · 0 comments · 1 min read · EA link

How Do We Know What We Know: The Tripartite Theory of Knowledge

Siya Sawhney · 6 Jul 2024 10:18 UTC
2 points · 1 comment · 3 min read · EA link

Contagious Beliefs—Simulating Political Alignment

Non-zero-sum James · 13 Oct 2024 1:38 UTC
4 points · 3 comments · 2 min read · EA link
(nonzerosum.games)

How did you update on AI Safety in 2023?

Chris Leong · 23 Jan 2024 2:21 UTC
30 points · 5 comments · 1 min read · EA link

Differential knowledge interconnection

Roman Leventov · 12 Oct 2024 12:52 UTC
3 points · 1 comment · 1 min read · EA link

An ambitious project of redefining what science is based on process philosophy

Michał Terpiłowski · 18 Oct 2024 5:14 UTC
0 points · 0 comments · 1 min read · EA link
(youtu.be)

Value capture

Erich_Grunewald 🔸 · 26 Oct 2024 22:46 UTC
37 points · 0 comments · 2 min read · EA link
(jesp.org)

Towards the Operationalization of Philosophy & Wisdom

Thane Ruthenis · 28 Oct 2024 19:45 UTC
1 point · 1 comment · 1 min read · EA link
(aiimpacts.org)

AI & wisdom 1: wisdom, amortised optimisation, and AI

L Rudolf L · 29 Oct 2024 13:37 UTC
14 points · 0 comments · 1 min read · EA link
(rudolf.website)

Sentientism: Improving the human epistemology and ethics baseline

JamieWoodhouse · 27 Mar 2019 15:34 UTC
8 points · 6 comments · 1 min read · EA link