Last edit: 5 Jun 2022 2:28 UTC by Pablo

Cluelessness is radical uncertainty about the long-term effects of our actions.

Simple versus complex cluelessness

All actions we take have huge effects on the future. One way of seeing this is by considering identity-altering actions. Imagine that Amy passes her friend on the street and they stop to chat. Amy and her friend will now be on a different trajectory than they would have been otherwise. They will interact with different people, at a different time, in a different place, or in a different way than if they hadn’t paused. This will eventually change the circumstances of a conception event such that a different person will now be born because they paused to speak on the street. Now, when the person who is conceived takes actions, Amy will be causally responsible for those actions and their effects. She is also causally responsible for all the effects flowing from those effects.

This is an example of simple cluelessness, which isn’t generally considered problematic. In the above example, Amy has no reason to believe that the many consequences that would follow from pausing would be better than the many consequences that follow from not pausing. Amy has evidential symmetry between the two following claims:

  1. Because Amy paused to chat, a person will eventually be born who goes on to do enormous good.

  2. Because Amy did not pause to chat, a person will eventually be born who goes on to do enormous good.

And similarly, Amy has evidential symmetry between the two following claims:

  1. Because Amy paused to chat, a person will eventually be born who goes on to do enormous harm.

  2. Because Amy did not pause to chat, a person will eventually be born who goes on to do enormous harm.

(The example assumes that there is nothing particularly special about this chat — e.g. Amy and her friend are not chatting about starting a nuclear war or influencing AI policy.)

Two actions exhibit evidential symmetry when, although massive value or disvalue could result from one of them, those effects could equally easily, and in precisely analogous ways, result from the relevant alternative. In the previous scenario, it was assumed that each of the possible people who might be born is as likely as any other to be the next Norman Borlaug, and that each is as likely as any other to be the next Joseph Stalin.

So this situation is not problematic; the possible effects, though they are huge, cancel out precisely in an expected value estimate.
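The cancellation can be made concrete with a small sketch. All the numbers below are hypothetical, chosen only to illustrate the point: when the same probabilities of extreme outcomes attach to both options, the huge possible effects contribute nothing to the expected value difference between them.

```python
# Illustrative sketch with made-up numbers: under evidential symmetry,
# unforeseeable extreme outcomes cancel in the expected value comparison.

p_good = 1e-9   # hypothetical probability the eventual person is a Borlaug-like figure
p_bad = 1e-9    # hypothetical probability of a Stalin-like figure
v_good = 1e12   # hypothetical value of the good outcome
v_bad = -1e12   # hypothetical disvalue of the bad outcome

def expected_unforeseeable_value(p_good: float, p_bad: float) -> float:
    """Expected value contributed by the unforeseeable extreme outcomes."""
    return p_good * v_good + p_bad * v_bad

# Symmetry: the very same probabilities apply whether Amy pauses or not.
ev_pause = expected_unforeseeable_value(p_good, p_bad)
ev_no_pause = expected_unforeseeable_value(p_good, p_bad)

print(ev_pause - ev_no_pause)  # 0.0: the huge possible effects cancel exactly
```

However large `v_good` and `v_bad` are made, the difference stays zero so long as the probabilities are identical across the two options.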

Cluelessness is problematic, however, in situations where there is no evidential symmetry. For a pair of actions (act one and act two), complex cluelessness obtains when:

  1. We have some reasons to think that the unforeseeable consequences of act one would systematically tend to be substantially better than those of act two;

  2. We have some reasons to think that the unforeseeable consequences of act two would systematically tend to be substantially better than those of act one; and

  3. It is unclear how to weigh up these sets of reasons against one another.[1]
For example, there are some reasons to think that the long-term effects of a marginally higher economic growth rate would be good — for example, by fostering more patient and pro-social attitudes. This would mean that taking action to increase economic growth could have much better effects than not taking it. But there are also some reasons to think that the long-term effects of a marginally higher economic growth rate would be bad — for example, via increased carbon emissions contributing to climate change. This would mean that refraining from the action could be much better. It is not immediately obvious which of these is better, but neither can we say that the two have equal expected value: that would require either evidential symmetry or a very detailed expected value estimate.
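The difficulty can be sketched numerically. In the toy example below (all values and probabilities are hypothetical, invented purely for illustration), two probability assignments that both seem reasonable deliver expected values with opposite signs, so no single expected value estimate settles the comparison.

```python
# Sketch with hypothetical numbers: under complex cluelessness, different
# reasonable credences over long-term effects give opposite verdicts on
# whether boosting economic growth is good in expectation.

v_patient_attitudes = 100.0   # hypothetical long-term value via pro-social attitudes
v_climate_damage = -100.0     # hypothetical long-term disvalue via emissions

def ev_boost_growth(p_attitudes: float, p_climate: float) -> float:
    """Expected long-term value of marginally boosting growth."""
    return p_attitudes * v_patient_attitudes + p_climate * v_climate_damage

# Two assignments an agent might find equally defensible; neither is clearly correct.
optimistic = ev_boost_growth(p_attitudes=0.6, p_climate=0.4)   # positive
pessimistic = ev_boost_growth(p_attitudes=0.4, p_climate=0.6)  # negative

print(optimistic, pessimistic)  # 20.0 -20.0: opposite signs, so the comparison is undetermined
```

Because it is unclear how to weigh the two assignments against each other, there is no principled way to collapse them into one number, which is precisely what makes the cluelessness "complex" rather than "simple".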

Some authors claim that complex cluelessness implies that we should be very skeptical of interventions whose claim to cost-effectiveness is through their direct, proximate effects. As Benjamin Todd and others have argued, the long-term effects of these actions probably dominate.[2] But we do not know what the long-term effects of many interventions are or just how good or bad they will be.

Actions we take today have indirect long-term effects, and these seem to dominate the direct near-term effects. In the absence of evidential symmetry, these long-term effects cannot be ignored. So it seems that those concerned about future generations have to justify interventions via their long-term effects, rather than their proximate ones.

Further reading

Greaves, Hilary (2020) Evidence, cluelessness, and the long term, Effective Altruism Forum, November 1.

Mogensen, Andreas (2020) Maximal cluelessness, The Philosophical Quarterly, vol. 71, pp. 141–162.

Schubert, Stefan (2022) Against cluelessness: pockets of predictability, Stefan Schubert’s Blog, May 18.

Tarsney, Christian (2022) The epistemic challenge to longtermism, GPI Working Paper No. 3-2022, Global Priorities Institute.

Related entries

accidental harm | alternatives to expected value theory | crucial consideration | expected value | forecasting | indirect long-term effects | long-range forecasting | model uncertainty | value of information

  1. An explanation of what is meant by ‘systematically’ can be found in section 5 of Greaves, Hilary (2016) Cluelessness, Proceedings of the Aristotelian Society, vol. 116, pp. 311–339.

  2. Todd, Benjamin (2017) Longtermism: the moral significance of future generations, 80,000 Hours, October.
