
Crucial consideration


A crucial consideration is a consideration that warrants a major reassessment of a cause or intervention.

The concept was introduced by Nick Bostrom in a 2007 article[1] and applied in subsequent publications.[2][3]

Related concepts

In addition to the concept of a crucial consideration, Bostrom introduced two related concepts. The first is a crucial consideration component: a consideration that is not itself a crucial consideration, but that has the potential to become one when conjoined with additional considerations that are still unknown.[2] As Bostrom writes, a crucial consideration component is “the kind of thing of which we would say: ‘This looks really intriguing, this could be important; I’m not really sure what to make of it at the moment.’ On its own, maybe it doesn’t tell us anything, but maybe there’s another piece that, when combined, will somehow yield an important result.”[3]

The second is a deliberation ladder: a sequence of crucial considerations resulting in successive reassessments of the same cause or intervention.[2] Consider, for illustration, an altruist who initially becomes a vegan out of concern for the treatment of animals in factory farms. Later, this person encounters the logic of the larder and concludes that consuming animal products is permissible because it causes more animals to exist, and those animals are presumed to have lives worth living. Finally, the altruist comes to believe that the welfare of farmed animals is net negative and reverts to a vegan diet, reasoning that, since demand for animal products increases the number of animals in expectation, it also increases net suffering. Many additional “deliberation ladders” can be imagined, relating to the impact of meat consumption on the number of animals who feed on other animals, on climate change and its effects on wild animals, on public perception of the moral status of nonhuman animals, and so on.

Implications

The potential existence of as-yet-undiscovered crucial considerations poses a serious challenge for any attempt to do good effectively on a large scale. As Bostrom writes, “Our noblest and most carefully considered attempts to effect change in the world might well be pushing things further away from where they ought to be. Perhaps around the corner lurks some crucial consideration that we have ignored, such that if we thought of it and were able to accord it its due weight in our reasoning, it would convince us that our guiding beliefs and our struggles to date had been orthogonal or worse to the direction that would then come to appear to us as the right one.”[4] This challenge is particularly acute for longtermists: the additional difficulty of trying to influence the far future, and the neglect of this area until very recently, strongly suggest that relevant crucial considerations remain undiscovered.

Further reading

Bostrom, Nick (2014) Crucial considerations and wise philanthropy, Effective Altruism, July 9.

Bostrom, Nick (2016) Macrostrategy, Bank of England, April 11.
Starting at 22:00, discusses crucial considerations and related concepts.

Related entries

crux

1. Bostrom, Nick (2007) Technological revolutions: ethics and policy in the dark, in Nigel M. de S. Cameron & M. Ellen Mitchell (eds.) Nanoscale: Issues and Perspectives for the Nano Century, Hoboken, New Jersey: John Wiley & Sons, pp. 129–152, p. 149.

2. Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

3. Bostrom, Nick (2014) Crucial considerations and wise philanthropy, Effective Altruism, July 9.

4. Bostrom, Nick (2007) Technological revolutions, pp. 149–150.

Can a war cause human extinction? Once again, not on priors
Vasco Grilo · 25 Jan 2024 7:56 UTC · 67 points · 29 comments · 18 min read · EA link

A quick PSA on not falling down the stairs and other manageable life risks.
wes R · 11 Jan 2024 3:31 UTC · −16 points · 7 comments · 2 min read · EA link

Can a terrorist attack cause human extinction? Not on priors
Vasco Grilo · 2 Dec 2023 8:20 UTC · 43 points · 8 comments · 15 min read · EA link

Famine deaths due to the climatic effects of nuclear war
Vasco Grilo · 14 Oct 2023 12:05 UTC · 40 points · 20 comments · 66 min read · EA link

The International PauseAI Protest: Activism under uncertainty
Joseph Miller · 12 Oct 2023 17:36 UTC · 123 points · 3 comments · 4 min read · EA link

Nuclear winter scepticism
Vasco Grilo · 13 Aug 2023 10:55 UTC · 110 points · 42 comments · 10 min read · EA link (www.navalgazing.net)

[Question] Will the vast majority of technological progress happen in the longterm future?
Vasco Grilo · 8 Jul 2023 8:40 UTC · 8 points · 0 comments · 2 min read · EA link

The Meat Eater Problem
Vasco Grilo · 17 Jun 2023 6:52 UTC · 50 points · 1 comment · 7 min read · EA link (journalofcontroversialideas.org)

[Question] Are we confident that superintelligent artificial intelligence disempowering humans would be bad?
Vasco Grilo · 10 Jun 2023 9:24 UTC · 15 points · 27 comments · 1 min read · EA link

More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios
Vasco Grilo · 29 Apr 2023 8:24 UTC · 46 points · 39 comments · 13 min read · EA link

[Opzionale] ‘Considerazioni cruciali e filantropia saggia’, di Nick Bostrom
EA Italy · 12 Jan 2023 3:11 UTC · 1 point · 0 comments · 1 min read · EA link (altruismoefficace.it)

What you prioritise is mostly moral intuition
James Özden · 24 Dec 2022 12:06 UTC · 73 points · 8 comments · 12 min read · EA link

Info Lifeguards
jack jay · 7 Oct 2022 21:47 UTC · 5 points · 1 comment · 1 min read · EA link

[Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions)
david_reinstein · 28 Apr 2022 15:49 UTC · 37 points · 21 comments · 7 min read · EA link

Crucial considerations in the field of Wild Animal Welfare (WAW)
Holly_Elmore · 10 Apr 2022 19:43 UTC · 63 points · 10 comments · 3 min read · EA link

Case for emergency response teams
Gavin · 5 Apr 2022 11:08 UTC · 246 points · 48 comments · 5 min read · EA link

Widespread values brainstorming
brb243 · 15 Mar 2022 12:45 UTC · 9 points · 2 comments · 1 min read · EA link

Prioritization Questions for Artificial Sentience
Jamie_Harris · 18 Oct 2021 14:07 UTC · 26 points · 2 comments · 8 min read · EA link (www.sentienceinstitute.org)

My personal cruxes for focusing on existential risks / longtermism / anything other than just video games
MichaelA · 13 Apr 2021 5:50 UTC · 55 points · 28 comments · 2 min read · EA link

Reality has a surprising amount of detail
Aaron Gertler · 11 Apr 2021 21:41 UTC · 57 points · 0 comments · 9 min read · EA link (johnsalvatier.org)

Should marginal longtermist donations support fundamental or intervention research?
MichaelA · 30 Nov 2020 1:10 UTC · 43 points · 4 comments · 15 min read · EA link

Hedging against deep and moral uncertainty
MichaelStJules · 12 Sep 2020 23:44 UTC · 83 points · 13 comments · 9 min read · EA link

Crucial questions about optimal timing of work and donations
MichaelA · 14 Aug 2020 8:43 UTC · 45 points · 4 comments · 27 min read · EA link

Crucial questions for longtermists
MichaelA · 29 Jul 2020 9:39 UTC · 102 points · 17 comments · 14 min read · EA link

My personal cruxes for working on AI safety
Buck · 13 Feb 2020 7:11 UTC · 135 points · 35 comments · 45 min read · EA link

Eight high-level uncertainties about global catastrophic and existential risk
SiebeRozendal · 28 Nov 2019 14:47 UTC · 85 points · 9 comments · 6 min read · EA link

A case for strategy research: what it is and why we need more of it
SiebeRozendal · 20 Jun 2019 20:18 UTC · 69 points · 8 comments · 20 min read · EA link

Persis Eskander: Crucial considerations in wild animal suffering
EA Global · 8 Jun 2018 7:15 UTC · 10 points · 1 comment · 16 min read · EA link (www.youtube.com)

‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom
Pablo · 17 Mar 2017 6:48 UTC · 30 points · 4 comments · 24 min read · EA link (www.stafforini.com)