
Value of information

Last edit: 30 May 2022 18:44 UTC by Leo

The value of information is the extent to which additional information would improve an agent's decision. It depends both on how likely the information is to change the agent's decision and on how large an improvement that change would bring. The value of information is usually higher when the agent's relevant credences are less resilient, since new evidence is then more likely to shift those credences and, with them, the agent's decision.
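To make the calculation concrete, here is a minimal sketch in Python of the standard expected-value-of-perfect-information (EVPI) calculation for a toy two-action funding decision. The states, credences, and payoffs are hypothetical illustrations, not drawn from the entry or the posts below.

    # Minimal sketch of expected value of perfect information (EVPI) for a toy
    # two-action decision problem. All numbers are hypothetical illustrations.

    # Possible states of the world and the agent's current credences in them.
    credences = {"intervention_B_works": 0.4, "intervention_B_fails": 0.6}

    # Value of each action in each state (made-up units, e.g. outcomes per $1m).
    payoffs = {
        "fund_A": {"intervention_B_works": 10, "intervention_B_fails": 10},
        "fund_B": {"intervention_B_works": 25, "intervention_B_fails": 2},
    }

    def expected_value(action):
        """Expected payoff of an action under the current credences."""
        return sum(p * payoffs[action][state] for state, p in credences.items())

    # Value of deciding now: pick the action with the highest expected value
    # given current credences.
    value_without_info = max(expected_value(a) for a in payoffs)

    # Value of deciding after learning the true state: in each state the agent
    # picks the best action for that state, weighted by the state's probability.
    value_with_perfect_info = sum(
        p * max(payoffs[a][state] for a in payoffs)
        for state, p in credences.items()
    )

    evpi = value_with_perfect_info - value_without_info
    print(f"Decide now:               {value_without_info:.1f}")
    print(f"Decide with perfect info: {value_with_perfect_info:.1f}")
    print(f"Value of information:     {evpi:.1f}")

With these made-up numbers the information is worth 4.8 units (16 minus 11.2). If the agent were instead nearly certain that intervention B works (say a credence of 0.99), the same calculation yields a value close to zero, because the information would be very unlikely to change the decision.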

Further reading

Askell, Amanda (2017) The moral value of information, Effective Altruism, June 4.

Muehlhauser, Luke (2013) Review of Douglas Hubbard, How to Measure Anything, LessWrong, August 7.

Related entries

alternatives to expected value theory | credence | explore-exploit tradeoff | forecasting | model uncertainty

EA should invest more in exploration

Michael_PJ · 5 Feb 2017 17:11 UTC
23 points
27 comments · 7 min read · EA link

Amanda Askell: The moral value of information

EA Global · 2 Jun 2017 8:48 UTC
19 points
0 comments · 14 min read · EA link
(www.youtube.com)

What is the cost-effectiveness of researching vaccines?

Peter Wildeford · 8 May 2018 7:41 UTC
17 points
0 comments · 10 min read · EA link

When To Find More Information: A Short Explanation

Davidmanheim · 28 Dec 2019 18:00 UTC
76 points
14 comments · 3 min read · EA link

Making decisions under moral uncertainty

MichaelA · 1 Jan 2020 13:02 UTC
44 points
8 comments · 17 min read · EA link

Potential downsides of using explicit probabilities

MichaelA · 20 Jan 2020 2:14 UTC
57 points
22 comments · 18 min read · EA link

Option Value, an Introductory Guide

Caleb_Maresca · 21 Feb 2020 14:45 UTC
31 points
3 comments · 7 min read · EA link

The Moral Value of Information—edited transcript

james · 2 Jul 2020 21:02 UTC
20 points
3 comments · 12 min read · EA link

The case against “EA cause areas”

nadavb · 17 Jul 2021 6:37 UTC
137 points
24 comments · 13 min read · EA link

A General Treatment of the Moral Value of Information

SamNolan · 17 Jul 2021 22:50 UTC
16 points
0 comments · 9 min read · EA link

Narration: The case against “EA cause areas”

D0TheMath · 24 Jul 2021 20:39 UTC
12 points
5 comments · 1 min read · EA link
(anchor.fm)

Prioritization when size matters: Value of information

jh · 7 Jan 2022 5:16 UTC
23 points
0 comments · 2 min read · EA link

Experimental longtermism: theory needs data

Jan_Kulveit · 15 Mar 2022 10:05 UTC
186 points
9 comments · 4 min read · EA link

Bulking information additionalities in global development for medium-term local prosperity

brb243 · 11 Apr 2022 17:52 UTC
4 points
0 comments · 4 min read · EA link

Research vs. non-research work to improve the world: In defense of more research and reflection [linkpost]

Magnus Vinding · 20 May 2022 16:25 UTC
31 points
2 comments · 1 min read · EA link
(magnusvinding.com)

LW4EA: Value of Information: Four Examples

Jeremy · 14 Jun 2022 2:26 UTC
5 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

Estimating the cost-effectiveness of scientific research

Falk Lieder · 16 Jul 2022 12:20 UTC
36 points
5 comments · 17 min read · EA link

Cause Exploration Prize: Distribution of Information Among Humans

markov_user · 12 Aug 2022 0:58 UTC
31 points
3 comments · 13 min read · EA link

Prioritisation should consider potential for ongoing evaluation alongside expected value and evidence quality

freedomandutility · 13 Aug 2022 14:53 UTC
36 points
3 comments · 1 min read · EA link

Value of Information, an example with GiveDirectly

SamNolan · 30 Aug 2022 20:37 UTC
12 points
1 comment · 1 min read · EA link

‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

Froolow · 18 Oct 2022 22:54 UTC
112 points
63 comments · 39 min read · EA link

Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses

SamNolan · 31 Oct 2022 14:31 UTC
116 points
7 comments · 20 min read · EA link
(observablehq.com)

AI X-Risk: Integrating on the Shoulders of Giants

TD_Pilditch · 1 Nov 2022 16:07 UTC
34 points
0 comments · 47 min read · EA link

How important are accurate AI timelines for the optimal spending schedule on AI risk interventions?

Tristan Cook · 16 Dec 2022 16:05 UTC
30 points
0 comments · 6 min read · EA link

[Question] Has private AGI research made independent safety research ineffective already? What should we do about this?

Roman Leventov · 23 Jan 2023 16:23 UTC
15 points
0 comments · 5 min read · EA link

Getting Actual Value from “Info Value”: Example from a Failed Experiment

Nikola · 26 Jan 2023 17:48 UTC
63 points
0 comments · 3 min read · EA link