We’ll be privately messaging winners and explaining how you can claim your prize. Expect a message from us in the next few days.
Questions
Question
Why am I awarding it a prize? (Brief notes!)
Why is scope insensitivity considered a bias instead of just the way human values work? (Link)
This is very much in the spirit of the thread, asks a question others might be wondering about, and focuses on a topic that is pretty fundamental to effective altruism.
🔎 Although this question already has some answers, I think it could benefit from more.
‘If I take EA thinking, ethics, and cause areas more seriously from now on, how can I cope with the guilt and shame of having been so ethically misguided in my previous life?’ [...] (Link)
I think many people struggle with this.
Why does most AI risk research and writing focus on artificial general intelligence? Are there AI risk scenarios which involve narrow AIs? (Link)
Most AI risk conversations in EA do seem to be about general intelligence, and there’s not much discussion about why this is the case. I imagine lots of people are confused or deferring heavily to the crowd. Questions that get at what seems to be an unspoken assumption are often really useful.
🔎 This is a question that might benefit from having more answers.
Has anyone produced writing on being pro-choice and placing a high value on future lives at the same time? I’d love to read about how these perspectives interact! (Link)
Another question that others might share! I like that it asks about a possible intersection of viewpoints rather than assuming either conclusion, or assuming that the two views are incompatible.
Why don’t we discount future lives based on the probability of them not existing? These lives might end up not being born, right?
I understand the idea of not discounting lives due to distance (distance in time as well as distance in space). Knowing a drowning child is 30km away is different from hearing from a friend that there is an x% chance of a drowning child 30km away. In the former, you know something exists; in the latter, there is only a probability that it exists, and you apply a suitable level of confidence in your actions. (Link)
This question is genuine and important, and ends up highlighting a real confusion people have in conversations about “discounting” (“pure” discounting vs discounting for other reasons).
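The distinction these notes point at (probability-weighting vs. “pure” time discounting) can be made concrete with a toy calculation. This sketch is my own, with made-up numbers, not something from the thread:

```python
# Two different adjustments that both get called "discounting":
# (1) probability-weighting: ordinary expected value over whether
#     the future lives come to exist at all;
# (2) "pure" time discounting: down-weighting value merely because
#     it arrives later.

def expected_lives(n_lives: float, p_exist: float) -> float:
    """(1) Weight lives by the probability they ever exist."""
    return n_lives * p_exist

def pure_time_discount(value: float, annual_rate: float, years: int) -> float:
    """(2) Shrink value just for being in the future."""
    return value * (1 - annual_rate) ** years

# 1,000 potential lives a century from now, 60% likely to ever exist:
ev = expected_lives(1000, 0.6)          # 600.0 expected lives
# A 1%/year pure discount would shrink them further, for no
# probabilistic reason at all:
pd = pure_time_discount(ev, 0.01, 100)  # ~220 "discounted" lives
print(ev, round(pd, 1))
```

Discounting for non-existence is just the first adjustment (expected value), which almost everyone accepts; the contested move is the second one.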
Answers
Question (paraphrased)
Answer
Why am I awarding it a prize? (Brief notes!)
What level of existential risk would we need to achieve for existential risk reduction to no longer be seen as “important”?
What’s directly relevant is not the level of existential risk, but how much we can affect it. (If existential risk was high but there was essentially nothing we could do about it, it would make sense to prioritize other issues.) Also relevant is how effectively we can do good in other ways. I’m pretty sure it costs less than 10 billion times as much (in expectation, on the margin) to save the world as to save a human life, which seems like a great deal. (I actually think it costs substantially less.) If it cost much more, x-risk reduction would be less appealing; the exact ratio depends on your moral beliefs about the future and your empirical beliefs about how big the future could be.
The answer notices a confusion in the original question (that the level of existential risk determines whether we should prioritize it), and responds to the confusion (we should prioritize based on how much we can affect the level of risk, what we can do besides work on existential risk reduction, and our empirical and moral beliefs). Note that this is not to say that the original question is bad — the whole point of this thread is to clarify beliefs and be allowed to ask anything.
I also like that this answer is grounded in an example scenario (if the risk were really high).
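The structure of the answer’s ratio argument can be sketched numerically. This is my own toy version: the 10-billion ratio comes from the answer, while the $5,000 cost-per-life is an assumption I picked purely for illustration:

```python
# The answer's bound: saving the world costs less than 10 billion times
# as much as saving one life (in expectation, on the margin).
cost_per_life = 5_000.0            # assumed, for illustration only
world_cost = 10e9 * cost_per_life  # implied upper bound on x-risk cost

lives_per_dollar_direct = 1 / cost_per_life
# On pure expected-lives grounds, x-risk reduction wins exactly when the
# expected future population it protects exceeds the 10-billion ratio:
for expected_future_lives in (1e9, 1e11):
    lives_per_dollar_xrisk = expected_future_lives / world_cost
    print(expected_future_lives,
          lives_per_dollar_xrisk > lives_per_dollar_direct)
```

Which branch you land on is exactly the answer’s point: the verdict turns on your empirical beliefs about how big the future could be.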
Does anyone have a good list of books related to existential and global catastrophic risk? This doesn’t have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. [...]
In no particular order. I’ll add to this if I think of extra books. [… LIST]
It’s good to see someone putting together a collection, and the list has lots of relevant books, sorted by topic!
Why is scope insensitivity considered a bias instead of just the way human values work?
Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or maximizing a utility function). Suppose you’re willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don’t run out of money) your willingness to pay is roughly linear in the number of birds you save.
Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2000 birds and $88 for 200,000 birds. So if you think this represents their true values, people were willing to pay $0.04 per bird for the first 2000 birds but only $0.00004 per bird for the next 198,000 birds. This is a factor-of-1000 difference; most of the time when people have this much variance in price, they are either being irrational, or there are huge diminishing returns and they really value something else that we can identify. For example, if someone values the first 2 movie tickets at $1000 each but further movie tickets at only $1, maybe they really enjoy the experience of going with a companion, and the feeling of happiness is not increased by a third ticket. So in the birds example, it seems plausible that most people value the feeling of having saved some birds.
Why should you be consistent? One reason is the triage framing, which is given in Replacing Guilt. Another reason is the money-pump argument: if you value birds at $1 per 100 and $2 per 1000, and are willing to make trades in either direction, there is a series of trades that causes you to lose both money and birds.
All of this relies on you caring about consequences somewhat. If your morality is entirely duty-based or has some other foundation, there are other arguments but they probably aren’t as strong and I don’t know them.
The answer is pretty thorough. The birds study is not my favorite example, but I think it serves as a good illustration here, and I appreciate the note about diminishing returns. The links in the answer are valuable, and allow people wondering about this to explore on their own. I also like that the answer is caveated (“not a philosopher”).
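To make the answer’s arithmetic easy to check, here is a quick sketch (mine, using only the dollar figures quoted in the answer):

```python
# Willingness-to-pay figures from the bird study as quoted in the answer.
wtp_2000_birds = 80.0
wtp_200k_birds = 88.0

per_bird_first = wtp_2000_birds / 2_000                      # $0.04/bird
per_bird_next = (wtp_200k_birds - wtp_2000_birds) / 198_000  # ~$0.00004/bird

# The ~1000x gap the answer points at:
ratio = per_bird_first / per_bird_next
print(per_bird_first, round(per_bird_next, 5), round(ratio))
```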
Does anyone know why the Gates Foundation doesn’t fill the GiveWell top charities’ funding gaps?
I wrote a post about this 7 years ago! Still roughly valid. (Link)
[Joint prize for these two comments.]
These comments link to a relevant post (written by the commenter) and update the old content with new information. Moreover, the answer is not what you might expect and shares lots of interesting context.
One recent paper suggests that an estimated additional $200–328 billion per year is required for primary care and public health interventions from 2020 to 2030 in 67 low-income and middle-income countries, and that this would save 60 million lives. But if you look at just the amount needed in low-income countries for health care ($396B) and divide by the 16.2 million deaths averted by that spending, it suggests an average cost-effectiveness of ~$25k per death averted.
Other global health interventions can be similarly or more effective: a 2014 Lancet article estimates that, in low-income countries, it costs $4,205 to avert a death through extra spending on health[22]. Another analysis suggests that this trend will continue and that from 2015 to 2030 additional spending in low-income countries will avert a death for $4,000–11,000[23].
For comparison, in high-income countries, governments spend $6.4 million to prevent a death (a measure called the “value of a statistical life”)[24]. This is not surprising, given that the poorest countries spend less than $100 per person per year on health on average, while high-income countries spend almost $10,000 per person per year[25].
GiveDirectly is a charity that can productively absorb very large amounts of donations, because they give unconditional cash transfers to extremely poor people in low-income countries. A Cochrane review suggests that such unconditional cash transfers “probably or may improve some health outcomes”[21]. One analysis suggests that cash transfers are roughly as effective as averting a death at a cost on the order of $10k.
So essentially, cost-effectiveness doesn’t drop off sharply after GiveWell’s top charities are “fully funded”: one could spend billions and billions at similar cost-effectiveness, and Gates only has ~$100B and only spends ~$5B a year.
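As a rough check on the figures in this answer (my arithmetic, using only numbers quoted above):

```python
# The $396B / 16.2M division the answer describes:
low_income_need = 396e9   # health-care funding needed, low-income countries
deaths_averted = 16.2e6
avg_cost = low_income_need / deaths_averted
print(round(avg_cost))    # ~24,000: close to the quoted ~$25k/death

# The gap between rich-country spending per statistical life and the
# marginal cost of averting a death in low-income countries:
vsl_high_income = 6.4e6          # "value of a statistical life"
lancet_marginal_cost = 4_205     # 2014 Lancet estimate
print(round(vsl_high_income / lancet_marginal_cost))  # ~1,500x
```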
It seems that AI safety discussions assume that once general intelligence is achieved, recursive improvement means that superintelligence is inevitable.
How confident are safety researchers about this point?
At some point, the difficulty of additional improvements exceeds the increase in intelligence and the AI will eventually no longer be able to improve itself. Why do safety researchers expect this point to be a vast superintelligence rather than something only slightly smarter than a human?
Definitely not an expert, but I think there is still no consensus on “slow takeoff vs fast takeoff” (fast takeoff is sometimes referred to as FOOM).
It’s a very important topic of disagreement, see e.g. https://www.lesswrong.com/posts/hRohhttbtpY3SHmmD/takeoff-speeds-have-a-huge-effect-on-what-it-means-to-work-1
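The questioner’s premise (each improvement step gets harder) can be made concrete with a toy model. This is purely my own illustrative sketch, not any safety researcher’s actual model:

```python
def self_improve(start: float, first_gain: float, decay: float, steps: int) -> float:
    """Capability grows, but each round of self-improvement yields only
    `decay` times the previous round's gain (diminishing returns)."""
    capability, gain = start, first_gain
    for _ in range(steps):
        capability += gain
        gain *= decay
    return capability

HUMAN_LEVEL = 1.0
# With sharply diminishing returns, the process plateaus barely above
# human level; with gently diminishing returns, the same process ends
# up an order of magnitude higher. Disagreement over which regime
# applies is roughly the slow-vs-fast-takeoff debate.
plateau_low = self_improve(HUMAN_LEVEL, first_gain=0.1, decay=0.5, steps=1000)
plateau_high = self_improve(HUMAN_LEVEL, first_gain=0.1, decay=0.99, steps=1000)
print(round(plateau_low, 2), round(plateau_high, 2))
```

Nothing in the premise itself picks out one regime; that is why the ceiling’s height is a separate, contested empirical question.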
Winners of the small prize
We’ll be privately messaging winners and explaining how you can claim your prize. Expect a message from us in the next few days.