Strong agree, though I hadn’t thought about it this exact way before. IMO the EA movement actually came from this conception - ‘global health’ is not neglected, but ‘schistosomiasis treatments’ and ‘malaria nets’ are. So it’s always seemed weird to me when EAs dismiss something like climate change work on the grounds that it’s ‘not neglected’. [ETA I just saw that Sam Battis said the same thing. Somehow in 15ish years in the movement, I don’t think I’ve ever heard anyone else say this!]
Additionally, I think most solutions follow an S-curve in terms of the amount of resources put in vs. the number of people actually helped. Doing research on economic growth policies, RCTs, energy technology, etc. is all basically worthless until you have the capacity to deploy something at scale.
Helping the global poor is neglected, and that accounts for most of the bednet outperformance. GiveDirectly, just giving cash, is thought by GiveWell/GHW to be something like 100x better on direct welfare than rich-country consumption (although indirect effects reduce that gap), vs 1000x+ for bednets. So most of the log gains come from doing anything with the global poor at all. Then bednets get a lot of their gains as positive externalities (protecting one person also protects others around them), and you’re left with a little bit of ‘being more confident about bednets than some potential users, based on more investigation of the evidence (as with vaccines)’, and some effects like patience/discounting.
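To make that decomposition concrete, here is a rough back-of-envelope sketch using the round figures above (illustrative numbers only, not GiveWell’s published estimates):

```python
# Rough decomposition of the ~1000x bednet figure cited above.
# Illustrative round numbers, not GiveWell's actual cost-effectiveness estimates.

cash_vs_rich_country = 100     # GiveDirectly vs. rich-country consumption (direct welfare)
bednets_vs_rich_country = 1000  # bednets vs. rich-country consumption

# Multiplier from simply directing resources at the global poor at all:
cause_selection_multiplier = cash_vs_rich_country  # ~100x

# Remaining multiplier from picking bednets over cash within the cause
# (externalities, evidence, discounting, etc.):
within_cause_multiplier = bednets_vs_rich_country / cash_vs_rich_country  # ~10x

print(f"cause selection: ~{cause_selection_multiplier}x, "
      f"intervention choice within the cause: ~{within_cause_multiplier:.0f}x")
```

In log terms, two of the three orders of magnitude come from targeting the global poor at all, and only one from the choice of intervention within the cause.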
Really exceptional intervention-within-area picks can get you a multiplier, but it’s hard to get to the level of difference you see on cause selection, and especially so when you compare attempts to pick out the best in different causes.
Official development assistance is nearly $200 billion annually. I think if that’s going to be called ‘neglected’, the term needs some serious refinement.
People have compared various development interventions like antiretroviral drugs for HIV, which have the same positive externalities and (at least according to a presentation Toby Ord gave a few years ago) still something like a 100-fold difference in expected outcomes relative to AMF.
That $200B includes a lot of aid aimed at political goals other than humanitarian impact, while most of a billion people live on less than $700/yr, the global economy is over $100,000B, and cash transfer programs in rich countries amount to many trillions of dollars. That’s the neglectedness that bumps up global aid interventions relative to rich-country help for the local relative poor.
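To put the disproportion in rough numbers, a sketch using the figures above (and noting that much of ODA isn’t actually targeted at the poorest):

```python
# Back-of-envelope comparison of resource flows, using the round numbers above.
oda_per_year = 200e9        # official development assistance, ~$200B/yr
global_economy = 100_000e9  # gross world product, ~$100,000B/yr
extreme_poor = 0.7e9        # "most of a billion" people living on less than ~$700/yr

print(f"ODA as a share of world product: {oda_per_year / global_economy:.2%}")   # ~0.2%
print(f"ODA per person in extreme poverty: ${oda_per_year / extreme_poor:,.0f}/yr")  # ~$286/yr
```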
You can get fairly arbitrarily bad cost-effectiveness in any area by taking money and wasting it on things that generate less value than the money, e.g. spending 99.9% on digging holes and filling them in, and 0.1% on GiveDirectly. But just handing over the money to the poor is a relevant, attainable baseline.
Calling an area neglected because a lot of money is spent badly sounds like a highly subjective evaluation that’s hard to turn into a useful principle. Sure, $200B annually is a small proportion of the global economy, but so is almost any cause area you can describe. From a quick search, the World Bank explicitly spends slightly more than a tenth of that on climate change, one of the classically ‘non-neglected’ evaluands of EA. It’s hard to know how to compare these figures, since they obviously omit a huge number of other projects, but I doubt the WB constitutes much less than 10% of explicit climate spend. This article advocates a ~$180bn annual budget, so it’s hard to believe current spending isn’t below that.
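Spelling out that reasoning with the rough figures in this comment (the tenth-of-ODA figure and the 10% share are my own loose estimates):

```python
# Sketch of the climate-spending comparison, using the rough figures in the comment.
oda = 200e9                        # ~$200B/yr official development assistance
wb_climate_spend = 0.1 * oda       # "slightly more than a tenth" of that, explicitly on climate
wb_min_share_of_climate = 0.10     # assumed lower bound on the WB's share of explicit climate spend

implied_ceiling = wb_climate_spend / wb_min_share_of_climate  # ~$200B/yr
advocated_budget = 180e9                                      # the article's proposed annual budget

print(f"implied ceiling on explicit climate spend: ~${implied_ceiling/1e9:.0f}B/yr, "
      f"vs an advocated ~${advocated_budget/1e9:.0f}B/yr budget")
```

The point is just that explicit climate spend comes out in the same ballpark as the $200B ODA figure, not orders of magnitude larger.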
Conversely, Alphabet alone had operating expenses for 2022 of $203B, and they’re fairly keen not to end the world, so you could view all of that as AI safety expenditure.
So by what principle would you say AI’s neglectedness > global development’s neglectedness > climate change’s neglectedness?
In this 2022 ML survey the median credence on extinction-level catastrophe from AI is 5%, with 48% of respondents giving at least 10%. Some generalist forecasting platforms put the numbers significantly lower; some forecasting teams or researchers with excellent forecasting records and more knowledge of the area put them higher (with, I think, the tendency being for more information to yield higher forecasts, which is also my own expectation). This scale looks like hundreds of millions of deaths or equivalent this century to me, although certainly many disagree. The argument below goes through with 1%.
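The expected-scale arithmetic behind that claim, as a sketch (the ~8 billion population figure and the “deaths or equivalent” framing are simplifications):

```python
# Rough expected-value arithmetic for the "hundreds of millions of deaths or equivalent" claim.
world_population = 8e9  # approximate current world population

for p_catastrophe in (0.05, 0.01):  # the survey's median credence, and the 1% the argument still goes through with
    expected_deaths = p_catastrophe * world_population
    print(f"P = {p_catastrophe:.0%}: ~{expected_deaths / 1e6:.0f}M expected deaths or equivalent")
# 5% -> ~400M, 1% -> ~80M
```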
Expected damages from climate change over the century in the IPCC reports and published papers (which assume no drastic technological advance, in tension with forecasts about AI development) come to several percent of world product and on the order of 100M deaths.
Global absolute poverty affects most of a billion people, with larger numbers somewhat above those poverty lines, and life expectancy many years shorter than wealthy country averages, so it gets into the range of hundreds of millions of lives lost equivalent. Over half a million die from malaria alone each year.
So without considering distant future generations or really large populations or the like, the scales look similar to me, with poverty and AI ahead of climate change but not vastly (with a more skeptical take on AI risk, poverty ahead of the other two).
“Conversely, Alphabet alone had operating expenses for 2022 of $203B, and they’re fairly keen not to end the world, so you could view all of that as AI safety expenditure.”
How exactly could that be true? The total number of FTEs working on AI alignment, especially scalable alignment, is a tiny, tiny fraction. Google DeepMind has a technical safety team of a few handfuls of people; central Alphabet has none as such. Safety teams at OpenAI and Anthropic are on the same order of magnitude. Aggregate expenditure on AI safety is a few hundreds of millions of dollars, orders of magnitude lower.
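For the gap being claimed here, using the figures stated above (the $300M midpoint for “a few hundreds of millions” is my own rough assumption):

```python
# Orders-of-magnitude gap between dedicated AI safety spending and Alphabet's 2022
# operating expenses, using the figures in the comment above.
import math

ai_safety_spend = 300e6     # "a few hundreds of millions of dollars" (rough midpoint, an assumption)
alphabet_opex_2022 = 203e9  # Alphabet operating expenses, 2022

ratio = ai_safety_spend / alphabet_opex_2022
print(f"dedicated safety spend is ~{ratio:.2%} of Alphabet's opex "
      f"(~{math.log10(1 / ratio):.1f} orders of magnitude smaller)")
# ~0.15%, i.e. nearly three orders of magnitude
```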
I’m not sure this is super relevant to our core disagreement (if we have one), but how are you counting this? Glancing at that article, it looks like a pessimistic take on climate change’s harm puts excess deaths at around 10m per year, and such damage would persist much more than 10 years.
But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?
“How exactly could that be true?”
Because coders who don’t work explicitly on AI alignment still spend their working lives trying to get code to do what they want. The EA/rat communities tend not to consider that ‘AI safety’, but it seems prejudicial not to do so in the widest sense of the concept.
We might consider ‘jobs with “alignment” or “safety” in the title’ to be a neglected and/or more valuable subfield, but to do so IMO we have to acknowledge the OP’s point.
I was going from this: “The DICE baseline emissions scenario results in 83 million cumulative excess deaths by 2100 in the central estimate. Seventy-four million of these deaths can be averted by pursuing the DICE-EMR optimal emissions path.” I didn’t get into deaths vs DALYs (excess deaths among those with less life left to live), chances of scenarios, etc., and gave ‘on the order of’ for slack.
“But I don’t see why we’re talking about scale. Are you defining neglectedness as a ratio of <people potentially killed in worst case>/<dollars spent>?”
Mean, not worst case, and not just deaths. That’s the shape of the most interesting version of the question to me. You could say that cash transfers in every 1000-person town in a country with a billion people (and a uniform cash transfer program) have a millionfold less impact and are a million times more neglected than cash transfers to the country as a whole, cancelling out, but those semantics aren’t really going to be interesting to me.
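A sketch of that cancellation, with purely illustrative numbers:

```python
# Illustration of why slicing a cause into arbitrarily small pieces doesn't change
# the impact-per-dollar ratio: scale and "neglectedness" shrink together and cancel.
country_population = 1_000_000_000
town_population = 1_000
n_towns = country_population // town_population  # a million towns

country_scale = 1.0     # normalize total impact of the nationwide cash transfer program
country_spending = 1.0  # normalize total spending on it

town_scale = country_scale / n_towns        # each town is a millionfold smaller problem
town_spending = country_spending / n_towns  # and receives a millionfold less of the spending

print(country_scale / country_spending == town_scale / town_spending)  # True: the ratio is unchanged
```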
I think it’s fairly clear that there is a vast difference between the work that those concerned with catastrophic AI safety as such have been doing vs. random samples of Google staff, and that in relevant fields (e.g. RLHF, LLM red-teaming, or AI forecasting) they are quite noticeable as a share of global activity. You may disagree. I’ll leave the thread at that.
I’m happy to leave it there, but to clarify I’m not claiming ‘no difference in the type of work they do’, but rather ‘no a priori reason to write one group off as “not concerned with safety”’.
Just a note that GiveWell started out with many cause areas including “US Equality of Opportunity” and it was only after a few years of work that they realized that their other causes (besides global health and development) were not really justifying continued research.