Note that Severin is a coauthor on this post, though I haven’t been able to find a way to add his EA Forum account on a crosspost from LessWrong.
Stampy’s AI Safety Info soft launch
AISafety.info “How can I help?” FAQ
Announcing AISafety.info’s Write-a-thon (June 16-18) and Second Distillation Fellowship (July 3-October 2)
All AGI Safety questions welcome (especially basic ones) [May 2023]
We tried to write a related answer on Stampy’s AI Safety Info:
How could a superintelligent AI use the internet to take over the physical world?
We’re interested in any feedback on improving it, since this is a question a lot of people ask. For example, are there major gaps in the argument that could be addressed without giving useful information to bad actors?
Thanks for reporting the broken links. It looks like a problem with the way Stampy is importing the LessWrong tag. Until the Stampy page is fixed, following the links from LessWrong should work.
There’s an article on Stampy’s AI Safety Info that discusses the differences between FOOM and some other related concepts. FOOM seems to be used synonymously with “hard takeoff” or perhaps with “hard takeoff driven by recursive self-improvement”; I don’t think it has a technical definition separate from that. At the time of the FOOM debate, it was taken more for granted that a hard takeoff would involve recursive self-improvement, whereas now there seems to be more emphasis by MIRI people on the possibility that ordinary “other-improvement” (scaling up and improving AI systems) could result in large performance leaps before recursive self-improvement became important.
OK, thanks for the link. People can now use this form instead and I’ve edited the post to point at it.
Like you say, people who are interested in AI existential risk tend to be secular/atheists, which makes them uninterested in these questions. Conversely, people who see religion as an important part of their lives tend not to be interested in AI safety or technological futurism in general. I think people have been averse to mixing ideas about AI existential risk with religious ideas, for both epistemic reasons (worries that predictions and concepts would start being driven by meaning-making motives) and reputational reasons (worries that it would become easier for critics to dismiss the predictions and concepts as being driven by meaning-making motives).
(I’m happy to be asked questions, but just so people don’t get the wrong idea, the general intent of the thread is for questions to be answerable by whoever feels like answering them.)
Thank you! I linked this from the post (last bullet point under “guidelines for questioners”). Let me know if you’d prefer that I change or remove that.
All AGI Safety questions welcome (especially basic ones) [April 2023]
As I understand it, overestimation of the climate sensitivity tails has been understood for a long time, arguably longer than EA has existed, and sources like Wagner & Weitzman were knowably inaccurate even when they were published. Also, as I understand it, RCP8.5 has been considered much worse than the expected no-policy outcome since the beginning (and has only become more so over time), despite often being presented as the expected no-policy outcome. It seems to me that referring to most of the information presented by this post as “news” understates the extent to which the EA movement and others are to blame for not having looked below the surface earlier.
What does an eventual warming of six degrees imply for the amount of warming that will take place in (as opposed to due to emissions in), say, the next century? The amount of global catastrophic risk seems like it depends more on whether warming outpaces humanity’s ability to adapt than on how long warming continues.
I was thinking e.g. of Nordhaus’s result that a modest amount of mitigation is optimal. He’s often criticized for his assumptions about discount rate and extreme scenarios, but neither of those is causing the difference in estimates here.
According to your link, recent famines have killed about 1M per decade, so for climate change to kill 1-5M per year through famine, it would have to increase the problem by a factor of 10-50 despite advancing technology and increasing wealth. That seems clearly wrong as a central estimate. The spreadsheet based on the WHO report says 85k-95k additional deaths due to undernutrition, though as you mention, there are limitations to this estimate. (And I guess famine deaths are just a small subset of undernutrition deaths?) Halstead also discusses this issue under “crops”.
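In case the arithmetic is useful, here’s the rough calculation behind that factor of 10-50 (the per-decade baseline is from the linked source; the 1-5M/year figures are from the claim I’m questioning):

```python
# Rough arithmetic behind the factor of 10-50 above.
baseline_famine_deaths_per_year = 1_000_000 / 10  # ~1M per decade -> ~100k/year

# Climate-driven famine deaths of 1-5 million per year would require the
# problem to grow by these factors relative to the recent baseline:
for claimed_per_year in (1_000_000, 5_000_000):
    print(claimed_per_year / baseline_famine_deaths_per_year)  # 10.0, then 50.0
```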
I think the upper end of Halstead’s <1%-3.5% x-risk estimate is implausible for a few reasons:
1. As his paper notes and his climate x-risk writeup further discusses, extreme change would probably happen gradually instead of abruptly.
2. As his paper also notes, there’s a case that issues with priors and multiple lines of evidence imply the tails of equilibrium climate sensitivity are much less fat than those used by Weitzman. As I understand it, ECS > 10 would imply that paleoclimate estimates, estimates based on the instrumental record, and climate models are all highly misleading. I don’t know how this sort of reasoning relates to Earth system feedbacks, but I guess the thresholds for them to become relevant would be less likely to be crossed.
3. Even if some of it were abrupt, a 10 degree rise would probably not be an existential disaster in the strict sense, though it would be horrible. (On the other hand, maybe a less than 10 degree rise would still have some risk of causing an existential disaster through some indirect effect on the stability of civilization.)
4. All estimates of the chance that a particular development will cause an existential disaster have to account both for the possibility that some other development will have caused an existential disaster by that time and for the possibility that some other development will have made humanity mostly immune to existential disasters.
Ah, it looks like I was myself confused by the “deaths/year” in line 20 and onward of the original, which represent an increase per year in the number of additional deaths per year. My apologies. At this point I don’t understand the GWWC article’s reasoning for not multiplying by years an additional time.
My prior was that, since economists argue over the relative value of mitigation (at least beyond low hanging fruit) and present consumption, and present consumption isn’t remotely competitive with global health interventions, a calculation that shows mitigation to be competitive with global health interventions is likely to be wrong. But after looking it over another time, I now think that’s accounted for mostly by:
1. The assumption that climate change increases all causes of death by the same percentage as the causes of death investigated here, which, as the article notes, seems very pessimistic. If 57 million people worldwide died in 2016 (and population is increasing but death rate is decreasing), then 5 million additional deaths per year in 2030-2050 seems implausibly large: almost one in ten deaths would be due to climate change.
2. Cool Earth being estimated here to be orders of magnitude more efficient than the kinds of mitigation that economists usually study. (I have no opinion on whether this is accurate.)
(edit: I no longer endorse this comment)
> We don’t expect to be able to recapture most emitted CO2, so a very conservative value to use would be to attribute 50 years of increased deaths to each emission. Hence, this increases the estimate of lives saved by a factor of 50x.
This seems to be the key disagreement between your estimate and GWWC’s. As I understand it, if we reduce emissions for the year X by 1%, different things happen in the two calculations:
In GWWC’s calculation, every year Y for decades, we prevent 1% of the deaths during the year Y that would have been prevented by a delay of all climate change for one year (corresponding to the year X)
In your calculation, every year for decades, we prevent 1% of the deaths that would have been caused by climate change during the year Y
There are two “per year”s at play, “per year of deaths” and “per year of emissions”, and the “per year of deaths” is canceled out by “years of deaths”, leaving only the “per year of emissions”. GWWC treats a one-year-long stop to all emissions (in the present) as equivalent to a delay of warming by one year (in the future). I don’t quite understand why that is, but the units seem right. So if I’m not mistaken, you were understandably confused by the numbers being implicitly “per year per year” rather than just “per year”, and the factor 50 shouldn’t be there.
edit: To be more concrete, if you’re multiplying by 50 years in cell C44 of the updated sheet, then cell C34 should do something like divide the averted emissions by the total emissions over decades rather than by the emissions for just the year 2016.
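To make the difference between the two calculations easier to see, here’s a toy model with made-up numbers (my own illustrative assumptions, not GWWC’s actual methodology): suppose the annual death rate attributable to climate change scales linearly with cumulative emissions, and compare the two accounting methods against a direct simulation of a 1% cut to one year’s emissions.

```python
# Toy model with made-up numbers; assumes the annual climate death rate
# scales linearly with cumulative emissions. Not GWWC's actual methodology.

K = 2.0          # hypothetical deaths per year per unit of cumulative emissions
E = 1.0          # emissions per year (arbitrary units)
PAST_YEARS = 50  # years of emissions already reflected in today's death rate
HORIZON = 50     # future years over which we count averted deaths
CUT = 0.01 * E   # we offset 1% of one year's emissions

# Direct simulation: cumulative emissions are lower by CUT in every future
# year, so the annual death rate is lower by K * CUT in every future year.
direct = sum(K * CUT for _ in range(HORIZON))

# GWWC-style (as I understand it): a one-year delay of all climate change
# would avert the one-year increase in the death rate, K * E, in each future
# year; offsetting 1% of a year's emissions gets 1% of that credit.
gwwc_style = sum(0.01 * (K * E) for _ in range(HORIZON))

# Factor-of-50 version: credit 1% of *all* climate deaths in each future
# year, where the current death rate reflects ~50 years of past emissions.
overcounted = sum(0.01 * K * (PAST_YEARS * E) for _ in range(HORIZON))

print(direct, gwwc_style, overcounted)  # roughly 1.0, 1.0, and 50.0
```

In this toy setup the GWWC-style accounting matches the direct simulation, while the third version is ~50 times larger because the annual death-rate figure already reflects accumulated emissions, so multiplying by another ~50 years double-counts them. That conclusion of course depends on my linear-accumulation assumption.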
A piece such as this should engage with the direct cost/benefit calculations that have been done by economists and EAs (e.g. Giving What We Can), which make it seem hard to argue that climate change is competitive with global health as a cause area.
How much it would take to stay under a mostly arbitrary probability of a mostly arbitrary level of temperature change is a less relevant statistic than how much future temperatures would change in response to reduced emissions.
Since somebody was wondering if it’s still possible to participate without having signed up through alignmentjam.com:
Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.