Thanks Noah! I have four series on my blog. The series, “Exaggerating the risks”, makes the case that many different risk estimates have been exaggerated. I’m focusing on Ord’s estimate of climate risk as a first case study. I’ll try to draw some lessons from that discussion, then use them to discuss some other risks where my opinions may be more controversial among effective altruists. For example, I reviewed the Carlsmith report and assigned a much lower probability to AI risk than Carlsmith did. I’ll try to say why I did that.
Like many philosophers, I was raised in grad school to be a Bayesian. I can’t say I’ve never had doubts (I work on bounded rationality, after all), but I’m fairly sympathetic to the broad Bayesian picture.
I really appreciate the suggestion to take on some core epistemic tools and beliefs. Are there any that you would especially like to hear about?
Never fear, I do have some sharper criticisms to make. I’m in the midst of pressing one (that I don’t believe the Time of Perils Hypothesis) in my series “Existential risk pessimism and the time of perils”. Beyond that … Perhaps you’re right that I should punch things up a bit? I’m trying to take this a bit slowly.
There is some mainstream controversy about economic estimates of damages from climate destruction. You might find more contrast and differences if you look outside EA and economics for information on climate destruction.
You distinguish catastrophic impacts from existential impacts. I’m conflicted about the distinction you draw, and I noted the same conflict in Toby Ord’s discussion: he seems to think a surviving city is sufficient to consider humanity “not extinct”. While I agree with you all, I think these distinctions do not motivate many differences in proactive response. Whether a danger is catastrophic, existential, or extinction-level, it’s still pretty bad, and recommendations for change or effort to avoid lesser dangers are typically in line with the recommendations to avoid greater dangers. Furthermore, a climate catastrophe does increase the risk of human extinction, considering that climate change worsens progressively over decades, even after all anthropogenic GHG production has stopped. I would like to learn more about your thoughts on those differences, particularly how they influence your ethical deliberations about policy changes in the present.
I’m interested in your critical thoughts on:
typical application or interpretation of Bayesianism in EA.
suitability of distinct EA goals: toward charitable efforts, AGI safety, or longtermism.
earning to give, and with respect to what sorts of jobs.
longevity-control, personal choice over how long you live, once life-extension is practical.
expected value calculations wrt fanatical conclusions, huge gains and tiny odds.
the moral status of potential future people in the present.
the value of risk aversion versus commitment to minuscule chances of success.
any differing views on technological stagnation or value lock-in from longtermism.
your thoughts on cluster thinking as introduced by Holden Karnofsky.
the desirability and feasibility of claims to influence or control future people’s behavior.
the positive nature of humanity and people (e.g., are we innately “good”?).
priority of avoiding harm to a minority when that harm benefits the majority.
the moral status of sentient beings and discounting of moral status by species.
moral uncertainty as a prescriptive ethical approach.
I’ve done my best on this forum to distinguish my point of view from EAs’ wherever it was obvious that I disagreed. I’ve also followed the work of others here who hold substantially different points of view from the EA majority (for example, about longtermism). If your disagreements are more subtle than mine, or if you would disagree with me on most things, I’m not the one to suggest topics that you and I agree on. But the general topics can still be addressed even though we disagree. After all, I’m nobody important, but the topics are important.
If you do not take an outsider’s point of view most of the time, then there’s no need to punch things up a bit; there’s more a need to articulate the nuanced differences you have, as well as to advocate for the EA point of view wherever you support it. I would still like to read your thoughts from a perspective informed by views outside of EA, as far outside as possible, whether from philosophers who would strongly disagree with EA or from other experts or fields that take a very different point of view from EA’s.
I have advocated for an alternative approach to credences: treat them as binary beliefs, or as subject to constraints (nuance) as one gains knowledge that contradicts some of their elements. I have also advocated for an alternative approach to predictions, one of preconditions leading to consequences, where the predictive work is identifying preconditions with typical consequences. Identifying preconditions in that model involves matching actual contexts to prototypical contexts, with the type of match determining which futures are plausible, expected, or optional (action-decided). My sources for that model were not typical for the EA community, but I did offer it here.
If you can do something similar with knowledge of your own, that would interest me. Any tools that are very different but have utility are interesting to me. And, as I said before, how you might contextualize current epistemic tools also interests me.
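A toy sketch of that precondition-matching idea (in Python; the prototype context, the consequence, and the overlap rule are placeholders I’m inventing for illustration, not details from my sources):

```python
# Toy sketch of precondition matching. The prototype, consequence, and overlap
# rule are illustrative placeholders, not details from the original model's sources.
PROTOTYPE = {"prolonged drought", "declining aquifers", "food-export dependence"}
CONSEQUENCE = "regional food shortage"

def classify_future(actual_context: set, action_available: bool) -> str:
    """Match an actual context against the prototypical context and label the future."""
    overlap = len(PROTOTYPE & actual_context) / len(PROTOTYPE)
    if overlap == 0:
        return "no match: this prototype predicts nothing here"
    if action_available:
        return f"optional (action-decided): '{CONSEQUENCE}' depends on what we do"
    if overlap == 1.0:
        return f"expected: '{CONSEQUENCE}'"
    return f"plausible: '{CONSEQUENCE}'"

print(classify_future({"prolonged drought", "declining aquifers"}, action_available=False))
# plausible: 'regional food shortage'
```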
The distinction between catastrophic and existential risks is a standard distinction in the literature and generally considered to be very important. On the notion of a catastrophic risk see Bostrom and Cirkovic (2008). On the notion of an existential risk … that’s still up for grabs, but the Bostrom (2013) definition I cite is a decent guide.
The reason why many people have thought it is important to distinguish catastrophic from existential risks is that “pretty bad” can cover differences of many orders of magnitude. There are billions of people alive right now, and catastrophic risks could make many of their lives bad. But the number of potential future people is so large I’d need a Latin textbook before I knew how to say it, and existential risks could make those people’s lives pretty bad (or non-existent). The thought is, at least on the standard line, that existential risks would be many, many, many times worse than catastrophic risks, so that it’s really quite important to make sure that something really poses an existential risk as opposed to a catastrophic risk.
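To put a rough number on “many orders of magnitude” (the future-population figure below is a purely hypothetical placeholder, not an estimate I am endorsing):

```python
# Back-of-the-envelope comparison; the future-population figure is a hypothetical
# placeholder chosen only to illustrate "many orders of magnitude", not an estimate.
import math

people_alive_now = 8e9             # roughly the present global population
hypothetical_future_people = 1e16  # placeholder for the number of potential future people

ratio = hypothetical_future_people / people_alive_now
print(f"ratio: {ratio:.2e}")                            # 1.25e+06
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # 6.1
```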
I’ll take a look at some of your suggestions—thanks! Maybe we can talk about AGI safety in a few weeks. My current plan is to talk through the Carlsmith report a bit, but I might start with my paper “Against the singularity hypothesis”.
Do you know the philosophical literature on credences as binary beliefs? This is definitely a position that you can hold (that credences are binary beliefs), but it’s a bit controversial. I guess Jackson (2020) is a pretty good overview.
Let me get your take on a few more controversial topics:
Attitudes towards expertise in the EA community. (EAs may not place enough trust in outside experts).
Standards of evidence and proof in the EA community. (I think they are too low).
Status of women in the EA community. (I think it could be better).
Stating credences as a practice (and why I think it’s often harmful).
Epistemic status of unpublished reports and blogposts. (I think these are given too much weight. I do see the irony in this statement).
Distorting influence of money in academia. (Does using money to build a research field conduce towards the truth?).
Sound interesting? A bit too much for now? I’m not sure how much I want to dial up the controversy just yet, but perhaps I can punch it up a little.
OK, well, I browsed the two articles; I don’t want to get into a semantics argument. There’s definitely a tension between some uses of “existential risk” and others, but some agreement about what counts as a global catastrophic risk.
AGI safety is about whether some organizations can develop robots or artificial life that have human-level or greater intelligence but do the bidding of humans reasonably well. AGI designers intend to create slaves for humanity. I don’t foresee that ending well for humanity, and it doesn’t start well for AGI’s. Robotic and software automation is a related technology development pathway with AGI as one possible endpoint, but uses of automation are underexplored, and developing AGI might not be necessary to serve some stakeholders. Of course, that doesn’t mean those folks should be served.
I’m interested in your paper on the singularity hypothesis.
I browsed the Jackson paper. I offered a model of EA’s actually scoring the intensity of feelings separately from whatever evidence supports their credence level here. The analysis of good-faith EA’s doing feeling intensity measurements identifies a potential defense of credence-eliminativism, provided one can believe that feelings of certainty are orthogonal to evidence accumulation in support of a proposition. I do believe that is the case.
What Jackson identifies as “simple, all-out belief” is in fact a proposition that passes the filter of a mental (rational, logical, ontology-matching) model but has some feelings associated with it, maybe some of which summarize results of a self-applied epistemic evidence-measurement tool, or maybe not. Most of the time, people coast on easy matches to existing cognitive templates, performing various complicated operations without much new learning or internal conflict involved. Sometimes there are more complicated feelings implying self-contradicting epistemic states, but those can involve epistemic or motivated reasoning, and be about the same or a different focus of attention than the one identified as consciously considered. A solution is to use a cognitive aid (sketched after this list), one that, for example:
reminds you of collected information.
corrects for differences of emotional impact created by different kinds of evidence.
discourages cognitive bias against valid evidence or valid premises.
maintains your commitment to the relevance of a specific focus of attention.
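Purely as an illustration of what I mean (the structure, field names, and scales below are my own invention, not an existing tool), such an aid could be as simple as a structured record that re-surfaces collected evidence and flags items whose emotional impact outruns their strength or that drift off the stated focus:

```python
# Hypothetical sketch of a personal cognitive aid; fields, scales, and rules are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceItem:
    claim: str
    source: str
    on_focus: bool   # does this bear on the stated focus of attention?
    vividness: int   # 1-5: how emotionally striking the evidence feels
    strength: int    # 1-5: how strong the evidence actually is

@dataclass
class FocusRecord:
    focus: str
    evidence: List[EvidenceItem] = field(default_factory=list)

    def remind(self) -> List[str]:
        """Re-surface every collected item so none quietly drops out of consideration."""
        return [e.claim for e in self.evidence]

    def vividness_bias(self) -> List[str]:
        """Flag items whose emotional impact exceeds their evidential strength."""
        return [e.claim for e in self.evidence if e.vividness > e.strength]

    def off_focus(self) -> List[str]:
        """Flag items that have drifted away from the stated focus of attention."""
        return [e.claim for e in self.evidence if not e.on_focus]
```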
Since you asked for my takes on your list of topics, here they are:
Attitudes towards expertise in the EA community. ME: EA research can be original in a field; for example, suppose Gibbins develops weather-control technology, then obviously EAs have reason to cite her work. Alternatively, Halstead’s research is a report on findings by other researchers. Given the broad direct research on climate change conducted outside the EA community, turning to sources other than Halstead is easier to justify. In EA, outside research is digested for consideration inside the ITN framework, so revisiting an existing ITN analysis could involve redoing the same outside research gathering all over again.
Standards of evidence and proof in the EA community. ME: EA could use explicit cognitive-aid designs to support evidence accumulation and processing, such as software that tracks and accumulates evidence, assisted by an inference engine for argument processing, something that forgoes weighting of evidence in favor of explicit statements of conclusions and flagging of evidence conflicts, ambiguities, and information gaps (a toy sketch follows).
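To show the kind of flagging behavior I have in mind (the claims and structure below are placeholders, not an existing tool):

```python
# Toy sketch: track conclusions with their supporting and contradicting evidence,
# and flag conflicts and information gaps instead of weighting evidence numerically.
argument_map = {
    "claim A": {"supports": ["study 1", "report 2"], "contradicts": ["study 3"]},
    "claim B": {"supports": ["report 4"], "contradicts": []},
    "claim C": {"supports": [], "contradicts": []},
}

for claim, ev in argument_map.items():
    if ev["supports"] and ev["contradicts"]:
        print(f"CONFLICT: '{claim}' has evidence on both sides; resolve it explicitly.")
    elif not ev["supports"] and not ev["contradicts"]:
        print(f"GAP: no recorded evidence for '{claim}'.")
    else:
        print(f"OK: '{claim}' has one-sided evidence so far; state the conclusion explicitly.")
```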
Status of women in the EA community. ME: Women in the community? I have no idea. My only interface with EA’s is through this forum, and I’m not even sure most of the time what the gender is of the people reading or responding to what I write. My ignorance about social relations among EA’s dwarfs my knowledge of it. I’ve never been to an EA conference, for example. Is there a lot of drinking or drugs? Do people retire into hotel rooms and have sex with new acquaintances? Is it a party or a serious affair, typically? Is there lots of private gossip and petty politics or are people just about their cause and their values? Apparently there are EA dormitories or something? What goes on there? I wouldn’t know, and certainly wouldn’t know whether there’s a lot of misogyny among EA’s.
Stating credences as a practice (and why I think it’s often harmful). ME: yes, by “credence” you mean the technical definition that Jackson identifies, correct? A proposition with a probability assigned to it? The target of my red-team criticism of EAs was the use of credences and “updating”, because both showed a lack of epistemic procedures to help identify preconditions and to maintain goal achievement through compensating actions that create the preconditions for it. Credences and Bayesianism could serve AI and forecasters in some domains. Most EA use of credences is a false imitation of a different kind of cognitive processor than a human being.
We are different from Bayesian processors in several ways, including:
our selective and fallible memory. Without cognitive aids, our domain-specific ontologies decay or mutate, particularly if they are detailed or frequently updated, as an expert’s might be. Typically, after a delay, we fail to keep considering evidence we once processed; we just stop thinking of it.
we learn by imitation and report. We adopt beliefs and world models wholesale based on imitation of others’ behaviors and thinking. We learn by believing what we’re told and imitating what others do.
we uncritically accept others’ beliefs through motivated reasoning toward agreement or disagreement with particular people. Human emotions designed for socialization and procreation (including feelings such as certainty) lead epistemic processes rather than follow them.
most people don’t do mental arithmetic well. 0.67 × −50 + 0.33 × 650? It’s doable with practice, but for every deliberation involving more than one alternative outcome?
NOTE: Scaling and scoring value in well-defined categories for applying decision theory is a challenge, even with cognitive aids and careful thought. Yes, we can succeed with decision theory in specific domains suited to it, with practice, study, and use of cognitive aids so we don’t make many errors, that is, in specific contexts (see the sketch after this note).
Following on from the previous point, human cognitive tendencies are not improved on by a normative model (such as Bayesian reasoning) that ignores them. Yes, there’s science, rigorous explanations, and so on. We’ve defeated our genetic limitations on our cognitive operations, to some extent. Or maybe it’s closer to the truth that we’ve succeeded in some contexts (hard sciences) defined by our limitations and continue to fail in others (psychology, social sciences, politics). This normative turn toward Bayesianism appears to me to be a rationalist fantasy given too much, ahem, credence. EA researchers will do better turning back to traditional methods of critical thinking, argumentation, and scientific research. A virtue of those methods is that they were useful for people who held good-old-fashioned beliefs, as we all do.
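To make the arithmetic concrete, here is a minimal sketch of that gamble; the probabilities and payoffs are just the illustrative numbers from my example above:

```python
# Expected value of the illustrative gamble: a 0.67 chance of -50 and a 0.33 chance of +650.
outcomes = [(0.67, -50), (0.33, 650)]  # (probability, payoff)
expected_value = sum(p * v for p, v in outcomes)
print(round(expected_value, 2))  # 0.67 * -50 + 0.33 * 650 = 181.0
```

Trivial for software, tedious for an unaided person in the middle of a deliberation.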
Epistemic status of unpublished reports and blogposts. ME: Epistemic status statements seem to offer and validate what are usually considered fallacious reasons to reject an argument. My own epistemic status analyses do include whether my argument is self-serving.
Distorting influence of money in academia. ME: if the money is offered with an agenda, then yeah, it seeks research to support its agenda. Sometimes the agenda is justified by evidence and norms, while other times it’s not. AGI safety helps organizations looking to accumulate wealth or concentrate power with some minority group through deployment of AGI. That reflects the worldview of folks supporting AGI safety, rather than some conspiracy involving them.
First, I’d like to thank you both for this instructive discussion, and Thorstad for the post and the blog. Second, I’d like to join the fray and ask for more info on what might be the next chapters in the climate series. I don’t think it is a problem if you only focus on “Ord vs. Halstead”, but then perhaps you should make it more explicit, or people may take it as the final word on the matter.
Also, I commend your analysis of Ord, because I’ve seen people take his estimate as authoritative (e.g., here), instead of a guesstimate updated on a prior for extinction. However, to be fair to Ord, he was not running a complex scenario analysis, but basically updating from the prior for human extinction, conditioned on no major changes. That’s very different from Halstead’s report, so it might be proper to have a caveat emphasizing the differences in their scopes and methodologies (I mean, we can already see that in the text, but I’d not count on a reader’s inferential capacity for this). Also, if you want to dive more into this (and I’d like to read it), there’s already a thriving literature on climate change worst-case scenarios (particularly outside of EA-space) that perhaps you’d like to check—especially on climate change as a GCR that increases the odds of other man-made risks. But it’s already pretty good the way it is now.
Thanks Ramiro! Very helpful. I was intending to wrap up the climate portion of “Exaggerating the risks” with some more discussion of Halstead, and some general lessons. I started my discussion with climate risks because I think that climate risks are among the most empirically tractable risks, and one of the places where a frequently-cited estimate seems much too high.
My intention was to move after that towards some risks that the EA community emphasizes more, such as engineered pandemics and artificial intelligence. These topics take a bit more care, since by construction it is harder to get evidence about such matters, and I have to admit a bit of reluctance to speculate too broadly about them. My tentative plan is to say a few words about the Carlsmith report next. I guess you might know that I was one of the reviewers for the Carlsmith report. I didn’t think the risk was very high. The internet wasn’t particularly happy about this. (For a while, LessWrong’s top comment on the matter was: “I guffawed when I saw Thorstad’s Overall ~P Doom 0.00002%, really? And some of those other probabilities weren’t much better. Calibrate people”). I’d like to explain why I still don’t think the risk is very high.
Do you have any favorite readings on worst-case climate risk? I was happy to see that the Kemp et al. piece made it into PNAS. I hope that this will give the literature on worst-case climate risk some much-needed visibility. (I am quite concerned about worst cases! I just think that outright extinction is a very unlikely scenario, even among worst cases).
Hmm, let me know if you have any thoughts on my responses to your request for my takes, David.
Ramiro, I’m curious about the resources you want to share about climate change; it is the only GCR that EAs regularly deny is a GCR, for some reason. I don’t think David’s question is entirely fair, but paper topics that could illustrate some expectations include:
multi-breadbasket failure due to extreme weather and drought
tipping elements poised to fall this century (including the Amazon),
the signed climate emergency paper,
recent papers about methane hydrate melting in the past,
(informal) analyses of the recent summit rain on Greenland
recent analyses of pressures on rate of melting of the Antarctic
notes from climate scientists that IPCC models leave out positive feedbacks from physical forcings on tipping elements like:
warming ocean currents running against our ice sheets
moraines, drainage holes, ice darkening, and bottom lubrication of Greenland ice sheets
change of snow to rain on Greenland as Greenland receives warmer weather and Greenland’s altitude drops
changes in wind patterns carrying moisture to different places globally
slowing of the AMOC as freshening occurs in the North Atlantic
burning and cutting of the Amazon rainforest
increased or continual fires in permafrost regions
or feedbacks from declining carbon sinks, like:
respiration increase past photosynthesis thresholds in plants
Brazil rainforest change to a carbon source and savannah
decline of plankton due to acidification, ocean heat waves, and declines in certain ocean species (for example, whales)
forest fires in the permafrost
desertification during long-term drought
the feasibility and timeliness of BECCS or DACCS at scale
the general trend of decline in predicted GAST increases required to tip large Earth system tipping elements.
an expected increase in human pressures on natural systems as weather and climate worsens (for example, increased pressure on fisheries as they decline)
These are the topics that Halstead didn’t really draw together or foresee having implications this century.
Below is a prediction that I posted to gjopen a few months ago, at the start of their series of questions on climate change. It was not written for an EA audience, but it does show my thinking on the matter. Maybe I’m just mistaken that global society will totally flub our response to the GCR that is climate destruction. Maybe that is just what is happening so far but we will radically change for the better. Meanwhile, I reject the EA claim that climate change is not a neglected cause area, but I speculate that EA’s think climate change is intractable. It is not intractable. There are multiple pathways to solutions, but only the muddling ones appeal to me. The extreme technology pathway (nanotech) is actually more frightening than climate change. Nanotechnology is a GCR of its own.
...
Our civilization is on a pathway to make Earth uninhabitable for any large group of humans by 2100, all other things equal. I suppose there might be a few humans in some underwater city, underground camp, or space station.
We have had muddling solutions available for 50 years. A muddling solution is a sensible but reactive solution to a predicted problem, that is implemented quickly, that is not terribly innovative, and is followed for as long as necessary, meaning decades or even centuries.
Here’s a list of muddling solutions that could have prevented our problems if we had resorted to them beginning in the 1970s:
* providing family planning services globally
* encouraging access to education and financial opportunities for women worldwide
* voluntarily reducing the birth rate across the world to 1.5 (1-2 children)
* relying on vegetarian (soy or amino-supplemented staple grains) protein
* subsidizing conservation and micro-grid technologies, not oil and gas industries
* removing all personhood rights from corporations
* raising fuel economy of cars over 50mpg and preferring trains, taxis, or human-powered vehicles
* emphasizing water conservation in agriculture
* forcing costs of industrial and construction waste onto companies, suppliers, or consumers
* maintaining regulations on the finance and credit industries (preventing their obvious excesses)
* protecting most land areas from development and only allowing narrow human corridors through them
* disallowing advertising of vice goods (alcohol, cigarettes, pornography, restaurant foods, candy, soda)
* avoiding all medical and pharmaceutical advertising
* disallowing commercial fishing and farm-animal operations
* providing sewage handling and clean water globally
* preventing run-off from industrial agriculture
* requiring pesticides to meet certain criteria
* encouraging wider use of alternative agriculture methods
* avoiding low-value (most) use of plastic
* recycling all container materials in use (wood, metal, glass, plastic, etc)
* capturing all minerals and metals contained in agricultural, industrial, consumer and other waste streams
* and the list goes on…
Some people believe that contraception violates their religion. Some believe that humans should be able to live everywhere regardless of ecological impacts. Vices are the spice of life for most people. There were incentives to avoid all the past solutions on my list, I admit. However, those solutions, implemented and accepted globally, would have prevented catastrophe. This list is true to the thought experiment, “What could we have done to avoid our climate change problem over the last 50 years that we knew to do but didn’t do?” In my view, those solutions are obviously necessary and not overly burdensome. A small percentage of people would have made a lot less money. A lot of illness and suffering in our society would be absent. But just like all solutions that require action, these solutions could only succeed if they were implemented and accepted. Our civilization did not take those actions over the last 50 years.
Now we need other solutions (involving welcoming migration and choosing extreme curbs on birth rate and consumption in developed countries) as well as those on my list, but much faster (for example, to save our ocean life from acidification, overfishing, and pollution effects over the next few decades). People in the developed world won’t do it. Instead, the developed world will follow conventional wisdom.
Conventional wisdom is to:
* wall ourselves off (for example, ignore others well-being, hoard resources, and wait for technology breakthroughs).
* innovate our way out (for example, through intensive development of breakthrough technologies)
I don’t think walling off will work, because the natural systems that are sometimes called tipping points are now changing. The effects of those tipping points will cut off supply chains over the next few decades, leading to multi-breadbasket failure, destroyed critical infrastructure, and destroyed political systems. Every country is vulnerable to those consequences.
Theoretically, we can innovate our way out. However, the innovations need to address more than energy production. They have to let us:
* control local weather.
* remove GHG’s from the atmosphere.
* replace modern agriculture at scale.
* quickly reverse ocean acidification.
* reverse ecosystem destruction or replace ecosystems (for example, replace extinct pollinators).
* remove pollution quickly (within months or years) from land and ocean pollution sinks.
* replace modern manufacturing at scale.
No futuristic technology can meet the required timeline except for large-scale manufacturing with nanotechnology (assembling materials and self-assembling devices, from micro- to macro-scale, at extreme speed). The timeline becomes shorter with each decade that passes. We won’t recognize the extreme impact of the current processes for another 10-20 years. I think the latest we could introduce nanotechnology to do all those things and still have a livable Earth for the entire global population is 2040, before ecosystem damage becomes so great that it destroys civilization on its own. But it won’t happen in time.
Instead, after 2060, we’ll be left with:
* very little good topsoil or clean water anywhere
* poor air quality in most places (dust storms, toxic algae gassing off, air pollution from local manufacturing)
* no guarantee of mild weather anywhere in any season (so any farming has to be in artificially protected environments),
* most land species extinct (including pollinators),
* mostly dead oceans (no pteropods or zooplankton and declining phytoplankton).
Today:
* the Arctic ice is retreating fast
* the Amazon is becoming a carbon source
* the permafrost is melting faster (with local feedback from fires and the warming Arctic ocean)
* Greenland is having unexpectedly large melting events
* the jet stream is becoming wavy instead of hanging in a tight circle
* surprising levels of GHGs other than CO2 are already in the atmosphere
Climate modelers in general are playing catch-up to all these changes; IPCC scenarios don’t really account for tipping-point processes happening as quickly as they are. Countries have no plan to stop producing CO2 or releasing other GHGs, so the IPCC’s business-as-usual scenario will run as long as it can. None of the anticipated CCS solutions are feasible and timely at scale (including planting trees).
By the end of the century:
* The Greenland ice sheet and some or all of the West Antarctic will have melted.
* The methane hydrates of the [ESIS] in the Arctic will have dumped their gas load
* the permafrost across the high latitudes will be either melted or refreezing in a mini-ice age
* the Amazon will have long-since disappeared in drought and lightning fires
* Several large heat waves will have hit the tropical latitudes, killing every mammal outdoors (not wearing a cooling jacket) after several hours.
* there won’t be significant land or ocean sinks for CO2.
* tropical temperatures will be unlivable without cooling technologies.
* the 6th great extinction will be over.
* at least one human famine will have hit all countries around the world simultaneously.
I personally believe that climate change is now self-amplifying. We can slow the rate by removing anthropogenic forcings of global atmospheric heating, but if we are late in doing that, then we have already lost control of the heating rate to intrinsic feedbacks. I don’t know how far along that self-amplification is now. I do know that between release of frozen GHG’s, destruction of CO2 sinks, and loss of stratocumulus cloud cover, the Earth can take us past 6C of warming. [GAST increase]
Today’s problem lies with the situation and human psychology. Obvious solutions are unpalatable.
First, you can’t point at plenty, predict it will all be gone in a few decades, and then ask people to deprive themselves of that plenty. We don’t choose voluntary deprivation for the greater good based on theories or science.
Second, the problem of nonlinear changes in climate conditions and Earth inhabitability is that we cannot conceive of them as real. But they are real. People would rather die than give up hamburgers? Maybe not, but if we wait until that seems like a real decision to make, it will be too late. When the signal from climate change is so strong that everyone is terrified, and willing to do something like give up hamburgers, it will be too late to give up hamburgers. Instead, the consequences of raising all those cows will be knocking.
Finally, the consequences of climate change are not our instant extinction. Instead, humanity will go through a drawn-out, painful withering of life quality against increasing harms from climate events, social upheavals, and decreasing resources. That situation will erode heroic efforts and noble causes, extinguishing hope as frustrating obstacles mount for any organized effort to stop climate change.
I think human society in the developed world just hasn’t felt the climate change signal yet, and isn’t really ready to face the problem until it does. And then it will be too late to do much of anything about climate change. I used to think “too late” meant 2060, about when we would realize that CCS solutions were always hypothetical. Now I think it means 2030, the earliest that we might lock in the death of ocean life from multiple anthropogenic forcings, suffer a giant methane bubble from the Arctic, or see massive melt on Greenland. That’s why I think my prediction is correct: we really only have less than a decade to push our climate (and biosphere) onto another pathway. All those solutions I listed are how to do it. Does anyone think they look worthwhile?
...
Thank you for reading, if you got this far. This is just a scenario and analysis with a few proposed plausible alternatives. If your counterargument is that we have more electric cars or that solar is cheaper than ever, then you need to explore the problem more carefully.
Thanks Noah, will do! Sorry for the delay. I can’t manage to take a full week off for vacation, so I’m taking five scattered days off this month and today is one of my days off. I’ll try to reply as soon as I can.
Pft, that’s OK, David. Reading over how much I wrote, I’ll be surprised if you get through it all. Thanks for showing some interest, and don’t forget to enjoy some of that vacation time! Bummer it’s split up like that.
Thanks a lot for your suggestions. I’m very happy that you didn’t get upset with me for making them. I’m … trying to tone things down a bit at the start, and I think these are some of the topics that might cause a bit more controversy. I’m also continually impressed by the ability of EAs to have hard conversations. Maybe it’s time to start on some of these topics.
I’ll read your red-teaming contest submission shortly.
I think your very helpful and honest response about the status of women in the EA community is perhaps a good reason to talk about it: many people just aren’t paying much attention to these issues. I guess see this for the latest public problem-statement (https://forum.effectivealtruism.org/posts/t5vFLabB2mQz2tgDr/i-m-a-22-year-old-woman-involved-in-effective-altruism-i-m), although there’s a fairly long history of issues going back a few years, many of which received much poorer responses from the community.
I think maybe your point about epistemic status statements is an important one that I should fold into the discussion of credence stating. I have a suspicion that just stating epistemic statuses may not be enough to secure good epistemic standing for a literature largely founded on blog posts and forum posts (and that it really would be better to have a higher proportion of published work). I’ll see what I can do to write something up about that, again conscious of the irony that I am typing these words on a forum post about my blog.
Thanks for being patient with me Noah! I enjoyed this discussion. (I’m going to be checking the EA forum less in the coming weeks, since I’m not always a regular here, but I’ll try to check back when I can).
I am working on a write-up that addresses climate change impacts differently than Halstead, but progress is slow because my attention and time are divided. I will share the work once it’s complete.
On most topics relevant to this forum’s readers, that is. For example, I haven’t found a good conversation on longevity control, and I’m not sure how appropriate it is to explore here, but I will note, briefly, that once people can choose to extend their lives, there will be a few ways that they can choose to end their lives, only one of which is growing old. Life extension technology poses indirect ethical and social challenges, and widespread use of it might have surprising consequences.
Thanks Noah! I have four series on my blog. The series, “Exaggerating the risks”, makes the case that many different risk estimates have been exaggerated. I’m focusing on Ord’s estimate of climate risk as a first case study. I’ll try to draw some lessons from that discussion, then use them to discuss some other risks where my opinions may be more controversial among effective altruists. For example, I reviewed the Carlsmith report and assigned a much lower probability to AI risk than Carlsmith did. I’ll try to say why I did that.
Like many philosophers, I was raised in grad school to be a Bayesian. I can’t say I’ve never had doubts (I work on bounded rationality, after all), but I’m fairly sympathetic to the broad Bayesian picture.
I really appreciate the suggestion to take on some core epistemic tools and beliefs. Are there any that you would especially like to hear about?
Never fear, I do have some sharper criticisms to make. I’m in the midst of pressing one (that I don’t believe the TIme of Perils Hypothesis) in my series “Existential risk pessimism and the time of perils”. Beyond that … Perhaps you’re right that I should punch things up a bit? I’m trying to take this a bit slowly.
There is some controversy about economic estimates of damages from climate destruction in the mainstream. You might find more contrast and differences if you take a look outside EA and economics for information on climate destruction.
You distinguish catastrophic impacts from existential impacts. I’m conflicted about the distinction you draw, but I noted this conflict about Toby Ord’s discussion as well, he seems to think a surviving city is sufficient to consider humanity “not extinct”. While I agree with you all, I think these distinctions do not motivate many differences in pro-active response, that is, whether a danger is catastrophic, existential, or extinction-level, it’s still pretty bad, and recommendations for change or effort to avoid lesser dangers are typically in line with the recommendations to avoid greater dangers. Furthermore, a climate catastrophe does increase the risk of human extinction, considering that climate change worsens progressively over decades, even after all anthropogenic GHG production has stopped. I would to learn more about your thoughts on those differences, particularly how they influence your ethical deliberations about policy changes in the present.
I’m interested in your critical thoughts on:
typical application or interpretation of Bayesianism in EA.
suitability of distinct EA goals: toward charitable efforts, AGI safety, or longtermism.
earning to give and with respect to what sorts of jobs.
longevity-control, personal choice over how long you live, once life-extension is practical.
expected value calculations wrt fanatical conclusions, huge gains and tiny odds.
the moral status of potential future people in the present.
the value of risk aversion versus commitment to miniscule chances of success
any differing views on technological stagnation or value lock-in from longtermism
your thoughts on cluster thinking as introduced by Holden Karnofsky
the desirability and feasibility of claims to influence or control future people’s behavior
the positive nature of humanity and people (e.g, are we innately “good”?)
priority of avoiding harm to a percentage minority when that harm benefits the majority
the moral status of sentient beings and discounting of moral status by species
moral uncertainty as a prescriptive ethical approach
I’ve done my best on this forum to distinguish my point of view from EAs wherever it was obvious that I disagreed. I’ve also followed the works of others here who hold substantially different points of view than the EA majority (for example, about longtermism). If your disagreements are more subtle than mine, or you would disagree with me on most things, I’m not one to suggest topics that you and I agree on. But the general topics can still be addressed even though we disagree. After all, I’m nobody important but the topics are important.
If you do not take an outsider’s point of view most of the time, then there’s no need to punch things up a bit, but more a need to articulate the nuanced differences you have as well as advocate for the EA point of view wherever you support it. I would still like to read your thoughts from a perspective informed by views outside of EA, as far outside as possible, whether from philosophers that would strongly disagree with EA or from other experts or fields that take a very different point of view than EA’s.
I have advocated for an alternative approach to credences, to treat them as binary beliefs or as subject to constraints(nuance) as one gains knowledge that contradicts some of their elements. And an alternative approach to predictions, one of preconditions leading to consequences, and the predictive work involved being one of identifying preconditions with typical consequences. Identification of preconditions in that model involves matching actual contexts to prototypical contexts, with type of match allowing determination of plausible, expected, or optional (action-decided) futures predictable from the match’s result. My sources for that model were not typical for the EA community, but I did offer it here.
If you can do similar with knowledge of your own, that would interest me. Any tools that are very different but have utility are interesting to me. Also how you might contextualize current epistemic tools, as I said before, interests me.
Thanks! :)
Thanks Noah!
The distinction between catastrophic and existential risks is a standard distinction in the literature and generally considered to be very important. On the notion of a catastrophic risk see Bostrom and Cirkovic (2008). On the notion of an existential risk … that’s still up for grabs, but the Bostrom (2013) definition I cite is a decent guide.
The reason why many people have thought it is important to distinguish catastrophic from existential risks is that “pretty bad” can cover differences of many orders of magnitude. There are billions of people alive right now, and catastrophic risks could make many of their lives bad. But the number of potential future people is so large I’d need a Latin textbook before I knew how to say it, and existential risks could make those people’s lives pretty bad (or non-existent). The thought is, at least on the standard line, that existential risks would be many, many, many times worse than catastrophic risks, so that it’s really quite important to make sure that something really poses an existential risk as opposed to a catastrophic risk.
I’ll take a look at some of your suggestions—thanks! Maybe we can talk about AGI safety in a few weeks. My current plan is to talk through the Carlsmith report a bit, but I might start with my paper “Against the singularity hypothesis”.
Do you know the philosophical literature on credences as binary beliefs? This is definitely a position that you can hold (that credences are binary beliefs), but it’s a bit controversial. I guess Jackson (2020) is a pretty good overview.
Let me get your take on a few more controversial topics:
Attitudes towards expertise in the EA community. (EAs may not place enough trust in outside experts).
Standards of evidence and proof in the EA community. (I think they are too low).
Status of women in the EA community. (I think it could be better).
Stating credences as a practice (and why I think it’s often harmful).
Epistemic status of unpublished reports and blogposts. (I think these are given too much weight. I do see the irony in this statement).
Distorting influence of money in academia. (Does using money to build a research field conduce towards the truth?).
Sound interesting? A bit too much for now? I’m not sure how much I want to dial up the controversy just yet, but perhaps I can punch it up a little.
OK, well, I browsed the two articles, I don’t want to get into a semantics argument. There’s definitely a tension between some uses of existential risk and others, but some agreement about what are global catastrophic risks.
AGI safety is about whether some organizations can develop robots or artificial life that have human-level or greater intelligence but do the bidding of humans reasonably well. AGI designers intend to create slaves for humanity. I don’t foresee that ending well for humanity and it doesn’t start well for AGI’s. Robotic and software automation is a related technology development pathway with AGI as one possible endpoint, but uses of automation are underexplored, and AGI might not be necessary to develop in order to serve some stakeholders. Or course, that doesn’t mean those folks should be served.
I’m interested in your paper on the singularity hypothesis.
I browsed the Jackson paper. I offered a model of EA’s actually scoring the intensity of feelings separately from whatever evidence supports their credence level here. The analysis of good-faith EA’s doing feeling intensity measurements identifies a potential defense of credence-eliminativism, provided one can believe that feelings of certainty are orthogonal to evidence accumulation in support of a proposition. I do believe that is the case.
What Jackson identifies as “simple, all-out belief” is in fact a proposition that passes the filter of a mental (rational, logical, ontology-matching) model but has some feelings associated with it, maybe some of which summarize results of a self-applied epistemic evidence-measurement tool, or maybe not. Most of the time, people coast on easy matches to existing cognitive templates, performing various complicated operations without much new learning or internal conflict involved. Sometimes there’s more complicated feelings implying self-contradicting epistemic states, but those can involve epistemic or motivated reasoning, and be about the same or a different focus of attention than the one identified as consciously considered. A solution is to use a cognitive aid, one that, for example:
reminds you of collected information.
corrects for differences of emotional impact created by different kinds of evidence.
discourages cognitive bias against valid evidence or valid premises.
maintains your commitment to the relevance of a specific focus of attention.
Since you asked for my takes on your list of topics, here they are:
Attitudes towards expertise in the EA community. ME: EA research can be original in a field, for example, suppose Gibbins develops weather control technology, then obviously, EA’s have reason to cite her work. Alternatively, Halstead’s research is a report on findings by other researchers. Given the broad direct research of climate change conducted outside the EA community, turning to other sources than Halstead is easier to justify. In EA, outside research is digested for consideration inside the ITN framework. In that case, revisiting an existing ITN analysis could involve redoing the same outside research gathering over again.
Standards of evidence and proof in the EA community. ME: EA could use explicit cognitive aid designs to support evidence accumulation and processing, such software technology that tracks and accumulate evidence, assisted by an inference engine for argument processing, something that ignores weighting of evidence in favor of explicit statements of conclusions or flagging of evidence conflicts, ambiguities, and information gaps.
Status of women in the EA community. ME: Women in the community? I have no idea. My only interface with EA’s is through this forum, and I’m not even sure most of the time what the gender is of the people reading or responding to what I write. My ignorance about social relations among EA’s dwarfs my knowledge of it. I’ve never been to an EA conference, for example. Is there a lot of drinking or drugs? Do people retire into hotel rooms and have sex with new acquaintances? Is it a party or a serious affair, typically? Is there lots of private gossip and petty politics or are people just about their cause and their values? Apparently there are EA dormitories or something? What goes on there? I wouldn’t know, and certainly wouldn’t know whether there’s a lot of misogyny among EA’s.
Stating credences as a practice (and why I think it’s often harmful). ME: yes, by credence you mean the technical definition that Jackson identifies, correct? A proposition with a probability assigned to it? The target for my red team criticism of EA’s was the use of credences and “updating”, because both showed a lack of epistemic procedures to help identify preconditions and maintain achievement of goals through compensating actions that create preconditions for goal achievement. Credences and Bayesianism could serve AI and forecasters in some domains. Most of EA use of credences is false imitation of a different kind of cognitive processor than a human being. We are different from bayesian processors in several ways including:
our selective and fallible memory. Without cognitive aids, our domain-specific ontologies decay or mutate, particularly if they are detailed or frequently updated, as an expert’s might be. Typically, we fail to consider evidence after a time-delay in cognitive processing of that evidence. We just stop thinking of it.
we learn by imitation and report. We adopt beliefs and world models wholesale based on imitation of others behaviors and thinking. We learn through believing what we’re told and imitating what other’s do.
we uncritically accept other’s beliefs through motivated reasoning toward agreement or disagreement with particular people. Human emotions designed for socialization and procreation (including feelings such as certainty) lead epistemic processes, rather than follow them.
most people don’t do mental arithmetic well. .67* −50 + .33* 650? It’s doable with practice, but about every deliberation involving more than one alternative outcome? NOTE: Scaling and scoring of value in well-defined categories for application of decision theory is a challenge, even with cognitive aids and careful thought. Yes, we can succeed with decision theory in specific domains useful for it, with practice and study, and use of cognitive aids so we don’t make many errors, that is, in specific contexts.
Following on from the previous point, human cognitive tendencies are not improved on by a normative model (such as Bayesian reasoning) that ignores them. Yes, there’s science, rigorous explanations, and so on. We’ve defeated our genetic limitations on our cognitive operations, to some extent. Or maybe closer to the truth is that we’ve succeeded in some contexts (hard sciences) defined by our limitations and continue to fail in others (psychology, social sciences, politics).This normative turn toward Bayesianism appears to me to be rationalist fantasy given too much, ahem, credence. EA researchers will do better turning back to traditional methods of critical thinking, argumentation, and scientific research. A virtue of those methods is that they were useful for people who held good-old-fashioned beliefs, as we all do.
Epistemic status of unpublished reports and blogposts. ME: Epistemic status statements seem to offer and validate what are usually considered fallacious reasons to reject an argument. My own epistemic status analyses do include whether my argument is self-serving.
Distorting influence of money in academia. ME: if the money is offered with an agenda, then yeah, it seeks research to support its agenda. Sometimes the agenda is justified by evidence and norms, while other times it’s not. AGI safety helps organizations looking to accumulate wealth or concentrate power with some minority group through deployment of AGI. That reflects the worldview of folks supporting AGI safety, rather than some conspiracy involving them.
First, I’d like to thank you both for this instructive discussion, and Thorstad for the post and the blog. Second, I’d like to join the fray and ask for more info on what might be the next chaters in the climate series. I don’t think it is a problem if you only focus on “Ord vs. Halstead”, but then perhaps you should make it more explicit, or people may take it as the final word on the matter.
Also, I commend your analysis of Ord, because I’ve seen people take his estimate as authoritative (e.g., here), instead of a guesstimate updated on a prior for extinction. However, to be fair to Ord, he was not running a complex scenario analysis, but basically updating from the prior for human extinction, conditioned on no major changes. That’s very different from Halstead’s report, so it might be proper to have a caveat emphasizing the differences in their scopes and methodologies (I mean, we can already see that in the text, but I’d not count on a readers inferential capacity for this). Also, if you want to dive more into this (and I’d like to read it), there’s already a thriving literature on climate change worst-case scenarios (particularly outside of EA-space) that perhaps you’d like to check—especially on climate change as a GCR that increases the odds of other man-made risks. But it’s already pretty good the way it is now.
Thanks Ramiro! Very helpful. I was intending to wrap up the climate portion of “Exaggerating the risks” with some more discussion of Halstead, and some general lessons. I started my discussion with climate risks because I think that climate risks are among the most empirically tractable risks, and one of the places where a frequently-cited estimate seems much too high.
My intention was to move after that towards some risks that the EA community emphasizes more, such as engineered pandemics and artificial intelligence. These topics take a bit more care, since by construction it is harder to get evidence about such matters, and I have to admit a bit of reluctance to speculate too broadly about them. My tentative plan is to say a few words about the Carlsmith report next. I guess you might know that I was one of the reviewers for the Carlsmith report. I didn’t think the risk was very high. The internet wasn’t particularly happy about this. (For a while, LessWrong’s top comment on the matter was: “I guffawed when I saw Thorstad’s Overall ~P Doom 0.00002%, really? And some of those other probabilities weren’t much better. Calibrate people”). I’d like to explain why I still don’t think the risk is very high.
Do you have any favorite readings on worst-case climate risk? I was happy to see that the Kemp et al. piece made it into PNAS. I hope that this will give the literature on worst-case climate risk some much-needed visibility. (I am quite concerned about worst cases! I just think that outright extinction is a very unlikely scenario, even among worst cases).
Hmm, let me know if you have any thoughts on my responses to your request for my takes, David.
Ramiro, I’m curious about resources that you want to share about climate change, it is the only GCR that EA’s regularly deny is a GCR, for some reason. I don’t think David’s question is entirely fair, but paper topics that could illustrate some expectations include:
multi-breadbasket failure due to extreme weather and drought
tipping elements posed to fall this century (including the Amazon),
the signed climate emergency paper,
recent papers about methane hydrate melting in the past,
(informal) analyses of the recent summit rain on Greenland
recent analyses of pressures on rate of melting of the Antarctic
notes from climate scientists that IPCC models leave out positive feedbacks from physical forcings on tipping elements like:
warming ocean currents running against our ice sheets
moraines, drainage holes, ice darkening, and bottom lubrication of Greenland ice sheets
change of snow to rain on Greenland as Greenland receives warmer weather and Greenland’s altitude drops
changes in wind patterns carrying moisture to different places globally
slowing of the AMOC as freshening occurs in the North Atlantic
burning and cutting of the Amazon rainforest
increased or continual fires in permafrost regions
or feedbacks from declining carbon sinks, like:
respiration increase past photosynthesis thresholds in plants
Brazil rainforest change to a carbon source and savannah
decline of plankton due to acidification, ocean heat waves, and declines in certain ocean species (for example, whales)
forest fires in the permafrost
desertification during long-term drought
the feasibility and timeliness of BECCS or DACCS at scale
the general trend of decline in predicted GAST increases required to tip large Earth system tipping elements.
an expected increase in human pressures on natural systems as weather and climate worsens (for example, increased pressure on fisheries as they decline)
These topics are what Halstead didn’t really draw together or foresee had implications this century.
Below is a prediction that I posted to gjopen a few months ago, at the start of their series of questions on climate change. It was not written for an EA audience, but it does show my thinking on the matter. Maybe I’m just mistaken that global society will totally flub our response to the GCR that is climate destruction. Maybe that is just what is happening so far but we will radically change for the better. Meanwhile, I reject the EA claim that climate change is not a neglected cause area, but I speculate that EA’s think climate change is intractable. It is not intractable. There are multiple pathways to solutions, but only the muddling ones appeal to me. The extreme technology pathway (nanotech) is actually more frightening than climate change. Nanotechnology is a GCR of its own.
...
Our civilization is on a pathway to make Earth uninhabitable for any large group of humans by 2100, all other things equal. I suppose there might be a few humans in some underwater city, underground camp, or space station.
We have had muddling solutions available for 50 years. A muddling solution is a sensible but reactive solution to a predicted problem, that is implemented quickly, that is not terribly innovative, and is followed for as long as necessary, meaning decades or even centuries.
Here’s a list of muddling solutions that could have prevented our problems if resorted to them beginning in the 1970′s:
* providing family planning services globally
* encouraging access to education and financial opportunities for women worldwide
* voluntarily reducing the birth rate across the world to 1.5 (1-2 children)
* relying on vegetarian (soy or amino-supplemented staple grains) protein
* subsidizing conservation and micro-grid technologies, not oil and gas industries
* removing all personhood rights from corporations
* raising the fuel economy of cars above 50 mpg and preferring trains, taxis, or human-powered vehicles
* emphasizing water conservation in agriculture
* forcing costs of industrial and construction waste onto companies, suppliers, or consumers
* maintaining regulations on the finance and credit industries (preventing their obvious excesses)
* protecting most land areas from development and only allowing narrow human corridors through them
* disallowing advertising of vice goods (alcohol, cigarettes, pornography, restaurant foods, candy, soda)
* avoiding all medical and pharmaceutical advertising
* disallowing commercial fishing and farm-animal operations
* providing sewage handling and clean water globally
* preventing run-off from industrial agriculture
* requiring pesticides to meet certain criteria
* encouraging wider use of alternative agriculture methods
* avoiding low-value (i.e., most) uses of plastic
* recycling all container materials in use (wood, metal, glass, plastic, etc)
* capturing all minerals and metals contained in agricultural, industrial, consumer and other waste streams
* and the list goes on…
Some people believe that contraception violates their religion. Some believe that humans should be able to live everywhere regardless of ecological impacts. Vices are the spice of life for most people. There were incentives to avoid all the past solutions on my list, I admit. However, those solutions, implemented and accepted globally, would have prevented catastrophe. This list is true to the thought experiment, “What could we have done to avoid our climate change problem over the last 50 years that we knew to do but didn’t do?” In my view, those solutions are obviously necessary and not overly burdensome. A small percentage of people would have made a lot less money. A lot of illness and suffering in our society would be absent. But just like all solutions that require action, these solutions could only succeed if they were implemented and accepted. Our civilization did not take those actions over the last 50 years.
Now we need other solutions (involving welcoming migration and choosing extreme curbs on birth rate and consumption in developed countries) as well as those on my list, but much faster (for example, to save our ocean life from acidification, overfishing, and pollution effects over the next few decades). People in the developed world won’t do it. Instead, the developed world will follow conventional wisdom.
Conventional wisdom is to:
* wall ourselves off (for example, ignore others’ well-being, hoard resources, and wait for technology breakthroughs)
* innovate our way out (for example, through intensive development of breakthrough technologies)
I don’t think walling off will work, because the natural systems that are sometimes called tipping points are now changing. The effects of those tipping points will cut off supply chains over the next few decades, leading to multi-breadbasket failure, destroyed critical infrastructure, and destroyed political systems. Every country is vulnerable to those consequences.
Theoretically, we can innovate our way out. However, the innovations need to address more than energy production. They have to let us:
* control local weather.
* remove GHGs from the atmosphere.
* replace modern agriculture at scale.
* quickly reverse ocean acidification.
* reverse ecosystem destruction or replace ecosystems (for example, replace extinct pollinators).
* remove pollution quickly (within months or years) from land and ocean pollution sinks.
* replace modern manufacturing at scale.
No futuristic technology can meet the required timeline except for large-scale manufacturing with nanotechnology (assembling materials and self-assembling devices, from micro- to macro-scale, at extreme speed). The timeline becomes shorter with each decade that passes. We won’t recognize the extreme impact of the current processes for another 10-20 years. I think the latest we could introduce nanotechnology to do all those things and still have a livable Earth for the entire global population is 2040, before ecosystem damage becomes so great that it destroys civilization on its own. But it won’t happen in time.
Instead, after 2060, we’ll be left with:
* very little good topsoil or clean water anywhere
* poor air quality in most places (dust storms, toxic algae gassing off, air pollution from local manufacturing)
* no guarantee of mild weather anywhere in any season (so any farming has to be in artificially protected environments),
* most land species extinct (including pollinators),
* mostly dead oceans (no pteropods or zooplankton and declining phytoplankton).
Today:
* the Arctic ice is retreating fast
* the Amazon is becoming a carbon source
* the permafrost is melting faster (with local feedback from fires and the warming Arctic ocean)
* Greenland is having unexpectedly large melting events
* the jet stream is becoming wavy instead of hanging in a tight circle
* surprising levels of GHGs other than CO2 are already in the atmosphere
Climate modelers are generally playing catch-up with all these changes; IPCC scenarios don’t really account for tipping-point processes happening as quickly as they are. Countries have no plan to stop producing CO2 or releasing other GHGs, so the IPCC’s business-as-usual scenario will run for as long as it can. None of the anticipated CCS solutions are feasible and timely at scale (including planting trees).
By the end of the century:
* The Greenland ice sheet and some or all of the West Antarctic will have melted.
* The methane hydrates of the [ESIS] in the Arctic will have dumped their gas load
* the permafrost across the high latitudes will be either melted or refreezing in a mini-ice age
* the Amazon will have long since disappeared in drought and lightning fires
* Several large heat waves will have hit the tropical latitudes, killing every mammal outdoors (not wearing a cooling jacket) after several hours.
* there won’t be significant land or ocean sinks for CO2.
* tropical temperatures will be unlivable without cooling technologies.
* the 6th great extinction will be over.
* at least one human famine will have hit all countries around the world simultaneously.
I personally believe that climate change is now self-amplifying. We can slow the rate by removing anthropogenic forcings of global atmospheric heating, but if we are late in doing that, then we have already lost control of the heating rate to intrinsic feedbacks. I don’t know how far along that self-amplification is now. I do know that between the release of frozen GHGs, the destruction of CO2 sinks, and the loss of stratocumulus cloud cover, the Earth can take us past 6°C of warming. [GAST increase]
Today’s problem lies with the situation and human psychology. Obvious solutions are unpalatable.
First, you can’t point at plenty, predict it will all be gone in a few decades, and then ask people to deprive themselves of that plenty. We don’t choose voluntary deprivation for the greater good based on theories or science.
Second, the problem with nonlinear changes in climate conditions and Earth’s habitability is that we cannot conceive of them as real. But they are real. Would people rather die than give up hamburgers? Maybe not, but if we wait until that seems like a real decision to make, it will be too late. When the signal from climate change is so strong that everyone is terrified and willing to do something like give up hamburgers, it will be too late to give up hamburgers. Instead, the consequences of raising all those cows will already be knocking.
Finally, the consequences of climate change are not our instant extinction. Instead, humanity will go through a drawn-out, painful withering of life quality against increasing harms from climate events, social upheavals, and decreasing resources. That situation will erode heroic efforts and noble causes, extinguishing hope as frustrating obstacles mount for any organized effort to stop climate change.
I think human society in the developed world just hasn’t felt the climate change signal yet, and isn’t really ready to face the problem until it does. And then it will be too late to do much of anything about climate change. I used to think “too late” meant 2060, about when we would realize that CCS solutions were always hypothetical. Now I think it means 2030, the earliest that we might lock in the death of ocean life from multiple anthropogenic forcings, suffer a giant methane bubble from the Arctic, or see massive melt on Greenland. That’s why I think my prediction is correct: we really have less than a decade to push our climate (and biosphere) onto another pathway. All the solutions I listed are how to do it. Anyone think they look worthwhile?
...
Thank you for reading, if you got this far. This is just a scenario and analysis with a few proposed plausible alternatives. If your counterargument is that we have more electric cars or that solar is cheaper than ever, then you need to explore the problem more carefully.
Thanks Noah, will do! Sorry for the delay. I can’t manage to take a full week off for vacation, so I’m taking five scattered days off this month and today is one of my days off. I’ll try to reply as soon as I can.
Pft, that’s OK, David. Reading over how much I wrote, I’ll be surprised if you get through it all. Thanks for showing some interest, and don’t forget to enjoy some of that vacation time! Bummer it’s split up like that.
Thanks Noah! Yeah, it’s better than nothing but every once in a while it’s nice to just spend a day at home, cook a nice meal and watch a movie.
I really will get back to you. I just need a bit :).
Thanks Noah, and sorry again for the late reply. (Vacation is over, so it’s back to work today).
I’ll do my best to run a series on the singularity hypothesis paper soon! (I’ve got a pretty big backlog, so it might be a few months, but the paper is up on the GPI website if you want to take a look before then: https://globalprioritiesinstitute.org/against-the-singularity-hypothesis-david-thorstad/).
Thanks a lot for your suggestions. I’m very glad that you didn’t get upset with me for making them. I’m … trying to tone things down a bit at the start, and I think these are some of the topics that might cause a bit more controversy. I’m also continually impressed by the ability of EAs to have hard conversations. Maybe it’s time to start on some of these topics.
I’ll read your red-teaming contest submission shortly.
I think your very helpful and honest response about the status of women in the EA community is perhaps a good reason to talk about it: many people just aren’t paying much attention to these issues. See this for the latest public problem statement (https://forum.effectivealtruism.org/posts/t5vFLabB2mQz2tgDr/i-m-a-22-year-old-woman-involved-in-effective-altruism-i-m), although there’s a fairly long history of issues going back a few years, many of which received much poorer responses from the community.
I think your point about epistemic status statements is an important one that I should fold into the discussion of credence-stating. I suspect that merely stating epistemic statuses may not be enough to secure good epistemic standing for a literature largely founded on blog posts and forum posts (and that it really would be better to have a higher proportion of published work). I’ll see what I can do to write something up about that, again conscious of the irony that I am typing these words in a forum post about my blog.
Thanks for being patient with me Noah! I enjoyed this discussion. (I’m going to be checking the EA forum less in the coming weeks, since I’m not always a regular here, but I’ll try to check back when I can).
Sure, you’re welcome, one day is not long for me to wait. My thoughts:
I’m interested in your thoughts on the singularity, and am looking forward to reading your article.
My red-team submission needs better arguments, more content, and concision.
As for the status of women in the community: if this is about social behavior, then I favor dissolution of the social-community version of EA.
In case you follow up a bit more on the idea of cognitive aids.
Here are my two takes on epistemic status:
how EAs do it
how I do it in my daily life (I hope)
I am working on a write-up that addresses climate change impacts differently than Halstead, but progress is slow because my attention and time are divided. I will share the work once it’s complete.
Thanks Noah! Please do share.
Oh, I do! :)
On most topics relevant to this forum’s readers, that is. For example, I haven’t found a good conversation on longevity control, and I’m not sure how appropriate it is to explore here, but I will note, briefly, that once people can choose to extend their lives, there will be a few ways that they can choose to end their lives, only one of which is growing old. Life extension technology poses indirect ethical and social challenges, and widespread use of it might have surprising consequences.