New blog: Some doubts about effective altruism
I’m a research fellow in philosophy at the Global Priorities Institute. There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.
About me
I’m a research fellow in philosophy at the Global Priorities Institute, and a Junior Research Fellow at Kellogg College. Before coming to Oxford, I did a PhD in philosophy at Harvard under the incomparable Ned Hall, and a BA in philosophy and mathematics at Haverford College. I held down a few jobs along the way, including a stint teaching high-school mathematics in Lawrence, Massachusetts and a summer gig as a librarian for the North Carolina National Guard. I’m quite fond of dogs.
Who should read this blog?
The aim of the blog is to feature (1) long-form, serial discussions of views and practices in and around effective altruism, (2) driven by academic research, and from a perspective that (3) shares a number of important views and methods with many effective altruists.
This blog might be for you if:
You would like to know why someone who shares many background views with effective altruists could nonetheless be worried about some existing views and practices.
You are interested in learning more about the implications of academic research for views and practices in effective altruism.
You think that empirically-grounded philosophical reflection is a good way to gain knowledge about the world.
You have a moderate amount of time to devote to reading and discussion (20-30 minutes per post).
You don’t mind reading series of overlapping posts.
This blog might not be for you if:
You would like to know why someone who has little in common with effective altruists might be worried about the movement.
You aren’t keen on philosophy, even when empirically grounded.
You have a short amount of time to devote to reading.
You like standalone posts and hate series.
Blog series
The blog is primarily organized around series of posts, rather than individual posts. I’ve kicked off the blog with four series.
Academic papers: This series summarizes cutting-edge academic research relevant to questions in and around the effective altruism movement.
Existential risk pessimism and the time of perils:
Part 1 introduces a tension between Existential Risk Pessimism (risk is high) and the Astronomical Value Thesis (it’s very important to drive down risk).
Part 2 looks at some failed solutions to the tension.
Part 3 looks at a better solution: the Time of Perils Hypothesis.
Part 4 looks at one argument for the Time of Perils Hypothesis, which appeals to space settlement.
Part 5 looks at a second argument for the Time of Perils Hypothesis, which appeals to the concept of an existential risk Kuznets curve.
Parts 6-8 (coming soon) round out the paper and draw implications.
Academics review What We Owe the Future: This series looks at book reviews of MacAskill’s What We Owe the Future by leading academics to draw out insights from those reviews.
Part 1 looks at Kieran Setiya’s review, focusing on population ethics.
Part 2 (coming soon) looks at Richard Chappell’s review.
Part 3 (coming soon) looks at Regina Rini’s review.
Exaggerating the risks: I think that current levels of existential risk are substantially lower than many leading EAs take them to be. In this series, I say why I think that.
Part 1 introduces the series.
Part 2 looks at Ord’s discussion of climate risk in The Precipice.
Part 3 takes a first look at the Halstead report on climate risk.
Parts 4-6 (coming soon) wrap up the discussion of climate risk and draw lessons.
Billionaire philanthropy: What is the role of billionaire philanthropists within the EA movement and within a democratic society? What should that role be?
Part 1 introduces the series.
I’ll try to post at least one post a week for the next few months. Comment below to tell me what sort of content you would like to see.
A disclaimer: I am writing in my personal capacity
I am writing this blog in my personal capacity. The views expressed in this blog are not the views of the Global Priorities Institute, or of Oxford University. In fact, many of my views diverge strongly from views accepted by some of my colleagues. Although many hands have helped me to shape this blog, the views expressed are mine and mine alone.
FAQ
Q: Is this just a way of making fun of effective altruism?
A: Absolutely not. In writing this blog, I am not trying to ridicule effective altruism, to convince you that effective altruism is worthless, to convince effective altruists to abandon the movement, or to contribute to the destruction of effective altruism.
I take effective altruism seriously. I have been employed for several years by the Global Priorities Institute, a research institute at Oxford University dedicated to foundational academic research on how to do good most effectively. I have organized almost a dozen workshops on global priorities research. I have presented my work at other events within the effective altruism community, including several EAG and EAGx conferences. I have consulted for Open Philanthropy, posted on the EA Forum, and won prizes for my posts.
A view that I share with effective altruists is that it is very important to learn to do good better. I will count myself successful if some of my posts help others to do good better.
Q: Why not just post on the EA Forum? Why is a new blog needed?
A: The EA Forum is an important venue for discussions among effective altruists. I’ve posted on the EA Forum in the past, and won prizes for my posts.
As an academic, I aim to write for a broad audience. While I certainly hope that EAs will read and engage with my work (that’s why I’m posting here!), I also want to make my work accessible to others who might not usually read the EA Forum.
Q: I’d like to talk to you about X (something I liked; something I didn’t like; a guest post; etc.). How do I do that?
A: Post here or email me at david.thorstad@philosophy.ox.ac.uk. I don’t bite, I promise.
Everything else
Please comment below to let me know what you think and what you’d like to see. If you like the blog, consider subscribing, liking or sharing. If you don’t like the blog, my cat wrote it. If you really hate the blog, it was my neighbor’s cat.
Your blog’s name references “ineffective altruism” and it intends to criticize effective altruism, but your focus appears to be reification of prevailing views within the community with regard to existential risk from climate change. Your entire climate change analysis appears to summarize Halstead’s report and contrast it with Ord’s work. You judge two EAs against each other, not two EAs against prevailing discussions of climate change dangers outside the community. I would like to read your own analysis of where both Ord and Halstead are wrong, given your research work into climate change, since while anyone in EA can read Ord and Halstead, it appears that EAs have little to go on about the quality of Ord’s or Halstead’s research, except the EA brand and the typical use of those authors as sources on climate change risks. Compared to mainstream researchers of climate change, neither author sees climate change as particularly threatening, and that is a contrast you could draw upon.
On a separate topic, I would like to understand your own views on probabilism and Bayesian updating, if they are in any way different from EA recommendations of how to think about credences or risk.
Given that EA offers its own set of epistemic tools, its epistemic recommendations come from a small core of beliefs that EAs promulgate as part of the movement’s identity. EA epistemics are nonstandard. To the extent that people adopt those beliefs, they also adhere to the unofficial requirements of being part of the EA research or social community. It would be a welcome counterpoint, and show good-faith interest in criticizing the community, for you to take on such core beliefs and point out their failings as you find them. After all, effectiveness rarely allows for maintaining social pretenses in the name of good epistemics. This would assist the community in evolving its epistemic tools, thereby improving the effectiveness of EA researchers. You could contextualize its existing epistemic tools or suggest new ones using your background in philosophy.
I would like to see more content critical of EA core beliefs available on your blog toward the purpose of helping the EA research community improve its work. Alternatively, I suggest a name change for your blog to remove the ironic reference to ineffective altruism. So long as you defend prevailing EA views on your blog, the name of the blog (ineffective altruism blog) misrepresents your opinion of prevailing EA views, and has unintended irony. An earnest blog title could serve you better.
Thanks Noah! I have four series on my blog. The series, “Exaggerating the risks”, makes the case that many different risk estimates have been exaggerated. I’m focusing on Ord’s estimate of climate risk as a first case study. I’ll try to draw some lessons from that discussion, then use them to discuss some other risks where my opinions may be more controversial among effective altruists. For example, I reviewed the Carlsmith report and assigned a much lower probability to AI risk than Carlsmith did. I’ll try to say why I did that.
Like many philosophers, I was raised in grad school to be a Bayesian. I can’t say I’ve never had doubts (I work on bounded rationality, after all), but I’m fairly sympathetic to the broad Bayesian picture.
I really appreciate the suggestion to take on some core epistemic tools and beliefs. Are there any that you would especially like to hear about?
Never fear, I do have some sharper criticisms to make. I’m in the midst of pressing one (that I don’t believe the Time of Perils Hypothesis) in my series “Existential risk pessimism and the time of perils”. Beyond that … Perhaps you’re right that I should punch things up a bit? I’m trying to take this a bit slowly.
There is some controversy about economic estimates of damages from climate destruction in the mainstream. You might find more contrast and differences if you take a look outside EA and economics for information on climate destruction.
You distinguish catastrophic impacts from existential impacts. I’m conflicted about the distinction you draw, but I noted this conflict in Toby Ord’s discussion as well: he seems to think a surviving city is sufficient to consider humanity “not extinct”. While I agree with you all, I think these distinctions do not motivate many differences in proactive response; that is, whether a danger is catastrophic, existential, or extinction-level, it’s still pretty bad, and recommendations for change or effort to avoid lesser dangers are typically in line with the recommendations to avoid greater dangers. Furthermore, a climate catastrophe does increase the risk of human extinction, considering that climate change worsens progressively over decades, even after all anthropogenic GHG production has stopped. I would like to learn more about your thoughts on those differences, particularly how they influence your ethical deliberations about policy changes in the present.
I’m interested in your critical thoughts on:
typical application or interpretation of Bayesianism in EA.
suitability of distinct EA goals: toward charitable efforts, AGI safety, or longtermism.
earning to give and with respect to what sorts of jobs.
longevity-control, personal choice over how long you live, once life-extension is practical.
expected value calculations wrt fanatical conclusions, huge gains and tiny odds.
the moral status of potential future people in the present.
the value of risk aversion versus commitment to minuscule chances of success.
any differing views on technological stagnation or value lock-in from longtermism.
your thoughts on cluster thinking as introduced by Holden Karnofsky.
the desirability and feasibility of claims to influence or control future people’s behavior.
the positive nature of humanity and people (e.g., are we innately “good”?).
priority of avoiding harm to a percentage minority when that harm benefits the majority.
the moral status of sentient beings and discounting of moral status by species.
moral uncertainty as a prescriptive ethical approach.
I’ve done my best on this forum to distinguish my point of view from EAs wherever it was obvious that I disagreed. I’ve also followed the works of others here who hold substantially different points of view than the EA majority (for example, about longtermism). If your disagreements are more subtle than mine, or you would disagree with me on most things, I’m not one to suggest topics that you and I agree on. But the general topics can still be addressed even though we disagree. After all, I’m nobody important but the topics are important.
If you do not take an outsider’s point of view most of the time, then there’s no need to punch things up a bit, but more a need to articulate the nuanced differences you have as well as advocate for the EA point of view wherever you support it. I would still like to read your thoughts from a perspective informed by views outside of EA, as far outside as possible, whether from philosophers that would strongly disagree with EA or from other experts or fields that take a very different point of view than EA’s.
I have advocated for an alternative approach to credences: to treat them as binary beliefs, or as subject to constraints (nuance) as one gains knowledge that contradicts some of their elements. And an alternative approach to predictions, one of preconditions leading to consequences, with the predictive work involved being one of identifying preconditions with typical consequences. Identification of preconditions in that model involves matching actual contexts to prototypical contexts, with the type of match allowing determination of plausible, expected, or optional (action-decided) futures predictable from the match’s result. My sources for that model were not typical for the EA community, but I did offer it here.
If you can do something similar with knowledge of your own, that would interest me. Any tools that are very different but have utility are interesting to me. Also, how you might contextualize current epistemic tools, as I said before, interests me.
Thanks! :)
Thanks Noah!
The distinction between catastrophic and existential risks is a standard distinction in the literature and generally considered to be very important. On the notion of a catastrophic risk see Bostrom and Cirkovic (2008). On the notion of an existential risk … that’s still up for grabs, but the Bostrom (2013) definition I cite is a decent guide.
The reason why many people have thought it is important to distinguish catastrophic from existential risks is that “pretty bad” can cover differences of many orders of magnitude. There are billions of people alive right now, and catastrophic risks could make many of their lives bad. But the number of potential future people is so large I’d need a Latin textbook before I knew how to say it, and existential risks could make those people’s lives pretty bad (or non-existent). The thought is, at least on the standard line, that existential risks would be many, many, many times worse than catastrophic risks, so that it’s really quite important to make sure that something really poses an existential risk as opposed to a catastrophic risk.
I’ll take a look at some of your suggestions—thanks! Maybe we can talk about AGI safety in a few weeks. My current plan is to talk through the Carlsmith report a bit, but I might start with my paper “Against the singularity hypothesis”.
Do you know the philosophical literature on credences as binary beliefs? This is definitely a position that you can hold (that credences are binary beliefs), but it’s a bit controversial. I guess Jackson (2020) is a pretty good overview.
Let me get your take on a few more controversial topics:
Attitudes towards expertise in the EA community. (EAs may not place enough trust in outside experts).
Standards of evidence and proof in the EA community. (I think they are too low).
Status of women in the EA community. (I think it could be better).
Stating credences as a practice (and why I think it’s often harmful).
Epistemic status of unpublished reports and blogposts. (I think these are given too much weight. I do see the irony in this statement).
Distorting influence of money in academia. (Does using money to build a research field conduce towards the truth?).
Sound interesting? A bit too much for now? I’m not sure how much I want to dial up the controversy just yet, but perhaps I can punch it up a little.
OK, well, I browsed the two articles; I don’t want to get into a semantics argument. There’s definitely a tension between some uses of existential risk and others, but some agreement about what count as global catastrophic risks.
AGI safety is about whether some organizations can develop robots or artificial life that have human-level or greater intelligence but do the bidding of humans reasonably well. AGI designers intend to create slaves for humanity. I don’t foresee that ending well for humanity, and it doesn’t start well for AGIs. Robotic and software automation is a related technology development pathway with AGI as one possible endpoint, but uses of automation are underexplored, and AGI might not be necessary to develop in order to serve some stakeholders. Of course, that doesn’t mean those folks should be served.
I’m interested in your paper on the singularity hypothesis.
I browsed the Jackson paper. I offered a model of EAs actually scoring the intensity of feelings separately from whatever evidence supports their credence level here. The analysis of good-faith EAs doing feeling-intensity measurements identifies a potential defense of credence eliminativism, provided one can believe that feelings of certainty are orthogonal to evidence accumulation in support of a proposition. I do believe that is the case.
What Jackson identifies as “simple, all-out belief” is in fact a proposition that passes the filter of a mental (rational, logical, ontology-matching) model but has some feelings associated with it, maybe some of which summarize results of a self-applied epistemic evidence-measurement tool, or maybe not. Most of the time, people coast on easy matches to existing cognitive templates, performing various complicated operations without much new learning or internal conflict involved. Sometimes there are more complicated feelings implying self-contradictory epistemic states, but those can involve epistemic or motivated reasoning, and be about the same or a different focus of attention than the one identified as consciously considered. A solution is to use a cognitive aid, one that, for example:
reminds you of collected information.
corrects for differences of emotional impact created by different kinds of evidence.
discourages cognitive bias against valid evidence or valid premises.
maintains your commitment to the relevance of a specific focus of attention.
Since you asked for my takes on your list of topics, here they are:
Attitudes towards expertise in the EA community. ME: EA research can be original in a field; for example, suppose Gibbins develops weather control technology, then obviously EAs have reason to cite her work. Alternatively, Halstead’s research is a report on findings by other researchers. Given the broad direct research on climate change conducted outside the EA community, turning to sources other than Halstead is easier to justify. In EA, outside research is digested for consideration inside the ITN framework. In that case, revisiting an existing ITN analysis could involve redoing the same outside research gathering over again.
Standards of evidence and proof in the EA community. ME: EA could use explicit cognitive aid designs to support evidence accumulation and processing, such as software that tracks and accumulates evidence, assisted by an inference engine for argument processing, something that ignores weighting of evidence in favor of explicit statements of conclusions or flagging of evidence conflicts, ambiguities, and information gaps.
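To make the aid described above a little more concrete, here is a minimal, purely hypothetical sketch: a tracker that accumulates evidence for and against named claims and flags conflicts and information gaps, rather than assigning numeric weights. The class name, claims, and evidence strings are all illustrative inventions, not anyone’s actual tool.

```python
# Hypothetical sketch of an evidence-tracking cognitive aid: it records
# evidence on either side of a claim and flags the claim's status
# ("conflict", "information gap", or "uncontested") instead of weighting.
from collections import defaultdict

class EvidenceTracker:
    def __init__(self):
        # claim -> {"for": [evidence, ...], "against": [evidence, ...]}
        self.claims = defaultdict(lambda: {"for": [], "against": []})

    def record(self, claim, evidence, supports):
        side = "for" if supports else "against"
        self.claims[claim][side].append(evidence)

    def flags(self, claim):
        """Flag a conflict if both sides have evidence, a gap if neither does."""
        entry = self.claims[claim]
        if entry["for"] and entry["against"]:
            return "conflict"
        if not entry["for"] and not entry["against"]:
            return "information gap"
        return "uncontested"

tracker = EvidenceTracker()
tracker.record("climate risk is existential", "Ord (2020) estimate", supports=True)
tracker.record("climate risk is existential", "Halstead report", supports=False)
print(tracker.flags("climate risk is existential"))  # conflict
```

The point of the design, in the spirit of the comment above, is that the tool surfaces disagreements and missing information explicitly rather than hiding them inside a single credence number.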
Status of women in the EA community. ME: Women in the community? I have no idea. My only interface with EAs is through this forum, and I’m not even sure most of the time what the gender is of the people reading or responding to what I write. My ignorance about social relations among EAs dwarfs my knowledge of it. I’ve never been to an EA conference, for example. Is there a lot of drinking or drugs? Do people retire into hotel rooms and have sex with new acquaintances? Is it a party or a serious affair, typically? Is there lots of private gossip and petty politics, or are people just about their cause and their values? Apparently there are EA dormitories or something? What goes on there? I wouldn’t know, and certainly wouldn’t know whether there’s a lot of misogyny among EAs.
Stating credences as a practice (and why I think it’s often harmful). ME: yes, by credence you mean the technical definition that Jackson identifies, correct? A proposition with a probability assigned to it? The target for my red-team criticism of EAs was the use of credences and “updating”, because both showed a lack of epistemic procedures to help identify preconditions and maintain achievement of goals through compensating actions that create preconditions for goal achievement. Credences and Bayesianism could serve AI and forecasters in some domains. Most EA use of credences is a false imitation of a different kind of cognitive processor than a human being. We are different from Bayesian processors in several ways, including:
our selective and fallible memory. Without cognitive aids, our domain-specific ontologies decay or mutate, particularly if they are detailed or frequently updated, as an expert’s might be. Typically, we fail to consider evidence after a time-delay in cognitive processing of that evidence. We just stop thinking of it.
we learn by imitation and report. We adopt beliefs and world models wholesale based on imitation of others’ behaviors and thinking. We learn through believing what we’re told and imitating what others do.
we uncritically accept others’ beliefs through motivated reasoning toward agreement or disagreement with particular people. Human emotions designed for socialization and procreation (including feelings such as certainty) lead epistemic processes, rather than follow them.
most people don’t do mental arithmetic well. 0.67 × −50 + 0.33 × 650? It’s doable with practice, but what about every deliberation involving more than one alternative outcome? NOTE: Scaling and scoring of value in well-defined categories for application of decision theory is a challenge, even with cognitive aids and careful thought. Yes, we can succeed with decision theory in specific domains useful for it, with practice and study, and use of cognitive aids so we don’t make many errors, that is, in specific contexts.
Following on from the previous point, human cognitive tendencies are not improved on by a normative model (such as Bayesian reasoning) that ignores them. Yes, there’s science, rigorous explanations, and so on. We’ve defeated our genetic limitations on our cognitive operations, to some extent. Or maybe closer to the truth is that we’ve succeeded in some contexts (hard sciences) defined by our limitations and continue to fail in others (psychology, social sciences, politics). This normative turn toward Bayesianism appears to me to be rationalist fantasy given too much, ahem, credence. EA researchers will do better turning back to traditional methods of critical thinking, argumentation, and scientific research. A virtue of those methods is that they were useful for people who held good-old-fashioned beliefs, as we all do.
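For readers unfamiliar with the mechanics being debated here, the calculations themselves are simple for a machine even when they are hard for a human. Here is an illustrative sketch, with entirely made-up numbers (they are not anyone’s actual credences), of the expected-value arithmetic from the point above and a single Bayesian update:

```python
# Illustrative only: all probabilities and payoffs below are arbitrary.

# Expected value: a 67% chance of losing 50 versus a 33% chance of gaining 650.
outcomes = [(-50, 0.67), (650, 0.33)]
expected_value = sum(value * prob for value, prob in outcomes)
print(round(expected_value, 2))  # 181.0

# One Bayesian update: prior credence 0.10 in hypothesis H, then evidence E
# arrives with P(E|H) = 0.8 and P(E|not-H) = 0.2.
prior = 0.10
likelihood_h, likelihood_not_h = 0.8, 0.2
posterior = (likelihood_h * prior) / (
    likelihood_h * prior + likelihood_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.308
```

The commenter’s point stands either way: humans don’t run this arithmetic natively across every deliberation, which is part of the case for cognitive aids rather than unaided “updating”.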
Epistemic status of unpublished reports and blogposts. ME: Epistemic status statements seem to offer and validate what are usually considered fallacious reasons to reject an argument. My own epistemic status analyses do include whether my argument is self-serving.
Distorting influence of money in academia. ME: if the money is offered with an agenda, then yeah, it seeks research to support its agenda. Sometimes the agenda is justified by evidence and norms, while other times it’s not. AGI safety helps organizations looking to accumulate wealth or concentrate power with some minority group through deployment of AGI. That reflects the worldview of folks supporting AGI safety, rather than some conspiracy involving them.
First, I’d like to thank you both for this instructive discussion, and Thorstad for the post and the blog. Second, I’d like to join the fray and ask for more info on what might be the next chapters in the climate series. I don’t think it is a problem if you only focus on “Ord vs. Halstead”, but then perhaps you should make it more explicit, or people may take it as the final word on the matter.
Also, I commend your analysis of Ord, because I’ve seen people take his estimate as authoritative (e.g., here), instead of a guesstimate updated on a prior for extinction. However, to be fair to Ord, he was not running a complex scenario analysis, but basically updating from the prior for human extinction, conditioned on no major changes. That’s very different from Halstead’s report, so it might be proper to have a caveat emphasizing the differences in their scopes and methodologies (I mean, we can already see that in the text, but I’d not count on a reader’s inferential capacity for this). Also, if you want to dive more into this (and I’d like to read it), there’s already a thriving literature on climate change worst-case scenarios (particularly outside of EA-space) that perhaps you’d like to check, especially on climate change as a GCR that increases the odds of other man-made risks. But it’s already pretty good the way it is now.
Thanks Ramiro! Very helpful. I was intending to wrap up the climate portion of “Exaggerating the risks” with some more discussion of Halstead, and some general lessons. I started my discussion with climate risks because I think that climate risks are among the most empirically tractable risks, and one of the places where a frequently-cited estimate seems much too high.
My intention was to move after that towards some risks that the EA community emphasizes more, such as engineered pandemics and artificial intelligence. These topics take a bit more care, since by construction it is harder to get evidence about such matters, and I have to admit a bit of reluctance to speculate too broadly about them. My tentative plan is to say a few words about the Carlsmith report next. I guess you might know that I was one of the reviewers for the Carlsmith report. I didn’t think the risk was very high. The internet wasn’t particularly happy about this. (For a while, LessWrong’s top comment on the matter was: “I guffawed when I saw Thorstad’s Overall ~P Doom 0.00002%, really? And some of those other probabilities weren’t much better. Calibrate people”). I’d like to explain why I still don’t think the risk is very high.
Do you have any favorite readings on worst-case climate risk? I was happy to see that the Kemp et al. piece made it into PNAS. I hope that this will give the literature on worst-case climate risk some much-needed visibility. (I am quite concerned about worst cases! I just think that outright extinction is a very unlikely scenario, even among worst cases).
Hmm, let me know if you have any thoughts on my responses to your request for my takes, David.
Ramiro, I’m curious about the resources that you want to share about climate change; it is the only GCR that EAs regularly deny is a GCR, for some reason. I don’t think David’s question is entirely fair, but paper topics that could illustrate some expectations include:
multi-breadbasket failure due to extreme weather and drought
tipping elements poised to tip this century (including the Amazon)
the signed climate emergency paper
recent papers about methane hydrate melting in the past
(informal) analyses of the recent summit rain on Greenland
recent analyses of pressures on the rate of melting of the Antarctic
notes from climate scientists that IPCC models leave out positive feedbacks from physical forcings on tipping elements like:
warming ocean currents running against our ice sheets
moraines, drainage holes, ice darkening, and bottom lubrication of Greenland ice sheets
change of snow to rain on Greenland as Greenland receives warmer weather and Greenland’s altitude drops
changes in wind patterns carrying moisture to different places globally
slowing of the AMOC as freshening occurs in the North Atlantic
burning and cutting of the Amazon rainforest
increased or continual fires in permafrost regions
or feedbacks from declining carbon sinks, like:
respiration increase past photosynthesis thresholds in plants
Brazil rainforest change to a carbon source and savannah
decline of plankton due to acidification, ocean heat waves, and declines in certain ocean species (for example, whales)
forest fires in the permafrost
desertification during long-term drought
the feasibility and timeliness of BECCS or DACCS at scale
the general trend of decline in predicted GAST (global average surface temperature) increases required to tip large Earth system tipping elements.
an expected increase in human pressures on natural systems as weather and climate worsens (for example, increased pressure on fisheries as they decline)
These topics are ones Halstead didn’t really draw together or foresee as having implications this century.
Below is a prediction that I posted to gjopen a few months ago, at the start of their series of questions on climate change. It was not written for an EA audience, but it does show my thinking on the matter. Maybe I’m just mistaken that global society will totally flub our response to the GCR that is climate destruction. Maybe that is just what is happening so far, but we will radically change for the better. Meanwhile, I reject the EA claim that climate change is not a neglected cause area, but I speculate that EAs think climate change is intractable. It is not intractable. There are multiple pathways to solutions, but only the muddling ones appeal to me. The extreme technology pathway (nanotech) is actually more frightening than climate change. Nanotechnology is a GCR of its own.
...
Our civilization is on a pathway to make Earth uninhabitable for any large group of humans by 2100, all other things equal. I suppose there might be a few humans in some underwater city, underground camp, or space station.
We have had muddling solutions available for 50 years. A muddling solution is a sensible but reactive solution to a predicted problem, that is implemented quickly, that is not terribly innovative, and is followed for as long as necessary, meaning decades or even centuries.
Here’s a list of muddling solutions that could have prevented our problems had we resorted to them beginning in the 1970s:
* providing family planning services globally
* encouraging access to education and financial opportunities for women worldwide
* voluntarily reducing the birth rate across the world to 1.5 (1-2 children)
* relying on vegetarian (soy or amino-supplemented staple grains) protein
* subsidizing conservation and micro-grid technologies, not oil and gas industries
* removing all personhood rights from corporations
* raising fuel economy of cars over 50 mpg and preferring trains, taxis, or human-powered vehicles
* emphasizing water conservation in agriculture
* forcing costs of industrial and construction waste onto companies, suppliers, or consumers
* maintaining regulations on the finance and credit industries (preventing their obvious excesses)
* protecting most land areas from development and only allowing narrow human corridors through them
* disallowing advertising of vice goods (alcohol, cigarettes, pornography, restaurant foods, candy, soda)
* avoiding all medical and pharmaceutical advertising
* disallowing commercial fishing and farm-animal operations
* providing sewage handling and clean water globally
* preventing run-off from industrial agriculture
* requiring pesticides to meet certain criteria
* encouraging wider use of alternative agriculture methods
* avoiding low-value (most) use of plastic
* recycling all container materials in use (wood, metal, glass, plastic, etc)
* capturing all minerals and metals contained in agricultural, industrial, consumer and other waste streams
* and the list goes on…
Some people believe that contraception violates their religion. Some believe that humans should be able to live everywhere regardless of ecological impacts. Vices are the spice of life for most people. There were incentives to avoid all the past solutions on my list, I admit. However, those solutions, implemented and accepted globally, would have prevented catastrophe. The list is true to the thought experiment: “What could we have done over the last 50 years to avoid our climate change problem that we knew to do but didn’t do?” In my view, those solutions are obviously necessary and not overly burdensome. A small percentage of people would have made a lot less money, and a lot of the illness and suffering in our society would be absent. But like all solutions that require action, these could only succeed if they were implemented and accepted. Our civilization did not take those actions over the last 50 years.
Now we need other solutions (involving welcoming migration and choosing extreme curbs on birth rates and consumption in developed countries) as well as those on my list, but implemented much faster (for example, to save our ocean life from acidification, overfishing, and pollution effects over the next few decades). People in the developed world won’t do it. Instead, the developed world will follow conventional wisdom.
Conventional wisdom is to:
* wall ourselves off (for example, ignore others’ well-being, hoard resources, and wait for technology breakthroughs)
* innovate our way out (for example, through intensive development of breakthrough technologies)
I don’t think walling off will work, because the natural systems that are sometimes called tipping points are now changing. The effects of those tipping points will cut off supply chains over the next few decades, leading to multi-breadbasket failure, destroyed critical infrastructure, and destroyed political systems. Every country is vulnerable to those consequences.
Theoretically, we can innovate our way out. However, the innovations need to address more than energy production. They have to let us:
* control local weather
* remove GHGs from the atmosphere
* replace modern agriculture at scale
* quickly reverse ocean acidification
* reverse ecosystem destruction or replace ecosystems (for example, replace extinct pollinators)
* remove pollution quickly (within months or years) from land and ocean pollution sinks
* replace modern manufacturing at scale
No futuristic technology can meet the required timeline except for large-scale manufacturing with nanotechnology (assembling materials and self-assembling devices, from micro- to macro-scale, at extreme speed). The timeline becomes shorter with each decade that passes. We won’t recognize the extreme impact of the current processes for another 10-20 years. I think the latest we could introduce nanotechnology to do all those things and still have a livable Earth for the entire global population is 2040, before ecosystem damage becomes so great that it destroys civilization on its own. But it won’t happen in time.
Instead, after 2060, we’ll be left with:
* very little good topsoil or clean water anywhere
* poor air quality in most places (dust storms, toxic algae gassing off, air pollution from local manufacturing)
* no guarantee of mild weather anywhere in any season (so any farming will have to be in artificially protected environments)
* most land species extinct (including pollinators)
* mostly dead oceans (no pteropods or zooplankton, and declining phytoplankton)
Today:
* the Arctic ice is retreating fast
* the Amazon is becoming a carbon source
* the permafrost is melting faster (with local feedback from fires and the warming Arctic ocean)
* Greenland is having unexpectedly large melting events
* the jet stream is becoming wavy instead of hanging in a tight circle
* surprising levels of GHGs other than CO2 are already in the atmosphere
Climate modelers are in general playing catch-up with all these changes; IPCC scenarios don’t really account for tipping-point processes happening as quickly as they are. Countries have no plan to stop producing CO2 or releasing other GHGs, so the IPCC’s business-as-usual scenario will run as long as it can. None of the anticipated CCS solutions are feasible and timely at scale (including planting trees).
By the end of the century:
* the Greenland ice sheet and some or all of the West Antarctic ice sheet will have melted
* the methane hydrates of the [ESIS] in the Arctic will have dumped their gas load
* the permafrost across the high latitudes will be either melted or refreezing in a mini ice age
* the Amazon will have long since disappeared in drought and lightning fires
* several large heat waves will have hit the tropical latitudes, killing every mammal outdoors (not wearing a cooling jacket) within several hours
* there won’t be significant land or ocean sinks for CO2
* tropical temperatures will be unlivable without cooling technologies
* the sixth great extinction will be over
* at least one famine will have hit all countries around the world simultaneously
I personally believe that climate change is now self-amplifying. We can slow the rate by removing anthropogenic forcings of global atmospheric heating, but if we are late in doing that, then we have already lost control of the heating rate to intrinsic feedbacks. I don’t know how far along that self-amplification is now. I do know that between the release of frozen GHGs, the destruction of CO2 sinks, and the loss of stratocumulus cloud cover, the Earth can take us past 6°C of warming. [GAST increase]
Today’s problem lies with the situation and human psychology. Obvious solutions are unpalatable.
First, you can’t point at plenty, predict it will all be gone in a few decades, and then ask people to deprive themselves of that plenty. We don’t choose voluntary deprivation for the greater good based on theories or science.
Second, the problem with nonlinear changes in climate conditions and Earth’s habitability is that we cannot conceive of them as real. But they are real. Would people rather die than give up hamburgers? Maybe not, but if we wait until that seems like a real decision to make, it will be too late. By the time the signal from climate change is so strong that everyone is terrified, and willing to do something like give up hamburgers, giving up hamburgers will no longer help. Instead, the consequences of raising all those cows will come knocking.
Finally, the consequence of climate change is not our instant extinction. Instead, humanity will go through a drawn-out, painful withering of quality of life against increasing harms from climate events, social upheaval, and dwindling resources. That situation will erode heroic efforts and noble causes, extinguishing hope as frustrating obstacles mount for any organized effort to stop climate change.
I think human society in the developed world just hasn’t felt the climate change signal yet, and isn’t really ready to face the problem until it does. By then it will be too late to do much of anything about climate change. I used to think “too late” meant 2060, about when we would realize that CCS solutions were always hypothetical. Now I think it means 2030, the earliest that we might lock in the death of ocean life from multiple anthropogenic forcings, suffer a giant methane bubble from the Arctic, or see massive melt on Greenland. That’s why I think my prediction is correct: we really have less than a decade to push our climate (and biosphere) onto another pathway. All the solutions I listed are how to do it. Does anyone think they look worthwhile?
...
Thank you for reading, if you got this far. This is just a scenario and analysis with a few proposed plausible alternatives. If your counterargument is that we have more electric cars or that solar is cheaper than ever, then you need to explore the problem more carefully.
Thanks Noah, will do! Sorry for the delay. I can’t manage to take a full week off for vacation, so I’m taking five scattered days off this month and today is one of my days off. I’ll try to reply as soon as I can.
Pft, that’s OK, David. Reading over how much I wrote, I’ll be surprised if you get through it all. Thanks for showing some interest, and don’t forget to enjoy some of that vacation time! Bummer it’s split up like that.
Thanks Noah! Yeah, it’s better than nothing but every once in a while it’s nice to just spend a day at home, cook a nice meal and watch a movie.
I really will get back to you. I just need a bit :).
Thanks Noah, and sorry again for the late reply. (Vacation is over, so it’s back to work today).
I’ll do my best to run a series on the singularity hypothesis paper soon! (I’ve got a pretty big backlog, so it might be a few months, but the paper is up on the GPI website if you want to take a look before then: https://globalprioritiesinstitute.org/against-the-singularity-hypothesis-david-thorstad/).
Thanks a lot for your suggestions. I’m very glad that you didn’t get upset with me for making them. I’m … trying to tone things down a bit at the start, and I think these are some of the topics that might cause a bit more controversy. I’m also continually impressed by the ability of EAs to have hard conversations. Maybe it’s time to start on some of these topics.
I’ll read your red-teaming contest submission shortly.
I think your very helpful and honest response about the status of women in the EA community is perhaps a good reason to talk about it: many people just aren’t paying much attention to these issues. See this for the latest public problem statement (https://forum.effectivealtruism.org/posts/t5vFLabB2mQz2tgDr/i-m-a-22-year-old-woman-involved-in-effective-altruism-i-m), though there’s a fairly long history of issues going back a few years, many of which received much poorer responses from the community.
I think your point about epistemic status statements is an important one that I should fold into the discussion of credence stating. I have a suspicion that just stating epistemic statuses may not be enough to secure good epistemic standing for a literature largely founded on blog posts and forum posts (and that it really would be better to have a higher proportion of published work). I’ll see what I can do to write something up about that, again conscious of the irony that I am typing these words in a forum post about my blog.
Thanks for being patient with me Noah! I enjoyed this discussion. (I’m going to be checking the EA forum less in the coming weeks, since I’m not always a regular here, but I’ll try to check back when I can).
Sure, you’re welcome, one day is not long for me to wait. My thoughts:
I’m interested in your thoughts on the singularity, and am looking forward to reading your article.
My red-team submission needs better arguments, more content, and concision.
As far as the status of women in the community: if this is about social behavior, then I favor dissolution of the social-community version of EA.
In case you follow up a bit more on the idea of cognitive aids.
Here are my two takes on epistemic status:
how EA’s do it
how I do it in my daily life (I hope)
I am working on a write-up that addresses climate change impacts differently than Halstead, but progress is slow because my attention and time are divided. I will share the work once it’s complete.
Thanks Noah! Please do share.
Oh, I do! :)
On most topics relevant to this forum’s readers, that is. For example, I haven’t found a good conversation on longevity control, and I’m not sure how appropriate it is to explore here, but I will note, briefly, that once people can choose to extend their lives, there will be a few ways that they can choose to end their lives, only one of which is growing old. Life extension technology poses indirect ethical and social challenges, and widespread use of it might have surprising consequences.
Cool. I’m interested to read some of these.
Hot take: I think you should change the name.
Current name has several issues:
(1) Confusing: based on the name alone, I’d expect the blog to contain very fundamental criticism of EA ideas or community, rather than criticism that is pretty well in the spirit of the enterprise. I’d also expect it to be more hostile than is justified by values or epistemic norms I share.
(2) Bad associations: I’ve seen the phrase “ineffective altruism” a few times before. All the examples I can remember were in the context of low-quality criticism and vibe-based sniping on Twitter.
(3) Hostile to a key audience: one of your most important audiences probably won’t like the name much. If you’re trying to have a good discussion, it’s usually a bad idea to open by suggesting your interlocutor is misguided or “ineffective”. [1]
The combination of (1)-(3) probably explains why this post doesn’t already have 100 karma (83 at the time of writing). I’d guess the name will reduce engagement and sharing going forward by at least 10% compared to a neutral name like “Thorstad’s blog”.
Thanks again for starting this. I will follow along with interest.
[1] Unfortunately the name “effective altruism” also does this, because it’s dunking on the foil (regular altruism).
Thanks Peter! Your feedback on the name is much appreciated.
Please do let me know what you think of future posts. It’s always good to hear from interested readers.
Thanks David. I might be missing something here, but it seems like this is more a criticism of longtermism than of EA as a whole. I’ve only read your summary, but I can’t find criticisms of core EA principles and short-termist stuff. Perhaps the title should reflect that?
Thanks Nick! A lot of my skepticism does concentrate on longtermism. I think that short-termist EA has often done a lot of good, and that’s one of the things I like about effective altruism. I will try in the future to say some things that aren’t confined to longtermism. In particular, so far I’ve been talking entirely about views held by effective altruists, but I’d also like to have some discussions of practices within the movement, and those practices often cut across short- and long-termism.
Within short-termist EA, one way to think about why even a broadly consequentialist, science-loving academic might have some lingering doubts would be to ask who benefits vs. who is harmed, and who is represented or heard vs. who is not. I think that the forthcoming Adams et al volume does a really good job bringing out some of these questions. I’m holding off on discussing it because the release date is a few months off, and it doesn’t do much good to discuss a book that nobody can buy yet. But I’m very much looking forward to talking about these sorts of issues.
This was very interesting, thank you! I wish someone would make an interactive model online where you could just plug in the relevant parameters, such as the amount of existential risk in the future, the length of the time of perils, how much likelier you think extreme bliss scenarios are than extreme suffering risks, and so on, and see how much value the future holds. I’m curious about the weakest assumptions you need to show that longtermist interventions are cost-effective. Tarsney’s Epistemic Challenge to Longtermism paper was pretty good at this.
Thanks Emre! I think Sascha Cooper is working on something like the model you mention. I forget what exactly goes into his model, but I’m quite sure he has parameters for the time of perils.
I’m also a big fan of Tarsney’s paper. We were colleagues until very recently. Fingers crossed that the paper finds a good home soon.
GPI is editing an (open-access) anthology on longtermism. We’ll have a paper by Elliott Thornley and Carl Shulman about whether some longtermist interventions can be justified based on standard cost-effectiveness metrics even if you don’t care about future people. Hopefully that will help a bit with your question about weakest-possible assumptions for cost-effectiveness of longtermist interventions. I hope we can get the anthology out in 2023, but it might take until 2024.
Thank you for sharing this, I’m glad to learn some people are working on these. Excited for these projects.
Hi David,
I think it would be nice if you crossposted your posts to the EA Forum. I only found out about your blog now, and am finding it really good!
Thanks Vasco! Much appreciated, and you’re not the first to say so.
I’m going to post a blog update next month, and I’ll try to post some updates going forward after that.
I try to make sure I have enough independence from EAs that I can speak my own mind without having to change my views or how I say things, or what I assume as background and so on. That means I don’t usually want to directly post to the EA Forum, but I’d be thrilled if someone else wanted to crosspost some/any/all posts.
David
Is there any particular background you expect your readers to have that typical EA readers lack?
Often, to be honest, it goes the other way. The average engaged EA knows a tremendous amount about EA, whereas many educated readers (including academics, who are another key part of my audience) know relatively little.
I guess one key audience of mine is academic philosophers. This audience often wants to see discussions of philosophical issues in population ethics, decision theory, and the like at a level that assumes quite a high level of background (often, alas, more than I have!).
I think in practice I often don’t provide the second audience (academics, especially philosophers) with as much content as I’d like for them, and I’m trying to do what I can to grow my audience a bit more evenly.
I don’t think either of these should really be a barrier to posting on the forum. There is a lot of introductory content here that restates what people already know—they can always just skip over those sections—and likewise there is a lot of technical material that is inaccessible to the median reader—and that is also fine!
I know the Billionaire Philanthropy series is just starting, but so far it’s not clear how it is going to be EA-focused. “SBF was a problematic philanthropist even setting aside the fraud” isn’t that instructive, unless you can show the whole situation reflects on EA as an idea, or at least the EA community, at some fairly fundamental level.
“Billionaire philanthropy is problematic” is an interesting idea, but it’s unclear how it applies more to EA billionaires than to billionaire philanthropists as a whole. To the extent that you’re going to suggest removing some tax incentives for philanthropy by the uber-rich, I think you’re going to need to address the effectiveness of giving the US government and other governments more money: not based on the programs you would most like them to enact with a little extra cash, but based on what they are likely to actually do with it. I can make almost any tax policy change look good or bad by changing what the government does (or gives up) on the other side of the ledger.
Still on billionaire philanthropy, regarding Question “6. Permissible donor influence”: it’d be interesting to consider not only how depending on a small, concentrated set of donors may pose a risk of undue influence, but also how it creates a problem of “few points of failure”.
a) With the FTX collapse, the crypto financial crisis, and low tech stock prices, EA suddenly appears to be more funding-constrained than a year ago, and needs to manage reputational risks, right after having made great plans when people thought there was a “funding overhang”.
b) SBF actually made our major sources of funding appear less concentrated: we went from “relying mostly on Open Phil” to ”… also on FTX.”
Agreed! Will do.
Thanks Jason—very helpful. I think that to a large extent you are right: concerns about billionaire philanthropists are not unique to the EA movement. In the United States, these concerns go back at least as far as the Rockefeller Foundation, which was met with so much skepticism that the US Congress refused to allow it to be established. Grappling with billionaire philanthropy is something that many of us must do. (How should we feel about the Koch Brothers?). In this respect, I hope that EAs can learn from the reflection on billionaire philanthropy that has taken place in other circles.
But I think one reason why EAs in particular need to think carefully about the role of billionaire philanthropy is that EA was not always funded by billionaires. In the early days of earning to give, EA was rather poorly funded, and members were encouraged to work for high salaries and donate what they could to the movement. Matters changed when the billionaires came. Now there wasn’t much need for earning to give. But billionaire philanthropy also raised some problems.
One problem is the role of donor discretion. In theory, EAs are committed to giving to the highest-impact causes. But in practice, money is guided by the wishes of leaders and high-profile donors. SBF established the FTX foundation, which had definite views about what should be funded, and what should not, and used its wealth to push those views on the community. Was FTX funding the most valuable causes? Or was there perhaps an outsized influence of SBF’s own beliefs, and the beliefs of those closest to him?
Another challenge is that EA didn’t merely take money from SBF. EA played a large role in the rise of SBF, and helped to shape his public image after he became wealthy. From this perspective, EAs can’t simply step back and say: “well, it’s too bad that there are billionaires, but what are we to do? Turn down their donations?” After all, EAs helped to make this man a billionaire.
There are also some questions about the special obligations that billionaires might incur by virtue of what they did to get wealthy. For example, suppose that many Silicon Valley billionaires (especially cryptocurrency billionaires) made their money in industries that contributed a great deal to global warming. Can they then turn around and refuse to fund climate mitigation efforts on the grounds that other efforts are more important? Or might they have an obligation to first undo the harms they caused on their way up the ladder?
Were there any issues about billionaire philanthropy that you found especially interesting (or uninteresting) as subjects for future discussion?
I think you’re right that influence/control by megadonors is a thing. But I think almost all ways of funding charitable work have funding-source problems, so I would be interested in seeing more about whether (and if so, why) you think the funding-source problems from billionaires are worse for a charitable movement than the alternatives.
At the outset, I should note I am not a longtermist, and so criticisms that apply to all EA cause areas will be more salient/interesting to me as a reader than ones that depend on your assessment of current longtermist initiatives.
Perhaps something like earning to give, from a small army of mid-size donors (e.g., mid-six to low-seven figures a year), is the best funding source. There’s a strong argument that EA should not have diminished the value of EtG as a role. But the fact remains that Moskovitz funneled more to charities through GiveWell recommendations than everyone else combined, per the 2021 metrics report. Another 18 donors gave about half the remainder, averaging about $7MM each. For a strong believer in GiveWell-type work, that’s a lot of impact to give up by forswearing Big Money. Also, if your movement only has a limited supply of foot soldiers, nudging a number of your best and brightest into working on the supply/logistics chain, no matter how critical that role is, necessarily trims the number you can deploy to the front lines doing direct work.
Relying on governments for funding has its own set of problems, and relying on hundreds of thousands of low-engagement, small-dollar donors requires you to target your efforts to what the general public will give toward, which often has nothing to do with effectiveness. The Cynic’s Golden Rule (he who has the gold makes the rules) has nearly universal application to charity work. But on the whole, are there good reasons to think megadonors are worse taskmasters than the alternatives?
I am less convinced than many people here that EA can regularly create billionaires, but am open to changing my mind. So I’m personally less interested in the “EA helped create SBF” angle unless it can be tied to some warning or lesson for the future. Absent that, it sounds like a story about a few EA-aligned individuals who made an error in judging character (or perhaps turned a blind eye to yellow or even red flags) on a specific individual, rather than a particularly important story about the nature of the EA community itself.
I think the “special obligations” discussion is interesting. On the intellectual side (but slipping into theological metaphors), it’s not entirely clear to me why, for instance, the penance for one’s sins against the climate has to be repaid in climate-related donations, if some other form of penance would be more useful to humanity (especially to the most disadvantaged). Of course, you might feel climate work is the best use of charitable money, but each billionaire’s trail of collateral damage will be different, and there’s no a priori reason to think that the specific penance for the damage caused by a specific billionaire will ordinarily have greater-than-average social utility as a place to send donations. I can see why there might be an obligation to pay the penance to the benefit of the group of people the billionaire harmed, but I am having a hard time understanding why it must be repaid by ameliorating the specific way in which the billionaire harmed those people, as opposed to meeting their other interests. Moreover, from a distributional perspective, a billionaire’s collateral damage may tend to be localized to his or her high-privilege geographical area, and I worry about a principle that would often imply that one must first meet a moral duty toward relatively privileged people in the U.S. before caring about the global poor.
But the more practical question about “special obligations” is what the idea means for a charitable movement; I don’t think any of your readers are likely to be billionaires. A movement doesn’t have any real leverage over the megadonor, and I don’t think there are any good ways to fix that. Are all the charities in the world supposed to refuse to take money from Polluter Paul until he first donates a suitable amount for pollution remediation? If you really believe your charitable movement does a lot of good for the world, it’s morally costly to tell Polluter Paul to go give to the opera houses instead for his reputational boost, because they will take his money without asking any questions.
Even worse (using a GiveWell-type framework because that’s my cause area), that moral cost is not borne by me. It’s mostly borne by small children in Africa—perhaps hundreds of thousands of them if the donation is big enough—who will die because I told Polluter Paul his money was too dirty for me. While I don’t cleanly identify as a utilitarian, that is a bitter pill to swallow.
Finally, you write about “rewriting the rules to make sure that philanthropic influence is used fairly, effectively, and in a way that does not disempower ordinary citizens.” I am interested in hearing more about that, and particularly in why you think it can be done in a way that doesn’t disincentivize would-be billionaire philanthropists and push them toward just buying a mega-yacht and a professional sports team instead. Unless, of course, you feel billionaire philanthropy is a net negative for the world and should be discouraged, even though that would mean considerably less philanthropy overall.
“SBF established the FTX foundation, which had definite views about what should be funded, and what should not, and used its wealth to push those views on the community. Was FTX funding the most valuable causes? Or was there perhaps an outsized influence of SBF’s own beliefs, and the beliefs of those closest to him?”
I don’t think there is evidence that Dustin Moskovitz, Cari Tuna, or SBF had an outsized influence on the cause areas that the EA community worked on. Looking at what was discussed in the community before and after these donors came in, I can’t see much difference. The ideas of AI safety, longtermism, animal welfare, and global health are pretty old. I’m sure SBF had his own opinions on specific matters and some influence over ways to evaluate different projects, but I’d guess the overwhelming majority of the projects funded by SBF would still be funded by the EA community if there were enough resources. My guess is many of the projects initially funded by FTX will still be funded by other donors in the community.
The key words there are “if there were enough resources.”
As a practical matter, what EA does is inevitably and heavily influenced by what gets funded. That, and what people think will get funded, influences what gets talked about at conferences, what areas new EAs go into, and so on. And, for the most part, what gets funded is ultimately up to a few people and their delegates.
Imagine a world in which SBF existed (in non-fraudulent form) and the FTX Animal Fund was handing out $150MM a year to animal-welfare organizations and peanuts to longtermism. I’d suggest that EA would already look significantly different than it did in October 2022, and would have looked even more significantly different in October 2027.
I don’t think the original poster is wrong that megadonors have an outsized influence on which cause areas EA is doing significant work in.
Also, one potential EA-focused topic on billionaire funding is the particular risk posed by certain “meta” funding. Some of the benefits of that funding accrue in part to insiders (most people like going to fancy conferences in $15MM manor houses), with the idea that the spending will ultimately achieve more for EA’s goals than spending the money on direct work would. The benefits of much meta funding are too diffuse and indirect to be captured by GiveWell-style analyses or to be readily evaluated by non-insiders. I’m not sure whether other charitable movements pay so much attention to meta, but none I’m aware of do so as explicitly.
I suggest that there is a particular potential problem with billionaire funding of certain sorts of meta work that does not exist with billionaire funding of direct work (e.g., bednets, AI safety fellowships, etc.). I speculate that most billionaires lack the motivation to monitor and evaluate the effectiveness of meta work: it’s too complex, and each individual spend is pretty small in the billionaire’s mind. Of course, the billionaire may be relying on delegates to evaluate all of their grantees anyway. But the potential problem is that the billionaire is relying on insiders to evaluate the effectiveness of the meta work, and insiders may have a bias in favor of that work.
I don’t fund meta work (with my donations as a public-sector attorney not in EA...) as a general rule, because I do not feel qualified to assess its value. But if I were a billionaire, I would probably require a “community co-pay” before giving to certain sorts of meta work. For example, I might only match funds (up to a certain point) that small/medium donors contributed specifically for conferences. Since money is fungible, I’d be using the community’s willingness to pay for this expense, rather than donate more to effective charities, as an information signal about how valuable the conferences actually were. And with the community’s skin in the game, I’d have more confidence in its capacity to police whether conference money was being spent wisely than in my own. Such a practice would also encourage what we might call intra-EA democracy: the decision about how much to fund conferences would no longer depend predominantly on my judgment but significantly on the judgment of a number of rank-and-file EAers as well. I would submit that is a feature, not a bug.
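The co-pay above is just a matching rule, and it can be sketched in a few lines. This is only a toy model of the idea; the match rate and cap are hypothetical numbers, not anything proposed in the thread:

```python
def billionaire_match(community_dollars: float, match_rate: float = 1.0,
                      cap: float = 500_000) -> float:
    """Toy 'community co-pay' rule: the megadonor matches small/medium-donor
    contributions to meta work (e.g., conferences) at match_rate, up to cap.
    Beyond the cap, extra community dollars draw no further match."""
    return min(community_dollars * match_rate, cap)
```

So if the community raises $200k for conferences at a 1:1 match, the megadonor adds $200k; if it raises $700k, the match is capped at $500k. The information signal comes from the community's side of the ledger, not the billionaire's.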
Ah right, good point! I’ll try to focus more on meta funding. You’re definitely right to be suspicious of this (hard to monitor; people have bad incentives; looks like we’re spending an awful lot on it now). I’ll see what I can say about this, and please do keep thinking about this if you have more thoughts. I like your suggestion of a co-pay.