You want more good and less bad in the world? Would it be better if we had a little more good and a little less bad? If so, then we should care about the efficiency of our efforts to make the world better.
*Note that by “efficiency” I of course mean something broad like Pareto efficiency, not the narrow notion of efficiency we use every day. You could also say “effective,” but the question asked why giving should be effective, and we can ground effectiveness in Pareto efficiency across all the dimensions we care about.
I’ve been pretty skeptical that mental health is something EAs should focus on. One thing I find lacking in this report (apologies if it’s there and I didn’t find it) is a way of comparing mental health to alternatives: I don’t think anyone questions that mental health is a source of suffering; the question is whether it compares favorably to other issues.
For example, I’d love to see something like a QALY analysis of mental health that would let us compare it to other cause areas more directly.
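To make concrete what I mean, here’s a minimal sketch of the kind of back-of-the-envelope comparison I have in mind. Every number below (disability weights, durations, costs) is a made-up placeholder, not a real estimate; the point is only the shape of the calculation.

```python
# Toy cost-effectiveness comparison in dollars per QALY.
# All figures are illustrative placeholders, NOT real estimates.

def qalys_gained(disability_weight_averted: float, years: float, people: int) -> float:
    """QALYs gained = disability weight averted x duration x people helped."""
    return disability_weight_averted * years * people

# Hypothetical mental health program: therapy that partially relieves
# depression (averting, say, 0.3 disability weight) for 2 years each
# for 1,000 people, at a total program cost of $500k.
mh_qalys = qalys_gained(0.3, 2, 1000)
mh_cost = 500_000

# Hypothetical comparison program in some other cause area.
other_qalys = qalys_gained(0.2, 10, 1000)
other_cost = 400_000

print(f"Mental health: ${mh_cost / mh_qalys:,.0f} per QALY")
print(f"Comparison:    ${other_cost / other_qalys:,.0f} per QALY")
```

With numbers like these in hand, the two cause areas become directly comparable on a common scale, which is exactly what I find missing from the report.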
Having lived with someone who suffered from chronic kidney stones, I can say that, at least within the US, a huge problem in recent years has been the overreaction to the so-called opioid crisis. The result has been a decreased willingness to actually treat what we might call chronic acute pain, like the kind that comes from kidney stones.
This is a somewhat technical distinction I’m making here. Kidney stone pain is acute in that it has a clear cause that can be remediated. However, if someone produces kidney stones chronically (let’s say at least one a month), they are chronically in acute pain. This creates a problem: standard treatment protocols for chronic pain don’t always work, because this is a continuous level of pain above what’s normally experienced by chronic pain sufferers, perhaps with the exception of migraines. And since migraine pain is best treated with non-opioid drugs, migraine sufferers don’t run into the same problems as chronic kidney stone sufferers, who need repeated access to opioids to deal with pain that can break through maintenance pain medications.
The result is people with treatment-resistant chronic kidney stones left in agony because of restrictions on opioid use in the name of curbing abuse. To make matters worse, treatment can become a catch-22: chronic pain doctors won’t treat such pain because it’s “acute,” and at some point other doctors stop wanting to treat repeated kidney stones because they’re “chronic.” The incentives are aligned perfectly to get doctors not to treat these patients, since they risk losing their licenses for improperly prescribing opioids. It doesn’t matter whether a prescription is valid; all that matters is that it looks suspicious in a database, and doctors would rather avoid that attention than risk it to treat patients. (Of course, not all doctors are like this; it’s just that a lot of them follow the incentives rather than work against them in the name of patient care.)
Regarding the difference in prevalence between chronic pain in men and women: there’s a tendency, at least within the US medical system, to dismiss women’s pain more often than men’s. A good example is pain resulting from endometriosis, which is often dismissed or downplayed by doctors as “just bad period cramps” rather than treated as a serious source of chronic pain. So too for many other sources of pain unique to women.
I don’t have a source, but my experience is that most of this seems to be due to a variant of the typical mind fallacy: male doctors and some female doctors have never experienced similar pain, so they fail to appreciate its severity, sympathize with it less on the margin, and are more likely to recommend conservative treatment rather than aggressively try to remediate the pain.
My model is that the global angle is kind of boring: the drug war was pushed by the US, and I expect that if the US ends it, other nations will either follow its example or at least drift in random directions, with the US no longer imposing the drug war on them by threat of trade penalties.
I think this starts to get at questions of tractability, i.e. how neglected this is contingent on tractability (and vice versa). In my mind this is one of the big challenges of any kind of policy work where there’s already a decent number of folks in the space: you have to have reasonably high confidence that you can do better than everyone else is doing now (not just that you have an idea for how to do better, but that you can actually succeed in executing better) in order for it to cross the bar of a sufficiently effective intervention (in expectation) to be worth working on.
I would expect this not to be very neglected, so I would expect EAs to have much impact here only if, for example, it’s effectively neglected because the existing people pushing for an end to the drug war are unusually ineffective.
For example, NORML has already been working on the cannabis angle of this since the 1970s with decent success, Portugal has already ended the drug war locally, and Oregon recently decriminalized possession of drugs for personal use.
Getting involved feels a bit like getting involved in, say, marriage equality in the 2000s: the change was already clearly in motion and plenty of people were working to push for it, so it’s not clear EAs could have brought much additional to the table.
On the one hand, I’m in favor of more housing. I live in the SF Bay Area, where this is also a problem (really, insufficient housing is a problem for all of California), so I’m naturally supportive of efforts to address it. However, I’m not sure this project is a high priority for EAs.
This seems like something that’s not especially neglected (lots of people are thinking about ways to improve the housing situation in American cities) and also unlikely to have high impact in relative terms (viz. globally rich Americans are not, in expectation, suffering as much from expensive, limited housing in desirable cities as the global poor, animals, or far-future beings). Cf. the ITN framework for why I’m thinking about these criteria.
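For reference, the rough factoring of the ITN framework I have in mind is something like the following (a sketch along the lines of 80,000 Hours’ presentation; exact units vary between write-ups):

$$
\frac{\text{good done}}{\text{extra person or dollar}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra person or dollar}}}_{\text{neglectedness}}
$$

The units telescope, so the product is just marginal good done per extra resource; my worry above is that housing in American cities scores poorly on the first and third factors.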
I think it would be hard to convince me this is working on something neglected, but I’m pretty open to the idea that I might be wrong about impact, especially if better housing in American cities is somehow on a critical path to other, more obviously higher impact projects. I’d be interested if there are better arguments for why this is impactful enough to be prioritized over other, more obviously high impact causes.
One, I’d argue that hits-based giving is a natural consequence of working through what using “high-quality evidence and careful reasoning to work out how to help others as much as possible” really means, since that statement doesn’t say anything about excluding high-variance strategies. For example, many would say there’s high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long-term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.
Two, it’s likely a strategic choice not to be in-your-face about high-variance giving strategies, since they are pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.
I do like the idea of being able to construct an experiment to test naturalism. I suspect it’s mistaken, in that I doubt there are any facts about what is right and wrong to be discovered, by observing the world or otherwise. But currently I and anyone else who wants to talk about metaethics is forced to rely primarily on argumentation, so being able to run an experiment using minds different from our own seems quite compelling as a way of testing a variety of metaethical hypotheses.
I’m also somewhat concerned because this seems like a clear case of a dual-use intervention: it makes life better for the animals, but it also confers benefits on the farmers that may ultimately result in more suffering rather than less, for example by making chickens more palatable to consumers as “humanely farmed” (I’m guessing that’s what is meant by “humane-washing”) or by making chicken production more profitable (either through humane-washing or through chickens that produce a better-quality meat product that is in higher demand).
I can’t seem to find the previous posts at the moment, but I have the sense that this is not an isolated issue and that ACE has some serious problems, given that it draws continued criticism not for its core mission but for the way it carries that mission out. Although I can’t remember at the moment what that other criticism was, I recall thinking “wow, ACE needs to get it together” or something similar. Maybe it has learned from those things and gotten better, but I notice I’m developing a belief that ACE is failing at the “effective” part of effective altruism.
Does this match what others are thinking or am I off?
I’ll note that I used to have some reservations but no longer do, so I’ll answer about why I previously had reservations.
When EA got interested in what we now call longtermism, it didn’t seem obvious to me that EA was for me. My read was that EA was about near concerns like global poverty and animal welfare and not far concerns like x-risk and aging. So it seemed natural to me that I was on the outside of EA looking in because my primary cause area (though note that I wouldn’t have thought of it that way at the time) wasn’t clearly under the EA umbrella.
Obviously this has changed now, but hopefully this is useful for historical purposes, and there may be folks who still feel this way about other causes, like effective governance, that are, from my perspective, on the fringes of what EA is focused on.
> “Effective Altruism” sounds self-congratulatory and arrogant to some people:
Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it’s appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there’s something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.
Running with the valley metaphor, perhaps the 1990s were when we reached the most verdant floor of the valley. It remains unclear if we’re still there or have started to climb out and away from it, assuming the model to be correct.
> The people I know of who are best at mentorship are quite busy. As far as I can tell, they are already putting effort into mentoring and managing people. Mentorship and management also both directly trade off against other high value work they could be doing.
>
> There are people with more free time, but those people are also less obviously qualified to mentor people. You can (and probably should) have people across the EA landscape mentoring each other. But, you need to be realistic about how valuable this is, and how much it enables EA to scale.
Slight pushback here: I’ve seen plenty of folks who make good mentors but who wouldn’t be doing a lot of mentoring if not for systems in place to make that happen (they stop doing it once they aren’t within whatever system was supporting their mentoring). This makes me think there’s a large supply of good mentors who just aren’t connected in ways that help them match with people to mentor.
This suggests that a lot of the difficulty with having enough mentorship is that the best mentors need not only to be good at mentoring but also to be good at starting the mentorship relationship. It seems, though, that plenty of people can be good mentors if someone does the matching for them and creates the context between them and the mentees.
On a related but different note, I wish there were a way to combine conversations on cross-posts between EA Forum and LW. I really like the way the AI Alignment Forum works with LW and wish EA Forum worked the same way.
I often make an adjacent point to folks, which is something like:
EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn’t work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded-out basket of altruistic “goods”.
Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we’d ultimately all be pretty disappointed to have a mountain of amazing saltine crackers and literally nothing else. So even in the world where saltines really are the food whose production generates the most benefit, it makes sense to instrumentally produce other things so we can enjoy our saltines in full.
I think the same is true of EA. I care a lot about AI x-risk and it’s what I focus on, but that doesn’t mean I think everyone should do the same. In fact, if they did, I’m not sure it would be so good, because then maybe we’d stop paying attention to other causes that, left unaddressed, would end up making our attempts to address AI risk moot. I’m always very glad to see folks working on things, even things I don’t personally think are worthwhile, both because of uncertainty about what is best and because there are multiple dimensions along which it seems we can optimize (and I’d be happy if we did).
I think it’s worth saying that the context of “maximize paperclips” is not one where a person literally says the words “maximize paperclips” or something similar. Instead, it’s an intuitive stand-in for building an AI capable of superhuman levels of optimization, such that if you set it the task, say via a specified reward function, of creating an unbounded number of paperclips, it will do things to maximize paperclips that no human would, because humans have competing concerns and will stop when, say, they’d have to kill themselves or their loved ones to make more paperclips.
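To illustrate the distinction, here’s a purely toy sketch; `WorldState` and its fields are hypothetical stand-ins of my own, not anyone’s actual proposal.

```python
from dataclasses import dataclass

# Purely illustrative: contrasting an unbounded objective with one that
# encodes competing concerns. WorldState and its fields are hypothetical.

@dataclass
class WorldState:
    num_paperclips: float
    harm_to_humans: float  # stand-in for everything else we care about

def unbounded_reward(s: WorldState) -> float:
    # Reward grows without limit in paperclips, so a sufficiently strong
    # optimizer is pushed to convert ever more of the world into paperclips.
    return s.num_paperclips

def humanlike_reward(s: WorldState) -> float:
    # Humans effectively have bounded interest in paperclips and heavily
    # weighted competing concerns, so optimization stops at sane levels.
    return min(s.num_paperclips, 1_000) - 1e9 * s.harm_to_humans
```

The worry is that the first objective is easy to write down and the second is not, and fixing how the AI parses human language doesn’t close that gap.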
The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the human-language-interpretation issues and we’d still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.
I wrote about something similar about a year ago: https://forum.effectivealtruism.org/posts/Z94vr6ighvDBXmrRC/illegible-impact-is-still-impact