The only reason I don’t identify as longtermist is tractability. I would appreciate a definition that allowed me to affirm that when a being occurs in time is morally arbitrary without also committing me to focusing my efforts on the long-term.
One thing to bear in mind is that even using the weighting scheme I suggested in the post—which seemingly strongly favors young people—would only move the median voter (in the US) from age 55 to age 40.
How do you get this result? Are you saying that with these multipliers applied to the current age distribution of voters, the median US vote would be cast by a 40-year-old? Or is this anticipating the response to the multipliers? For example, does this take into account that young people would probably vote more if their votes counted 6x more?
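Just to make my question concrete, here is a toy sketch of the "static" reading of the claim. All the numbers below are invented for illustration, not actual US turnout data, and the 6x-under-40 weighting is a stand-in for whatever scheme the post proposed:

```python
# Toy example only: these (age, millions-of-voters) pairs are invented,
# not actual US turnout data.
age_groups = [(21, 10), (30, 12), (40, 13), (50, 14), (60, 16), (70, 15), (80, 8)]

def weighted_median_age(groups, weight):
    """Return the age group containing the median of the weighted votes."""
    total = sum(n * weight(a) for a, n in groups)
    running = 0.0
    for a, n in sorted(groups):
        running += n * weight(a)
        if running >= total / 2:
            return a

# Static calculation: apply multipliers to the current turnout as-is.
baseline = weighted_median_age(age_groups, lambda a: 1)  # every vote counts once
boosted = weighted_median_age(age_groups, lambda a: 6 if a < 40 else 1)  # 6x under 40
```

A static calculation like this holds turnout fixed and just reweights the existing ballots. It ignores behavioral responses (e.g., young people turning out more once their votes count 6x), which is exactly the distinction I'm asking about.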
I’m not knocking the overall idea, but I am skeptical that young people will be that much better at resisting short-term political temptations than old people. If young people got huge vote multipliers, politicians would only pander to their weaknesses more. I guess like most people commenting here I have the most faith in middle-aged people. I like the idea of a more gradual tapering up and down of the vote multiplier, but a system that complicated is probably doomed.
Maybe parents should get huge vote multipliers. Seems to me they usually care about the future a lot more than the young people who are on track to outlive them.
I downvoted your comments as well, Milan, because I think this is exactly the kind of thing that should go on the EA Forum. The emergence of this term “longtermism” to describe a vaguer philosophy that was already there has been a huge, perhaps the main EA topic for like 2 years. I don’t even subscribe to longtermism (well, at least not to strong longtermism, which I considered to be the definition before reading this post) but the question of whether to hyphenate has come up many times for me. This was all useful information that I’m glad was put up for engagement within EA.
And the objection that words can never be precise is pretty silly. Splitting hairs can be annoying but this was an important consideration of meaningfully different definitions of longtermism. It’s very smart for EA to figure this out now to avoid all the problems that Will mentioned, like vagueness, when the term has become more widely known.
It sounded like your objection was that this post was about words and strategy instead of about the concepts. I for one am glad that EA is not just about thinking but about doing what needs to be done, including reaching agreement about how to talk about ideas and what kind of pitches we should be making.
An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs. This rules out views on which we should discount the future or ignore the long-run indirect effects of our actions, but would not rule out views on which it's just empirically intractable to try to improve the long-term future.
I’ve referred to this definition as “temporal cosmopolitanism.” Whatever we call it, I agree that we should have some way of distinguishing the view that the time at which something occurs is morally arbitrary from a view that prioritizes acting today to try to affect the long-run future.
FWIW, I think the young lacking life experience and crystallized intelligence is pretty clutch. This argument rests on the young not only having a greater stake in the future but being able to make sensible decisions about what to do with it. I would at least suggest that 18-25 yo voters not have a multiplier.
I do like reducing the influence of the old who know very well when voting that, for instance, climate change will not really affect them. But I think any vote weighting scheme has to take stakeholding and competence into account.
So you think he’s worried about other people being misled?
You’ve done a good job at reporting the trends in thought and terminology here. I’m not directing the following at you, but at the trend in the field you’re describing.
I’m an evolutionary biologist and I’m tired of people saying r/K has been discredited. I think what really happened is that people realized r/K was a generalization without realizing that every other useful principle in evolutionary biology is also a generalization.
I use r/K parlance and I never get any complaints from the evolutionary theorists and population geneticists around me. It’s just a heuristic. Would you say the logistic model of population dynamics has been debunked because someone points out that it doesn’t capture every variable that affects population growth? No, because it’s just a model, so that was obvious from the start. Hence I don’t see why people pointing out that there are other dimensions to life history somehow invalidates using the r/K spectrum as a knowing simplification. I’m all for clarifying that r/K is just a heuristic and educating people about the fundamentals of life history theory, but I don’t think the fact that there’s more to it invalidates r->K as a useful dimension.
There’s never going to be a life history theory that’s both 100% accurate and can provide generalizations at the gross level at which we typically consider life history traits. In order to make any useful statements about the relationship between offspring number and life span, for example, we’re going to have to allow for exceptions.
My point is that Ben is in fact able to do whatever legal thing he wants. He doesn’t need to make us wrong to do so. It’s interesting that he feels the need to. Whether EA or Peter Singer has suggested that it’s morally wrong not to give, Ben is free to follow his own conscience/desires and does not need our approval. If his real argument is that he should be respected by EAs for his decision not to give, I think that should be distinguished from a pseudo-factual argument that we’re deceived about the need to give money.
But you seem to be also arguing “you don’t need to justify your actions to yourself / at all”
Kinda. More like “nobody can make you act in accordance with your own true values—you just have to want to.”
If people aren’t required to live in accordance with even their own values, what’s the point in having values?
To fully explain my position would require a lot of unpacking. But, in brief, no—how could people be required to live in accordance with their own values? Other people might try to enforce value-aligned living, but they can’t read your mind or fully control you—hardly makes it a “requirement.” If what you’re getting at is that people **should** live according to their values, then, sure, maybe (not sure I would make this a rule on utilitarian grounds because a lot of people’s values or attempts to live up to their values would be harmful).
Suffice to say that, if Ben does not want to give money, he does not have to explain himself to us. The natural consequence of that may be losing respect from EAs he knows, like his former colleagues at GiveWell. He may be motivated to come up with spurious justifications for his actions so that it isn’t apparent to others that either his values have changed or he’s failing to live up to them. I would like to create conditions where Ben can be honest with himself. That way he either realizes that he still believes it’s best to give even though the effects of giving are more abstract, or he faces up to the fact that his values have changed in an unpopular way but is able to stay in alignment with them. (This is all assuming that his post did not represent his true rejection, which it very well might have.)
“However, effective altruism really is warm and calculating.”
I can’t believe I’ve never thought of this! That’s great :)
Great post, too. I think EA has a helpful message for most people who are drawn to it, and for many people that message is overcoming status quo indifference. However, I worry that caring too much, as in overidentifying with or feeling personally responsible for the suffering of the world, is also a major EA failure mode. I have observed that most people assume their natural tendency towards either indifference or overresponsibility is shared by basically everyone else, and this assumption determines what message they think the world needs to hear. For instance, I’m someone who’s naturally overresponsible. I don’t need EA to remind me to care. I need it to remind me that the indiscriminate fucks I’m giving are wasted, because they can take a huge toll on me and aren’t particularly helping anyone else. Hence, I talk a lot about self-care and the pitfalls of trying to be too morally perfect within EA. When spreading the word about EA, I emphasize the moral value of prioritization and effectiveness because that’s what was missing for me.
EA introduced me to many new things to care about, but I only didn’t care about them before because I hadn’t realized they were actionable. This might be quibbling, but I wouldn’t say I was indifferent before—I just had limiting assumptions about how I could help. I side more with Aaron’s “unawareness” frame on this.
Is this speaking to a concern someone has that terraforming would make a bunch more animals to suffer? What motivated this piece?
From the early sections, I thought you were going in the opposite direction—how already involved EAs can be mindful of their secret motives for being involved. (I think that’s super-important, btw.) For outreach, I would have thought the implication was that we should balance the need to appeal to and accommodate the human need for status against the possibility that EA would get diluted by the attempt to market it in a low-fidelity way. I agree with CEA’s emphasis on the high-fidelity model: there’s no point in growing EA if it stops being EA in the process.
I think there is some very low-hanging fruit EA orgs can pick re: the prestige they can offer recruits. #1 is making sure the name of the organization and the names of positions are as impressive and not-loaded as possible. Foundational Research Institute, for example, went with that title over “The Future of Suffering Institute” because they got feedback from academics that they wouldn’t be able to put that name on their CVs. At Harvard EA, we have multiple named fellowships for students (the undergrad one is the “Arete Fellowship”). There is no reason we can’t call our programs fellowships or name them, even though they are just student club programming. But being able to put “2016 Fellow of the Harvard College Effective Altruism Arete Fellowship” on a resume gives Harvard students the prestige they need to justify spending their time on us. There is a ton of cheap status EA can confer without it costing us anything (it just requires us to contribute to the inflation of terms for volunteering, employment, and awards—I’m not losing any sleep).
Now that I’ve made all these comments, I realize I should have just asked Ben if his post was his true rejection of EA-style giving. My comments have all been motivated by suspicion that Ben just isn’t convinced by arguments about giving enough to give himself, but he feels like he has to prove them wrong on their own terms instead of just acting as he sees fit. (That’s a lot of assumptions on my part.) If that particular scenario happens to be true for him or anyone reading, my message is that you are in charge of these decisions and you don’t have to justify yourself to EAs.
The broader issue that concerns me here is people thinking that the only way to do the things they want, the things that make them happy, is to convince everyone else that those things are objectively right. There are a lot of us here with a perilously high need for consistency. When we don’t respect personal freedom and freedom of conscience, people will start to hijack EA ideas to make them more palatable for themselves without having to admit to being inconsistent or failing to live up to their ideals. This happens all the time in religious movements.
I can’t promise Ben that no one will judge him morally inferior for not giving. But I can promote respect for people in the community feeling empowered to follow their own judgment within their own domains. EA benefits from debate, but much more so if that debate is restricted to true rejections and not coming from a need for self-justification. Reminding people that all EA lifestyle decisions are choices is thus a means of community epistemic hygiene.
Singer says it’s wrong to spend frivolously on ourselves while there are others in need, but he doesn’t say it should be illegal. He also doesn’t give any hard and fast rules about giving, and he doesn’t think people who don’t give should be shamed. He simply points out how much more the money could do for others, each of whom matters as much as any of us.
I just get the feeling that Ben isn’t comfortable doing what he wants or what he thinks would make most of us (wealthy people) happier without getting us to agree with him first that it’s what everyone should do. I want to remind him that what he does within the law is his prerogative. We don’t have to be wrong for him to do what he wants. If he just wants to focus on himself and his loved ones, he doesn’t have to convince us that we’ve filled every funding gap so our ideas are moot and he’s still a good person despite not giving. He’s already free to act as he sees fit. The last thing he needs to do to feel in charge of his own life and resources is attack EA.
I say this all because that line about focusing on your loved ones and doing “concrete” things made me suspect that that desire might have motivated the whole argument. In that case, we can avoid a pointless argument of dueling back-of-the-envelope estimates by pointing out that EA doesn’t have to be wrong for Ben and others like him to do what they want with their lives.
I could be wrong and the post could represent Ben’s true rejection. In that case, I’d expect to hear back that he is doing what he wants, and what he wants depends on the frequency of drowning children, which is why he’s trying to figure this out.
As I commented on Ben’s blog, I just think it bears mentioning that we’re allowed to focus on our own lives whether or not there are people who could use our money more than us. So if anyone were motivated to undermine the need for donations in order to feel justified in focusing on themselves and their loved ones, they needn’t do it. It’s already okay to do that, and no one’s perfectly moral. Maybe if you don’t feel the need to prove EA wrong before taking care of yourself, you’ll want to return to giving or other EA activities after giving yourself some TLC, because instead of feeling forced, you’ll know you want to do these things of your own free will.
I’d like to propose another group that shouldn’t donate: people with a predisposition to conditions that require treatment with medication that is hard on the kidneys.
I’m really glad I didn’t try to donate my kidney a few years ago before I knew I would need to be taking a med (probably for the rest of my life) that can cause serious renal damage. In fact, kidney damage is a major reason people have to go off this drug and often they don’t find an equivalent cocktail for dealing with the disease symptoms.
I imagine getting treated with any brutal medication is harder with one kidney. I hope this is something discussed with altruistic donors, but I never hear about it. I only hear about how you’d be higher up on the transplant list if you had kidney disease, and that that’s an advantage because most kidney disease would have hit both kidneys (were they there) anyway. But that makes me imagine disease arising within the kidney or the body, not kidney damage due to treating other conditions.
People who are doing direct work, if they expect three weeks of their work to produce more QALYs than donating.
It may be worth considering whether the enforced rest from donating a kidney would have some of the benefits of taking a vacation for you.
This could be turned into a searing satire of EA. “Earn a rest from the work that’s too marginally impactful to pause for a few weeks by donating a kidney. To you, post-surgical recovery will seem like a vacation!”
The real goal you seem to be advancing, Milan, is spirituality, not psychedelics per se. Based on testimony from people I trust and some slightly dubious research, I think psychedelics can likely be helpful in that, but they shouldn’t be our frontline tool. I think meditation is a much better candidate for that.
Sam Harris and Michael Pollan argue that psychedelics are useful for convincing people there’s a there there, and that makes sense to me. You have to put a lot of time and blind effort into meditation to get that same assurance. But the struggle, and particularly “asking” for deeper wisdom through your faithful efforts, is a really important part of spiritual realization according to most traditions (and in my personal experience). Based on what I’ve read (haven’t taken them), I don’t think taking psychedelics often does the trick on its own.
And there are many downsides to psychedelics. People who don’t know how mentally unstable they are may take them and be thrown badly off-kilter. Bad trips are harrowing and can reach unimaginable heights of terror. I don’t think most people have the slightest clue how deeply and completely their minds could torture them. Even if people are one day grateful for what they’ve been through (as I am now with my mental illness), I would not knowingly inflict that risk on people when there are gentler ways. Even intense meditation can have these destabilizing effects, but psychedelics are much more potent, can’t be stopped on demand, and can be wielded by totally unskilled people. My guess is that the most common harm comes from tripping habitually out of sensation-seeking rather than humbly to gain self-insight or wisdom. Again, this can happen in meditation, too, but it’s a lot less likely. When you add in all the infrastructure necessary to mitigate these risks, like comprehensive mental health screenings and guides and practice sessions, doing psychedelics right doesn’t seem that much easier than a meditation retreat, and it doesn’t teach you any skills. The advantage of psychedelics at that point is speed and the guarantee that some experience of altered consciousness will take place, which is not nothing, but all this safety equipment undercuts the elegance of “just taking a little pill” that proponents have harped on.
Psychedelics could be a more EA-style intervention than meditation (if either of them qualify) because pills are scalable, but creating a safe environment with skilled guides is a lot less so. Meditation can be taught by one teacher to many people in parallel with much less equipment. It can even be taught pretty well through apps. Meditation takes longer to reach the experiences/insights psychedelics throw up in your face, but they are more digestible through meditation and insight alone is insufficient for most people to transform their lives—the vast majority also need skills like equanimity acquired through practice.
Psychedelics probably have a role to play, but I do not think they are the magic bullet proponents claim they are. They come with serious dangers, and mitigating those dangers undercuts their scalability, which was imo their biggest EA selling point. Safer alternatives, the vast array of meditative schools and techniques, exist. Psychedelics have some advantages over traditional meditation—speed and guaranteed action—but they are no panacea. My best guess is that they should be a targeted prescription for certain roadblocks on the spiritual path.