I’m not especially familiar with the history. I came to EA after the term “longtermism” was coined, so that’s just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored, with the low-hanging fruit already picked. In other words, you seem to think that old → not neglected, and that does not follow. I don’t know how old the idea of longtermism is. I don’t particularly care. It is certainly older than the word. But it does seem to be pretty much completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.
Why on earth would you set 2017 as a cutoff? Language changes; there is nothing wrong with a word being coined for a concept and then applied to uses of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.
merely possible people are not people
And this, again, is just plain false, at least in the morally relevant senses of these words.
I will admit that my initial statement was imprecise, because I was not attempting to be philosophically rigorous. You seem to be focusing on the word “actual”, which was a clumsy word choice on my part, because “actual” is not in the phrase “person-affecting views”. Perhaps what I should have said is that Parfit seems to think that possible people are somehow not people with moral interests.
But at the end of the day, I’m not concerned with what academic philosophers think. I’m interested in morality and persuasion, not philosophy. It may be that his practical recommendations are similar to mine, but if his rhetorical choices undermine those recommendations, as I believe they do, that does not make him a friend, much less a godfather of longtermism. If he wasn’t capable of thinking about the rhetorical implications of his linguistic choices, then he should not have started commenting on morality at all.
You seem to be making an implicit assumption that longtermism originated in the philosophical literature, and that therefore whoever first put an idea into the philosophical literature is the originator of that idea. I call bullshit on that. These are not complicated ideas that first arose amongst philosophers. These are relatively simple ideas that I’m sure many people had thought of before anyone thought to write them down. One of the things I hate most about philosophers is their tendency to claim dominion over ideas just because they wrote long and pointless tomes about them.
This is utter nonsense. Of course your parents’ choice to have sex when they did rather than at some other time benefited you and hurt the other child they could have had. Of course a nuclear war that prevents a future person from existing harms that future person. This is obvious. And this is a central premise of longtermism. If you are right that Parfit coined the term, and he was not an advocate of person-affecting views, then the only conclusion I can draw is that he had an incredibly poor ability to think about the rhetorical implications of his language choices. That is actually pretty plausible for a philosopher, now that I think about it. But even if this is the case, it is a pretty strong mark against counting Parfit as a godfather of longtermism. And it is a linguistic choice that we, as advocates of longtermism, need to correct every time we encounter it. So-called person-affecting views are not the views that consider effects on persons generally; they are the misnamed views that consider effects on an arbitrary subset of people, and we need to make that point every time this language of person-affecting views is brought up.
Let’s clarify this a bit, then. Suppose there is a massive nuclear exchange tomorrow, which leads in short order to the extinction of humanity. I take it both proponents and opponents of person-affecting views will agree that that is bad for the people who are alive just before the nuclear detonations and die either from those detonations or shortly after because of them. Would that also be bad for a person who counterfactually would have been conceived the day after tomorrow, or in a thousand years, had there not been a nuclear exchange? I think the obviously correct answer is yes, and I think the longtermist has to answer yes, because that future person who exists in some timelines and not others is an actual person with actual interests that any ethical person must account for. My understanding is that person-affecting views say no, because they have mislabeled that future person as not an actual person. Am I misunderstanding what is meant by person-affecting views? Because if I have understood the term correctly, I have to stand by the position that it is an obviously biased term.
Put another way, it sounds like the main point of a person-affecting view is to deny that preventing a person from existing causes them harm (or maybe benefits them if their life would not have been worth living). Such a view does this by labeling such a person as somehow not a person. This is obviously wrong and biased.
Ah. I mistakenly thought that Parfit coined the term “person-affecting view”, which is such an obviously biased term that I thought he must have been against longtermism, but I can’t actually find confirmation of that, so maybe I’m just wrong about the origin of the term. I would be curious if anyone knows who did coin it.
How on earth is Derek Parfit the godfather of longtermism? If I recall correctly, this is the person who thinks future people are somehow not actual people, thereby applying the term “person-affecting views” to exactly the opposite of the set of views a longtermist would think that label applies to.
I would not frame the relationship that way, no. I would say EA is built on top of rationality. Rationality talks about how to understand the world and achieve your goals; it defines itself as systematized winning. But it is agnostic as to what those goals are. EA takes those rationality skills and fills in some particular goals. I think EA’s mistake was in creating a pipeline that often brought people into the movement without fully inculcating the skills and norms of rationality.
EA, back in the day, refused to draw a boundary with the rationality movement in the Bay area
That’s a hell of a framing. EA is an outgrowth of the rationality movement, which is centered in the Bay Area. EA wouldn’t be EA without rationality.
I take it “any bad can be offset by a sufficient good” is what you are thinking of as being in the yellow circle implications. And my view is that it is actually red circle. It might actually be how I would define utilitarianism, rather than your UC.
What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist or something? If you are so committed to the idea that some bads cannot be offset, then why don’t you just want to call yourself a deontologist? I come to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this “some bads cannot be offset” stuff.
My main issue here is a linguistic one. I’ve considered myself a utilitarian for years. I’ve never seen anything like this UC, though I think I agree with it, and with a stronger version of premise 4 that does insist on something like a mapping to the real numbers. You are essentially constructing an ethical theory, which very intentionally insists that there is no amount of good that can offset certain bads, and trying to shove it under the label “utilitarian”. Why? What is your motivation? I don’t get that. We already have a label for such ethical theories: deontology. The usefulness of having the label “utilitarian” is precisely to pick out those ethical theories that do, at least in principle, allow offsetting any bad with a sufficient good. That is a very central question on which people’s ethical intuitions and judgments differ, and which this language of utilitarianism and deontology has been created to describe. This is where one of reality’s joints is.
For myself, I do not share your view that some bads cannot be offset. When you talk of 70 years of the worst suffering in exchange for extreme happiness until the heat death of the universe, I would jump on that deal in a heartbeat. There is no part of me that questions whether that is a worthwhile trade. I cannot connect with your stated rejection of it. And I want to have labels like “utilitarian” and “effective altruist” to allow me to find and cooperate with others who are like me in this regard. Your attempt to get your view under these labels seems both destructive of my ability to do that and likely unproductive for you as well. Why don’t you just use other, more natural labels like “deontology” to find and cooperate with others like you?
For instance, if someone is interested in AI safety, we want them to know that they could find a position or funding to work in that area.
But that isn’t true, never has been, and never will be. Most people who are interested in AI safety will never find paid work in the field, and we should not lead them to expect otherwise. There was a brief moment when FTX funding made it seem like everyone could get funding for anything, but that moment is gone, and it’s never coming back. The economics of this are pretty similar to a church’s: yes, there are a few paid positions, but not many, and most members will never hold one. When there is a member who seems particularly well suited to the paid work, yes, it makes sense to suggest it to them. But we need to be realistic with newcomers that they will probably never get a check from EA, and the ones who leave because of that weren’t really EAs to begin with. The point of a local EA org, whether university-based or not, isn’t to funnel people into careers at EA orgs; it’s to teach them ideas that they can apply in their lives outside of EA orgs. Let’s not lose sight of that.
I discovered EA well after my university years, which maybe gives me a different perspective. It sounds to me like both you and your group member share a fundamental misconception of what EA is and what central questions it seeks to answer. You seem to be viewing it as a set of organizations from which to get funding and jobs. And like, there is a more or less associated set of organizations that provide a small number of people with funding and jobs, but that is not central to EA, and if that is your motivation for being part of EA, then you’ve missed what EA is fundamentally about. Most EAs will never receive a check from an EA org, and if your interest in EA is based on the expectation that you will, then you are not the kind of person we should want in EA.

EA is, at its core, a set of ideas about how we should deploy whatever resources (our time and our money) we choose to devote to benefiting strangers. Some of those are object-level ideas (we can have the greatest impact on people far away in time and/or space), some are more meta-level (the ITN framework), but they are about how we give, not how we get. If you think that you can have more impact in the near term than the long term, we can debate that within EA, but ultimately, as long as you are genuinely trying to answer that question and base your giving decisions on it, you are doing EA. You can allocate your giving to near-term causes, and that is fine.

But if you expect EAs who disagree with you to spread their giving in some even way, rather than allocating it to the causes they think are most effective, then you are expecting those EAs to do something other than EA. EA isn’t about spreading giving in any particular way across cause areas; it is about identifying the most effective cause areas and interventions and allocating giving there. The only reason we have more than one cause area is that we don’t all agree on which ones are most effective.
I’m not sure I see the problem here. By donating to effective charities, you are doing a lot of good. Whatever decision you make about eating meat or helping a random stranger who manages to approach you is trivial in comparison. Do those things or don’t; it doesn’t matter in the scheme of things. They aren’t what makes you good or bad; your donations are.
Again you are not making the connection, or maybe not seeing my basic point. Even if someone dislikes leftist-coded things, and this causes them both to oppose wokism and to oppose foreign aid, this still does not make opposition to foreign aid about anti-wokism. The original post suggested there was a causal arrow running between foreign aid and wokism, not that both have a causal arrow coming from the same source.
EA is an offshoot of the rationalist movement! The whole point of EA’s existence is to try to have better conversations, not to accept that most conversations suck and speak in vibes!
I also don’t think it’s true that conservatives don’t draw the distinction between foreign aid and USAID. Spend five minutes listening to any conservative talk about the decision to shut down USAID. They’re not talking about foreign aid being bad in general. They are talking about things USAID has done that do not look like what people expect foreign aid to look like. They seem to enjoy harping on the claim that USAID was buying condoms for Gaza. Now, whether or not that claim is true, and whether or not you think it is good to give Gazans condoms, you have to admit that condoms are not what anybody thinks of when they think of foreign aid.
You missed my point. I agree that foreign aid is charged along partisan lines. My point was that most things that are charged along partisan lines are not charged along woke/anti-woke lines. Foreign aid is not an exception to that rule; USAID is.
I appreciate that you have a pretty nuanced view here. Much of it I agree with, some of it I do not, but I don’t want to get into these weeds. I’m not sure how any of it undermines the point that wokism and opposition to foreign aid are basically orthogonal.
I don’t think foreign aid is at risk of being viewed as woke. Even the conservative criticisms of USAID tend to focus on things that look very ideological and very not like traditional foreign aid. And fundamentally opposition to wokism is motivated by wanting to treat all people equally regardless of race or sex, which fits very well with EA ideas generally and with work on global health and development specifically.
That said, it is true that for contingent historical reasons, ideas that have little to do with each other, or may even be in tension, often end up being supported by the same political party. And at our current moment in history, anti-wokism and nationalism do seem to have ended up in the same political party. I’m just saying it is the nationalism, not the anti-wokism, that is the potential issue for global health and development work.
I also don’t see how wokeness would have much to do with animal advocacy. I have found EA animal advocacy people to generally be more woke than other EAs, but that is not because of their ideas about animals; it is because of other aspects of how they conduct themselves. I don’t know if that generalizes to non-EA animal advocates. I think the concern about oligarchy pushing against animal welfare is a justified one; all I’m saying is that wokeness doesn’t really factor into that dynamic at all.
The online format seems like the main difference from the in-person Inkhaven. But in what way is literal Inkhaven not for EAs?