This is utter nonsense. Of course your parents' choice to have sex when they did rather than at some other time benefited you and hurt the other child they could have had. Of course a nuclear war that prevents a future person from existing harms that future person. This is obvious, and it is a central premise of longtermism. If you are right that Parfit coined the term, and he was not an advocate of person-affecting views, then the only conclusion I can draw is that he had an incredibly poor ability to think about the rhetorical implications of his language choices. That is actually pretty plausible for a philosopher, now that I think about it. But even if this is the case, it is a pretty strong mark against counting Parfit as a godfather of longtermism. And it is a linguistic choice that we, as advocates of longtermism, need to correct every time we encounter it. So-called person-affecting views are not the views that consider effects on persons generally; they are the misnamed views that consider effects on an arbitrary subset of people, and we need to make that point every time this language of person-affecting views is brought up.
Let's clarify this a bit then. Suppose there is a massive nuclear exchange tomorrow, which leads in short order to the extinction of humanity. I take it both proponents and opponents of person-affecting views will agree that that is bad for the people who are alive just before the nuclear detonations and die either from those detonations or shortly after because of them. Would it also be bad for a person who counterfactually would have been conceived the day after tomorrow, or in a thousand years, had there not been a nuclear exchange? I think the obviously correct answer is yes, and I think the longtermist has to answer yes, because that future person who exists in some timelines and not others is an actual person with actual interests that any ethical person must account for. My understanding is that person-affecting views say no, because they have mislabeled that future person as not an actual person. Am I misunderstanding what is meant by person-affecting views? Because if I have understood the term correctly, I have to stand by the position that it is an obviously biased term.
Put another way, it sounds like the main point of a person-affecting view is to deny that preventing a person from existing causes them harm (or perhaps benefits them, if their life would not have been worth living). The view accomplishes this by labeling such a person as somehow not a person. That is obviously wrong and biased.
Ah. I mistakenly thought that Parfit coined the term “person-affecting view”, which is such an obviously biased term that I thought he must have been against longtermism. But I can’t actually find confirmation of that, so maybe I’m just wrong about the origin of the term. I would be curious if anyone knows who did coin it.
How on earth is Derek Parfit the godfather of longtermism? If I recall correctly, this is the person who thinks future people are somehow not actual people, thereby applying the term “person-affecting views” to exactly the opposite of the set of views a longtermist would think that label applies to.
I would not frame the relationship that way, no. I would say EA is built on top of rationality. Rationality talks about how to understand the world and achieve your goals; it defines itself as systematized winning. But it is agnostic as to what those goals are. EA takes those rationality skills and fills in some particular goals. I think EA’s mistake was in creating a pipeline that often brought people into the movement without fully inculcating the skills and norms of rationality.
EA, back in the day, refused to draw a boundary with the rationality movement in the Bay Area.
That’s a hell of a framing. EA is an outgrowth of the rationality movement, which is centered in the Bay Area. EA wouldn’t be EA without rationality.
I take it “any bad can be offset by a sufficient good” is what you are thinking of as being in the yellow circle implications. And my view is that it is actually red circle. It might actually be how I would define utilitarianism, rather than your UC.
What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist or something? If you are so committed to the idea that some bads cannot be offset, then why don’t you just want to call yourself a deontologist? I come to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this “some bads cannot be offset” stuff.
My main issue here is a linguistic one. I’ve considered myself a utilitarian for years. I’ve never seen anything like this UC, though I think I agree with it, and with a stronger version of premise 4 that does insist on something like a mapping to the real numbers. You are essentially constructing an ethical theory, one which very intentionally insists that there is no amount of good that can offset certain bads, and trying to shove it under the label “utilitarian”. Why? What is your motivation? I don’t get that. We already have a label for such ethical theories: deontology. The usefulness of having the label “utilitarian” is precisely to pick out those ethical theories that do, at least in principle, allow offsetting any bad with a sufficient good. That is a very central question on which people’s ethical intuitions and judgments differ, and which this language of utilitarianism and deontology was created to describe. This is where one of reality’s joints is.
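To put that offsetting claim semi-formally (this is my own gloss, not anything from your post; “Outcomes” and v are placeholder notation, and I’m assuming additive aggregation):

$$v : \text{Outcomes} \to \mathbb{R}, \qquad \forall b\,\big(v(b) < 0 \implies \exists g \text{ such that } v(g) + v(b) > 0\big)$$

The existence of some such value function v, with the offsetting clause holding for every bad b, is roughly what I take the label “utilitarian” to be for; deny it for even one b, and on my usage you are doing deontology.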
For myself, I do not share your view that some bads cannot be offset. When you talk of 70 years of the worst suffering in exchange for extreme happiness until the heat death of the universe, I would jump on that deal in a heartbeat. There is no part of me that questions whether that is a worthwhile trade. I cannot connect with your stated rejection of it. And I want to have labels like “utilitarian” and “effective altruist” to allow me to find and cooperate with others who are like me in this regard. Your attempt to get your view under these labels seems both destructive of my ability to do that, and likely unproductive for you as well. Why don’t you want to just use other, more natural labels like “deontology” to find and cooperate with others like you?
For instance, if someone is interested in AI safety, we want them to know that they could find a position or funding to work in that area.
But that isn’t true, never has been, and never will be. Most people who are interested in AI safety will never find paid work in the field, and we should not lead them to expect otherwise. There was a brief moment when FTX funding made it seem like everyone could get funding for anything, but that moment is gone, and it’s never coming back. The economics of this are pretty similar to a church—yes, there are a few paid positions, but not many, and most members will never hold one. When there is a member who seems particularly well suited to the paid work, yes, it makes sense to suggest it to them. But we need to be realistic with newcomers that they will probably never get a check from EA, and the ones who leave because of that weren’t really EAs to begin with. The point of a local EA org, whether university based or not, isn’t to funnel people into careers at EA orgs; it’s to teach them ideas that they can apply in their lives outside of EA orgs. Let’s not lose sight of that.
I discovered EA well after my university years, which maybe gives me a different perspective. It sounds to me like both you and your group member share a fundamental misconception of what EA is and what central questions it seeks to answer. You seem to be viewing it as a set of organizations from which to get funding/jobs. And yes, there is a more or less associated set of organizations which provide a small number of people with funding and jobs, but that’s not central to EA, and if that is your motivation for being part of EA, then you’ve missed what EA is fundamentally about. Most EAs will never receive a check from an EA org, and if your interest in EA is based on the expectation that you will, then you are not the kind of person we should want in EA.

EA is, at its core, a set of ideas about how we should deploy whatever resources (our time and our money) we choose to devote to benefiting strangers. Some of those are object-level ideas (we can have the greatest impact on people far away in time and/or space), some are more meta-level (the ITN framework), but they are about how we give, not how we get.

If you think that you can have more impact in the near term than the long term, we can debate that within EA, but ultimately, as long as you are genuinely trying to answer that question and base your giving decisions on it, you are doing EA. You can allocate your giving to near-term causes and that is fine. But if you expect EAs who disagree with you to spread their giving in some even way, rather than allocating their giving to the causes they think are most effective, then you are expecting those EAs to do something other than EA. EA isn’t about spreading giving in any particular way across cause areas; it is about identifying the most effective cause areas and interventions and allocating giving there. The only reason we have more than one cause area is that we don’t all agree on which ones are most effective.
I’m not sure I see the problem here. By donating to effective charities, you are doing a lot of good. Whatever decision you make about eating meat or helping a random stranger who manages to approach you is trivial in comparison. Do those things or don’t; it doesn’t matter in the scheme of things. They aren’t what makes you good or bad; your donations are.
Again, you are not making the connection, or maybe not seeing my basic point. Even if someone dislikes leftist-coded things, and this causes them both to oppose wokism and to oppose foreign aid, that still does not make opposition to foreign aid about anti-wokism. The original post suggested there was a causal arrow running between foreign aid and wokism, not that both have causal arrows coming from a common source.
EA is an offshoot of the rationalist movement! The whole point of EA’s existence is to try to have better conversations, not to accept that most conversations suck and speak in vibes!
I also don’t think it’s true that conservatives don’t draw the distinction between foreign aid and USAID. Spend five minutes listening to any conservative talk about the decision to shut down USAID. They’re not talking about foreign aid being bad in general. They are talking about things USAID has done that do not look like what people expect foreign aid to look like. They seem to enjoy harping on the claim that USAID was buying condoms for Gaza. Now, whether or not that claim is true, and whether or not you think it is good to give Gazans condoms, you have to admit that condoms are not what anybody thinks of when they think of foreign aid.
You missed my point. I agree that foreign aid is charged along partisan lines. My point was that most things that are charged along partisan lines are not charged along woke/anti-woke lines. Foreign aid is not an exception to that rule; USAID is.
I appreciate that you have a pretty nuanced view here. Much of it I agree with, some of it I do not, but I don’t want to get into these weeds. I’m not sure how any of it undermines the point that wokism and opposition to foreign aid are basically orthogonal.
I don’t think foreign aid is at risk of being viewed as woke. Even the conservative criticisms of USAID tend to focus on things that look very ideological and very not like traditional foreign aid. And fundamentally opposition to wokism is motivated by wanting to treat all people equally regardless of race or sex, which fits very well with EA ideas generally and with work on global health and development specifically.
That said, it is true that for contingent historical reasons, ideas that have little to do with each other, or may even be in tension, often end up being supported by the same political party. And at our current moment in history, anti-wokism and nationalism do seem to have ended up in the same political party. I’m just saying it is the nationalism, not the anti-wokism, that is the potential issue for global health and development work.
I also don’t see how wokeness would have much to do with animal advocacy. I have found EA animal advocacy people to generally be more woke than other EAs, but that is not because of their ideas about animals, it is because of other aspects of how they conduct themselves. I don’t know if that generalizes to non-EA animal advocates. The concern about oligarchy pushing against animal welfare I think is a justified one, all I’m saying is wokeness doesn’t really factor into that dynamic at all.
I guess I’d be somewhat interested to know why you think serious harassment is so unlikely. The sources that I cited seemed quite worrying to me on this front.
The Guardian reported the following: “Trump’s escalating threats to pervert the criminal justice system need to be taken seriously,” said the former justice department inspector general Michael Bromwich. “We have never had a presidential candidate state as one of his central goals mobilizing the levers of justice to punish enemies and reward friends. No one has ever been brazen enough to campaign on an agenda of retribution and retaliation.” And NPR reported that “Trump has issued more than 100 threats to investigate, prosecute, imprison or otherwise punish his perceived opponents”.
I think a lot of where we differ is in how much we trust the media when it comes to Trump. I’ve generally found that the media will report ordinary things as though they were extraordinary and bad, will take the worst possible interpretation of ambiguous quotes, and will do whatever else they think will keep people irrationally afraid of Trump. Take the particular claim that you made your centerpiece—that Musk was throwing around his wealth to support Trump’s nominees. “Rich American uses wealth to influence politicians” is not exactly news—that happens every day on both sides of the aisle. And what you put in block quotes was just that fact wrapped in hyperbolic language. I looked very briefly at your seven sources. All were from during the election, and all seemed to draw on the same quote: “Trump said that if ‘radical left lunatics’ disrupt the election, ‘it should be very easily handled by — if necessary, by National Guard, or if really necessary, by the military.’” The author somehow read that as “Trump has expressed support for using government force against domestic political rivals.” Talk about a straw man! Trump was suggesting using the military to ensure the election happens. That may be unwise and unprecedented and predictably totally unnecessary, but it is nothing like using the military against political rivals. This is the press’s standard MO when reporting on Trump. So when they say “100 threats to whatever,” I just assume that few if any of the things on their list are actually what they are claimed to be.
Having said that, the point I was making relied less on whether Trump would actually seriously harass people than on whether they would fear that he would, and specifically fear this enough that they would avoid taking actions which might act as a check or balance on presidential power. Do you believe that people don’t have this fear?
I agree that people have that fear. I do not think it is warranted. And I think indulging an unwarranted fear is generally a bad idea—you just incentivize people to have unwarranted fears in the future. We need political rhetoric to cool down right now, not heat up.
For your bulleted list of bad things, I agree that many of them are unqualifiedly bad. A few of them I have more nuanced views on. But I don’t want to go point by point through it, as I don’t think that would shed any light on whether democracy is at risk or what we should do about it.
With regard to inspectors general (IGs) and the Office of Government Ethics (OGE), I’m not too familiar with these institutions, but my read is that the practical implications of who holds these posts are basically nil. IGs don’t directly remedy anything; they just write reports. They weren’t going to accomplish anything Trump’s appointees didn’t want them to accomplish anyway.
I get worried when I see people questioning whether the president has the right to fire them, because I value democracy in the literal sense—accountability to the people—and the mechanism by which any executive branch employee is accountable to the people in our system is through the president’s ability to fire them.
As for the neglectedness of lawsuits, I think we need to ask about particular lawsuits. It is certainly true that some lawsuits can fail for lack of money or talent. I don’t know of a reason to think that either is in short supply when it comes to challenging Trump. As you’ve pointed out, people are scared as shit, and there are plenty of liberals in the legal profession. But if we want to make the neglectedness case, I want to see it at the level of a particular legal issue, not just “Trump is bad for [checks and balances and good government]”.
That seems reasonable up to a certain point. It seems reasonable for long-term grants to be paid out on some schedule and for a researcher to arrange a study such that a loss of funding would force them to wrap up early and without the data they were hoping for. But I think a researcher should still have an obligation to have enough money in their own bank account that, if funding gets cut, they can wrap up the study in a way that is safe for the subjects—they should have enough cash to wean the subjects off drugs or remove devices early or whatever else is involved in wrapping up. Funding getting cut is a risk that I would think would be fairly obvious when planning a study—especially if your source of funding is a government agency in a country that you know will have an election before the study concludes.
I’m not sold, and I’m going to lay out some general reasons.
Firstly, a lot of the concerns expressed here I think are extremely unlikely. I do not think there is any serious risk that Trump will send the military after, or otherwise seriously harass, former government employees. I do not think he will pursue a third term or otherwise interfere with our tradition of free and fair elections. And I do not think he will openly defy a court order.
Some of the other things you fear I don’t necessarily see as bad. As a matter of democratic accountability, by which I mean accountability to the people rather than checks and balances or “good” governance, I do think the president has the right to fire executive branch employees, whether or not we like the particular decisions he makes.
Secondly, and my error bars on this point do cross the zero line, but my expectation is that Trump will reduce the risk of a nuclear or biological catastrophe. The wars in Ukraine and Israel both started on Biden’s watch. With Trump I am hopeful that both will reach some kind of resolution on somewhat favorable terms.
I do think it is good that people are filing lawsuits challenging the questionably legal things Trump is doing. I don’t think that this intervention is particularly neglected.
Also, you seem to suggest that Danielle Sassoon’s actions in regard to the Eric Adams case are somehow an instance of legislative checks on the executive. I don’t get that. Sassoon was an employee of the executive branch, not the legislative. That’s why an executive branch official was able to fire her.
And this, again, is just plain false, at least in the morally relevant senses of these words.
I will admit that my initial statement was imprecise, because I was not attempting to be philosophically rigorous. You seem to be focusing on the word “actual”, which was a clumsy word choice on my part, because “actual” is not in the phrase “person-affecting views”. Perhaps what I should have said is that Parfit seems to think that possible people are somehow not people with moral interests.
But at the end of the day, I’m not concerned with what academic philosophers think. I’m interested in morality and persuasion, not philosophy. It may be that his practical recommendations are similar to mine, but if his rhetorical choices undermine those recommendations, as I believe they do, that does not make him a friend, much less a godfather of longtermism. If he wasn’t capable of thinking about the rhetorical implications of his linguistic choices, then he should not have started commenting on morality at all.
You seem to be making an implicit assumption that longtermism originated in the philosophical literature, and that therefore whoever first put an idea into the philosophical literature is the originator of that idea. I call bullshit on that. These are not complicated ideas that first arose amongst philosophers. These are relatively simple ideas that I’m sure many people had thought of before anyone thought to write them down. One of the things I hate most about philosophers is their tendency to claim dominion over ideas just because they wrote long and pointless tomes about them.