I disagree about at least some biorisk work, as the allocation of scarce resources in public health has distributive effects, and some work on pandemic preparedness has reduced focus on near-term vaccination campaigns. I suspect the same is true, to a lesser extent, of pushing people who might otherwise work on near-term ML bias toward longer-term concerns. But as to how this relates to your second point, and to the point itself, I agree completely, and don’t think it’s reasonable to say it’s blameworthy or morally unacceptable, though, as I argued, I think we should worry about the impacts.
But the last point confuses me. Even setting aside whether one accepts a person-affecting view, shifting efforts to help John can (by omission, at the very least) injure Sam. “The global poor” isn’t a uniform pool, and helping those who are part of “the global poor” in a century by, say, taxing someone now is a counterfactual harm for the person now. If you aggregate the way you prefer, this problem goes away, but there are certainly ethical views, even within utilitarianism, where this isn’t acceptable: for example, if the future benefit is discounted so heavily that it’s outweighed by the present harm.
On your first para, I was responding to this claim: “It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.” I said that most work on bio and AI was not just theory but was applied. I don’t think the things you say in the first para present any evidence against that claim, but rather they seem to grant my initial point.
I agree that there are some things in bio and AI that are applied, though the vast majority of the work in both areas is still fairly far from application. But my point, which granted your initial point, was responding to “I don’t think it counterfactually harms the global poor.”
This is question-begging: it only counterfactually harms the poor on a person-affecting view of ethics, which longtermists reject.
I’m a longtermist and I don’t reject (asymmetric) person(-moment-)affecting views, at least not those on which necessary people ≠ only present people. I would be very hard-pressed to give a clean formalization of “necessary people,” though. I think it’s bad if effective altruists think longtermism can only be justified with astronomical-waste-style arguments and not at all if someone has person-affecting intuitions. (This is staying within a broadly utilitarian framework; there are, of course, also obligation-to-ancestors-type justifications for longtermism and the like.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than about extinction risk.
Since I could only ever give very handwavey defenses of person-affecting views, and even more handwavey explanations of my overall moral views: here’s a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Writing a paper on an asymmetric person-affecting view also doesn’t necessarily mean, of course, that the author doesn’t actually reject person-affecting views.)
Is that true?
Many current individuals will be worse off when resources don’t go to them (for instance, because the resources go to saving future lives) than when they do (for instance, when funds are focused on near-term utilitarian goals like poverty reduction). And if, as most of us expect, the world’s wealth continues to grow, effectively all future people who are helped by existential risk reduction will not be what we’d now consider poor. You can defend this via the utilitarian calculus across all people, but that doesn’t change the distributive impact between groups.
Equally, many future people will be worse off than they would otherwise have been if we don’t reduce extinction risks. The claim is about the net total impact on non-white people.
Your definition of problematic injustice seems far too narrow, and I explicitly didn’t refer to race in the previous post. The example I gave was that the most disadvantaged people are in the present, and are further injured—not that non-white people (which under current definitions will describe approximately all of humanity in another half dozen generations) will be worse off.
On the second point, yes, I agree that there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible. If someone held a view which said that they only count the interests of white future people, I think it would be quite clear that this was bad for the interests of non-white people in a very important way. Therefore, if I ignore all future people, then I ignore all future non-white people, which is bad for their interests in a very important way.
As I said above in a different comment thread, it seems clear we’re talking past one another.
Yes, being racist would be racist, and no, that’s not the criticism. You said that “there are some popular views on which we would discount or ignore future people. I just don’t think that they are plausible.” And I think part of the issue is exactly this dismissiveness. As a close analogy, imagine someone said “there are some popular views where AI could be a risk to humans. I just don’t think that these are plausible,” and went on to spend money building ASI instead of engaging with the possibility that they are wrong, or taking any action to investigate or hedge against that possibility.
I don’t really understand your response. Most of the people who argue for a longtermist ethical standpoint have spent many, many years thinking about the possibility that they are wrong, and arguing against person-affecting views, during their philosophy degrees. I could talk to you for several weeks about the merits and demerits of such views and the published literature on them.
“Yes, being racist would be racist, and no, that’s not the criticism.” I don’t really understand your point here.
My point is that many people who disagree with the longtermist ethical viewpoint have also spent years thinking about the issues, and that dismissing the views of the majority of philosophers, and of the vast, vast majority of people, as not plausible is itself one of the problems I tried to highlight in the original post when I said that a small group talking about how to fix everything should raise flags.
And my point about racism is that criticizing choices and priorities which have the potential to perpetuate existing structural disadvantages and inequities is not the same as calling someone racist.
The standard in the first para appears to be something like “you can never say that something is implausible if some philosophers believe it.” That seems like a pretty weird standard. Another way of saying something is implausible is just saying “I think it is probably false.”
Near-termists are also a small group talking about how to fix everything.
This is perhaps too meta, but on the second para: if that is what you meant, I don’t understand how it is a response to the comment your response was to.
I’m pointing out that you’re privileging your views over those of others—not “some philosophers,” but “most people.”
And unless you’re assuming a fairly strong version of moral realism, this isn’t a factual question but a values question. So it’s strange to me to think that we should get to assume we’re correct despite being a small minority, without at least a far stronger argument that most people would agree with longtermism if it were properly presented; and I think Stefan Schubert’s recent work implies that is not at all clear.
Any time you take a stance on anything, you are privileging your view over those of some other people. Your argument also applies to people working on animal welfare and on global poverty. In surveys, most people don’t even seem to care about saving more lives rather than fewer.
If we are going to go down the route of saying that what EAs do should be decided by the majority opinion of the current global population, then that would be the end of EA of any kind. As I understand it, your claim is that the total view is false (or we don’t have reason to act on it) because the vast majority of the world population do not believe in the total view. Is that right?
It is not difficult to come up with examples. In 1500, most people would have believed that violence against women and slavery were permissible. Would that have made you stop campaigning to bring an end to them? These are also values, after all.