“The universe may very well be infinite, and hence contain an infinite amount of happiness and sadness. This causes several problems for altruists; for example: we can plausibly only affect a finite subset of the universe, and an infinite quantity of happiness is unchanged by the addition or subtraction of a finite amount of happiness. This would imply that all forms of altruism are equally ineffective.”
I have no particular objection to those who, unlike me, are interested in aggregative ethical dilemmas, but I think it at least preferable that effective altruism—a movement aspiring to ecumenical reach independent of any particular ethical presuppositions—not automatically presume some cognate of utilitarianism. The repeated posts on this forum about decidedly abstract issues in utilitarianism, with little or no connection to the practice of charitable giving, are perhaps not particularly helpful in this regard. Most basically, however, I object to your equating altruism with utilitarianism as a matter of form: that should not be assumed, but qualified.
The problems with extending standard total utilitarianism to the infinite case are the easiest to understand, which is why I put that in the summary, but I don’t think most of the article was about that.
For example, the fact that you can’t have intergenerational equity (Thm 3.2.1) seems pretty important no matter what your philosophical bent.
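For readers unfamiliar with results of this kind, here is one well-known impossibility theorem from the intergenerational-equity literature, due to Basu and Mitra (2003); I give it only as a representative example of the family of results being discussed, and the exact statement of Thm 3.2.1 in the article may differ.

```latex
% A representative impossibility result (Basu & Mitra 2003, Econometrica);
% the article's Thm 3.2.1 may be stated differently.
% A utility stream assigns one utility level per generation:
% $x = (x_1, x_2, \dots) \in Y^{\mathbb{N}}$, where $Y \subseteq \mathbb{R}$ has at least two elements.
\textbf{Theorem.} There is no social welfare function $W : Y^{\mathbb{N}} \to \mathbb{R}$ satisfying both
\begin{itemize}
  \item \emph{Strong Pareto:} if $x_t \ge y_t$ for every generation $t$ and $x_s > y_s$ for some $s$,
        then $W(x) > W(y)$;
  \item \emph{Anonymity (intergenerational equity):} $W(x) = W(y)$ whenever $y$ is obtained from $x$
        by permuting the utilities of finitely many generations.
\end{itemize}
```

In words: no single real-valued ranking of infinite utility streams can both treat all generations equally and always count it an improvement when some generation is made better off at no cost to any other.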
A minuscule proportion of political philosophy has concerned itself with aggregative ethics, and, being a relatively deep hermeneutical contextualist, I take what is important to those philosophers to be what they themselves thought important, and thus take your statement—that intergenerational equity is perennially important—to be patently wrong. Still less is it important to people not formally trained in philosophy.
That I have to belabour the point that most of those interested in charitable giving are not thereby automatically interested in the ‘infinity problem’ is exactly demonstrative of my initial point anyhow: highly controversial ethical theories, and obscure concerns internal to them, are being projected as obviously constitutive of, or as setting the agenda for, effective altruism.
This seems reasonable to me. Assuming aggregative ethics only and examining niche issues within it are probably not diplomatically ideal for this site. Especially when one could feasibly get just as much attention for this kind of post on LessWrong.
That’d suggest that if people want to write more material like this, it might fit better elsewhere. What do others think?
I found the OP useful. If it were on LW, I probably wouldn’t have seen it. I don’t go on LW because there’s a lot of stuff I’m not interested in compared to what I am interested in (ethics). Is there a way to change privacy settings so that certain posts are only visible to people who sign in or something?
Thanks for the data point!
Sadly the forum doesn’t have that kind of feature. Peter and Tom are starting to work through a few minor bugs and feature requests but wouldn’t be able to implement something like that in the foreseeable future.
I can see why it would be convenient for utilitarian EAs to read this kind of material here. But equally, there are a couple of issues with posting material about consequentialism. First, it’s more abstract than seems optimal; second, it’s presently not balanced by discussion of other systems of ethics. As you’re already implying with the filtering idea, if the EA Forum became an EA/Consequentialism Forum, that would be a real change that lots of people would not want.
Would you have found this post more easily if it was posted on Philosophy for Programmers and linked from the Utilitarianism Facebook group?
I’m trying to use Facebook less, and I don’t check the utilitarianism group, since it seems to have fallen into disuse.
I have to disagree that consequentialism isn’t required for EA. Certain EA views (like the shallow pond scenario) could be developed through non-consequentialist theories. But the E part of EA is about quantification and helping as many beings as possible. If that’s not consequentialism, I don’t know what is.
Maybe some non-utilitarian consequentialist theories are being neglected. But the OP could, I think, be just as easily applied to any consequentialism.
The ‘E’ relates to efficiency, usually thought of as instrumental rationality, which is to say the ability to conform one’s means to one’s ends. That being the case, it is entirely independent of the particular (moral or non-moral) ends of whoever possesses it.
I have reasons for charitable giving independent of utilitarianism, for example, and thus find the movement’s technical analysis of the instrumental rationality of giving highly valuable.
You can believe that you want to help people a lot, and that it’s a virtue to investigate where those funds are going, so you want to be a good person by picking charities that help lots of people. Whether there are infinitely many people is irrelevant to whether you’re a virtuous helper.
You might like giving to GiveWell just because, and not feel the need for recourse to any sense of morality.
The other problem is that there’s going to be some optimal level of abstraction for most of the conversation on this forum to be at, in order to encourage people to actually get things done, and I just don’t think that philosophical analysis of consequentialism is optimal for most people. I’ve been there and discussed those issues at length for years, and I’d just like to move past them and actually do things, y’know :p
Still happy for Ben to think about it because he’s smart, but it’s not for everyone!
“there’s going to be some optimal level of abstraction”

I’m curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:
http://effective-altruism.com/ea/b2/open_thread_5/1fe
Also, I know that I’d really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.
Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum, and for that there are also 80,000 Hours, the Cause Prioritisation Wiki and the Open Philanthropy Project.
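For what it’s worth, a minimal sketch of what such a calculator might look like is below. The model (expected utilons per dollar = probability of success × utilons if successful ÷ cost) is deliberately naive, and every number and organisation entry is a placeholder made up for illustration, not an estimate from this thread or from any real cost-effectiveness analysis.

```python
# A toy expected-utilons-per-dollar calculator.
# All figures below are made-up placeholders, not real cost-effectiveness estimates.

from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    p_success: float           # subjective probability the intervention achieves its effect
    utilons_if_success: float  # value produced if it succeeds, in arbitrary "utilons"
    cost_usd: float            # dollars required to fund it

    def expected_utilons_per_dollar(self) -> float:
        return self.p_success * self.utilons_if_success / self.cost_usd


# Hypothetical entries, purely for illustration.
candidates = [
    Intervention("Direct global health charity", p_success=0.9,
                 utilons_if_success=1e4, cost_usd=1e5),
    Intervention("Speculative far-future project", p_success=0.01,
                 utilons_if_success=1e9, cost_usd=1e6),
]

# Rank the candidates by expected utilons per dollar, highest first.
for c in sorted(candidates, key=lambda c: c.expected_utilons_per_dollar(), reverse=True):
    print(f"{c.name}: {c.expected_utilons_per_dollar():.3g} expected utilons per dollar")
```

The arithmetic is of course the trivial part; as the comment above says, the philosophy is in where the probabilities and utilon figures come from.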
If you’re going for the maximum number of years of utility per dollar, then you’ll be looking at x-risk, as it’s the cause that most credibly claims an impact extending far into the future (there aren’t yet credible “trajectory changes”). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding, but is applying for million-dollar grants, so it seems to be the best-leveraged.
I strongly disagree. :)
It’s obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn’t, it should soon move in that direction.)
Depending on how one defines “x-risk”, many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges: for instance, whether the takeoff is unipolar or multipolar, what kind of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing whether or not the galaxy gets colonized.
I agree. To be clearer, I should have said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations, e.g. the period where AI takes off, and are managed by organisations that think about x-risk.
These issues are relevant for any ethical system that assigns non-zero weight to the consequences.