Sadly the forum doesn’t have that kind of feature. Peter and Tom are starting to work through a few minor bugs and feature requests but wouldn’t be able to implement something like that in the foreseeable future.
I can see why it would be convenient for utilitarian EAs to read this kind of material here. But equally, there are a couple of issues with posting material about consequentialism. First, it’s more abstract than seems optimal, and second, it’s presently not balanced by discussion of other systems of ethics. As you’re already implying with the filtering idea, if the EA Forum became an EA/Consequentialism Forum, that would be a real change that lots of people would not want.
Would you have found this post more easily if it was posted on Philosophy for Programmers and linked from the Utilitarianism Facebook group?
I’m trying to use Facebook less, and I don’t check the utilitarianism group, since it seems to have fallen into disuse.
I have to disagree that consequentialism isn’t required for EA. Certain EA views (like the shallow pond scenario) could be developed through non-consequentialist theories. But the E part of EA is about quantification and helping as many beings as possible. If that’s not consequentialism, I don’t know what is.
Maybe some non-utilitarian consequentialist theories are being neglected. But the OP could, I think, be just as easily applied to any consequentialism.
The ‘E’ relates to efficiency, usually thought of as instrumental rationality, which is to say, the ability to conform one’s means to one’s ends. That being the case, it is entirely independent of the (moral or non-moral) ends it is used to pursue.
I have reasons for charitable giving independent of utilitarianism, for example, and thus find the movement’s technical analysis of the instrumental rationality of giving highly valuable.
You can believe that you want to help people a lot, and that it’s a virtue to investigate where your donations are going, so you want to be a good person by picking charities that help lots of people. Whether there are infinitely many people is irrelevant to whether you’re a virtuous helper.
You might like giving to GiveWell just because, without feeling the need for recourse to any sense of morality.
The other problem is that there’s going to be some optimal level of abstraction for most of the conversation at the forum to be at, in order to encourage people to actually get things done, and I just don’t think that philosophical analysis of consequentialism is optimal for most people. I’ve been there and discussed those issues a lot over the years, and I’d just like to move past it and actually do things, y’know :p
Still happy for Ben to think about it because he’s smart, but it’s not for everyone!
there’s going to be some optimal level of abstraction
I’m curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:
http://effective-altruism.com/ea/b2/open_thread_5/1fe
Also, I know that I’d really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.
Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum, and for that there’s also 80,000 Hours, the Cause prioritisation wiki and the Open Philanthropy Project.
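For concreteness, here’s a minimal sketch of what such a calculator might compute, assuming you already have probability-weighted guesses about what a donation achieves. All of the organisation names and figures below are hypothetical placeholders, and the hard philosophical work is hidden in producing the (probability, utilons) estimates in the first place.

```python
# Rough sketch of an expected-utilons-per-dollar comparison.
# Every organisation name, probability and utilon figure here is a
# hypothetical placeholder, not a real estimate.

def expected_utilons_per_dollar(outcomes, cost):
    """outcomes: (probability, utilons) pairs describing what a donation
    of `cost` dollars might achieve."""
    return sum(p * u for p, u in outcomes) / cost

# Hypothetical inputs: a near-certain small benefit vs. a long-shot large one.
orgs = {
    "Org A (direct aid)": ([(0.8, 1000), (0.2, 0)], 100),
    "Org B (speculative)": ([(0.01, 500000), (0.99, 0)], 100),
}

for name, (outcomes, cost) in orgs.items():
    print(name, expected_utilons_per_dollar(outcomes, cost))
```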
If you’re going for the maximum number of years of utility per dollar, then you’ll be looking at x-risk, as it’s the cause that most credibly claims an impact that extends far into the future (there aren’t yet credible “trajectory changes”). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding, but it is applying for million-dollar grants, so it seems to be the best-leveraged.
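As a rough, hypothetical illustration of that leverage argument (the figures below are invented, not CSER’s actual numbers, apart from the million-dollar grant size mentioned above): if a marginal donation noticeably raises the chance that a large grant application succeeds, each dollar donated can unlock more than a dollar of expected funding.

```python
# Hypothetical illustration of "leverage" from supporting a grant application.
# None of these figures are real; they only show the shape of the calculation.

donation = 20000        # marginal donation to a fledgling org (assumed)
grant_size = 1000000    # million-dollar grant being applied for
p_with = 0.45           # assumed success probability with the extra support
p_without = 0.40        # assumed success probability without it

expected_extra_funding = (p_with - p_without) * grant_size
leverage = expected_extra_funding / donation
print(leverage)  # roughly 2.5 expected dollars of grant funding per dollar donated
```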
I strongly disagree. :)
It’s obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn’t, it should soon move in that direction.)
Depending on how one defines “x-risk”, many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges: for instance, whether the takeoff is unipolar or multipolar, what kind of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing whether or not the galaxy gets colonized.
I agree. I’d have been clearer if I’d said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations, e.g. the period when AI takes off, and are managed by organisations that think about x-risk.
Thanks for the data point!