You can believe that you want to help people a lot, and that it’s a virtue to investigate where your donations are going, so you want to be a good person by picking charities that help lots of people. Whether there are infinitely many people is irrelevant to whether you’re a virtuous helper.
You might like giving to GiveWell just because, and not feel the need for recourse to any sense of morality.
The other problem is that there’s going to be some optimal level of abstraction for most of the conversation at the forum to be at in order to encourage people to actually get things done, and I just don’t think philosophical analysis of consequentialism is that level for most people. I’ve been there and discussed those issues a lot over the years, and I’d just like to move past it and actually do things, y’know :p
Still happy for Ben to think about it because he’s smart, but it’s not for everyone!
“there’s going to be some optimal level of abstraction”

I’m curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:
http://effective-altruism.com/ea/b2/open_thread_5/1fe
Also, I know that I’d really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money, which surely involves a lot of philosophy.
Making an expected-utilons-per-dollar calculator is an interesting project. Cause prioritisation in the broader sense can obviously fit on this forum, and for that there are also 80,000 Hours, the Cause Prioritisation Wiki, and the Open Philanthropy Project.
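For concreteness, here is a minimal sketch (in Python) of the arithmetic core such a calculator might have: an expected-utility estimate over outcome scenarios, divided by cost. The organisation names, probabilities, and utilon figures below are invented placeholders, not real estimates; all of the hard philosophical work would go into choosing them.

```python
# A rough sketch of the core computation such a calculator might perform.
# Every name and number here is a made-up placeholder, not a real estimate.

def expected_utilons_per_dollar(scenarios, cost):
    """Expected utilons per dollar for one donation opportunity.

    scenarios: list of (probability, utilons) pairs covering the outcomes
               you care to model; unmodelled probability mass is treated
               as contributing zero utilons.
    cost:      dollars needed to fund the intervention.
    """
    expected_utilons = sum(p * u for p, u in scenarios)
    return expected_utilons / cost

# Toy comparison across hypothetical organisations.
candidates = {
    "OrgA (global poverty)": ([(0.9, 1_000)], 500),
    "OrgB (x-risk)": ([(0.001, 10_000_000)], 2_000),
}

for name, (scenarios, cost) in candidates.items():
    print(f"{name}: {expected_utilons_per_dollar(scenarios, cost):.2f} utilons/$")
```

Even in this toy form, the output is dominated by the low-probability, high-stakes entry, which is exactly why the philosophical inputs (how to weigh tiny probabilities, whose utilons count) matter more than the arithmetic.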
If you’re going for the maximum number of years of utility per dollar, then you’ll be looking at x-risk, as it’s the cause that most credibly claims an impact that extends far into the future (there aren’t yet credible “trajectory changes”). That leaves CSER, MIRI, FLI, FHI and GCRI, of which CSER is currently in a fledgling state with only tens of thousands of dollars of funding but is applying for million-dollar grants, so it seems to be the best-leveraged.
I strongly disagree. :)
It’s obvious that, say, the values of society may make a huge difference to the far future if (as seems likely) early AI uses goal preservation. (Even if the first version of AI doesn’t, it should soon move in that direction.)
Depending on how one defines “x-risk”, many ways of shaping AI takeoffs are not work on extinction risk per se but concern the nature of the post-human world that emerges: for instance, whether the takeoff is unipolar or multipolar, what kind of value loading is used, and how political power is divided. These can all have huge impacts on the outcome without changing whether or not the galaxy gets colonized.
I agree. I’d be clearer if I said that I think the only credible trajectory changes address the circumstances of catastrophically risky situations, e.g. the period when AI takes off, and are managed by organisations that think about x-risk.