It’s a concept thrown around a lot in EA circles, but I was surprised that I couldn’t find any papers fleshing the idea out. The most cited link is a blog post from 2009: http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html
I also found this resource:
http://users.ox.ac.uk/~mert2255/talks/parliamentary-model.pdf
Is there any better exploration of the concept out there?
Yeah, you’re not the only one noticing the gap. Hilary and Owen have a paper under review somewhere formalizing it a bit more (I see you’ve linked to some slides Hilary put together on it), so keep an eye out for that.
Here’s a preprint (archive).
Hmm, the paper doesn’t seem to address the question I described in “Is the potential astronomical waste in our universe too small to care about?” To me, this seems to be a central open problem in the moral parliament model, and in moral uncertainty in general (see this version of it, which applies to MEC), and it has strong practical implications, so it’s strange to see so little work from philosophers on it. It doesn’t seem to be addressed in Andrew Sepielli’s PhD thesis or Will MacAskill’s DPhil thesis, and Toby Ord told me there isn’t “anything particularly relevant to that question” in his and Will’s upcoming book on moral uncertainty (although that’s because “the book doesn’t really include any post-2014 thinking at all”).
Wow, this is a great point.
The standard “academic knowledge generation usually isn’t tooled towards focusing on the most important stuff” point probably applies here.
How about fixing the discount rate for all the parliament members? Or treating the discount-rate question as orthogonal to the altruism/egoism question, and having four agents, one for each combination of altruism/egoism and high/low discount rate? I suppose analogous problems could appear in some non-discount-rate form, too.
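To make the four-delegate idea concrete, here’s a toy sketch. Everything in it is made up for illustration (the delegate weights, discount rates, and payoff numbers), and the credence-weighted voting rule is only a crude stand-in for the bargaining the parliamentary model actually envisions:

```python
# Toy sketch of a four-delegate parliament: altruism/egoism crossed with
# high/low discount rates. All numbers here are hypothetical.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Delegate:
    name: str
    credence: float        # weight ("seats") given to this moral view
    altruism: float        # 1.0 = pure altruist, 0.0 = pure egoist
    discount_rate: float   # per-period discount on future payoffs


def delegate_score(d: Delegate, option: Dict) -> float:
    """Score an option as this delegate sees it: a mix of payoffs to self
    and to others, each discounted over time."""
    total = 0.0
    for t, (self_payoff, other_payoff) in enumerate(option["payoffs"]):
        weight = 1.0 / (1.0 + d.discount_rate) ** t
        mixed = (1.0 - d.altruism) * self_payoff + d.altruism * other_payoff
        total += weight * mixed
    return total


def parliament_choice(delegates: List[Delegate], options: List[Dict]) -> str:
    """Each delegate casts its full credence for its favourite option; the
    option with the most weighted votes wins. (A real parliamentary model
    would involve bargaining and vote trading, not just plurality voting.)"""
    votes = {o["name"]: 0.0 for o in options}
    for d in delegates:
        scores = {o["name"]: delegate_score(d, o) for o in options}
        votes[max(scores, key=scores.get)] += d.credence
    return max(votes, key=votes.get)


delegates = [
    Delegate("altruist, patient",   0.25, altruism=1.0, discount_rate=0.00),
    Delegate("altruist, impatient", 0.25, altruism=1.0, discount_rate=0.20),
    Delegate("egoist, patient",     0.25, altruism=0.0, discount_rate=0.00),
    Delegate("egoist, impatient",   0.25, altruism=0.0, discount_rate=0.20),
]

# payoffs: list of (payoff to self, payoff to others) per time period
options = [
    {"name": "spend now",        "payoffs": [(5, 1), (0, 0), (0, 0)]},
    {"name": "invest for later", "payoffs": [(0, 0), (1, 2), (2, 6)]},
]

print(parliament_choice(delegates, options))
```

Fixing the discount rate would amount to giving all four delegates the same `discount_rate`, which is exactly why it side-steps the problem; the question is whether doing so is principled.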
We are hoping to kind-of address the issue in that post in a paper I’m working on with Anders Sandberg—I’ll let you know when we’re ready to share it, if you’d like.
Nice! Seems like a cool paper. One thing that confuses me, though, is why the authors think that their theory’s “moral risk aversion with respect to empirically expected utility” is undesirable. People have weird intuitions about expected utility all the time, and don’t reason about it well in general. See, for instance, how people prefer (even when moral uncertainty isn’t involved) to donate to many charities rather than donating only to the single charity with the highest expected utility. It seems reasonable to call that preference misguided, so why can’t we also call the intuitive objection to “moral risk aversion with respect to empirically expected utility” misguided?
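A toy numerical illustration of the many-charities point (the charity names and expected-value figures are made up, and it assumes roughly constant marginal returns at individual-donor scale):

```python
# Splitting a donation across charities lowers expected impact when one
# charity has a strictly higher expected value per dollar. Numbers are
# hypothetical.
budget = 1000.0                      # dollars to donate
ev_per_dollar = {"charity A": 3.0,   # expected "good done" per dollar
                 "charity B": 2.0,
                 "charity C": 1.5}

# Concentrate everything on the highest expected-value charity.
best = max(ev_per_dollar, key=ev_per_dollar.get)
concentrated = budget * ev_per_dollar[best]

# Split the budget evenly across all three.
split = sum(budget / 3 * ev for ev in ev_per_dollar.values())

print(f"all to {best}: {concentrated:.0f}")   # 3000
print(f"split three ways: {split:.0f}")       # ~2167
```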
fwiw when I donate to many charities in the same cycle, a lot of the reason is for the fuzzies. Probably a similar dynamic is at play for lots of other people too.
I imagine so, but if that’s the reason, it seems out of place in a paper on theoretical ethics.
Ah cool, thanks