definitely resonates with the empirical EA view of power-law returns, which I was surprised you didn’t mention.
Thanks a lot @Davidmanheim! I meant to mention this, but I think it got cut when I revised the second section for clarity, alas.
1. The version of non-naturalist moral realism that the divergence argument requires seems both very strong and strange to me. It assumes that the true moral code is unlike mathematics on a realist view, where the truths are accessible through reflection and would be a natural conclusion for anyone who cared to look.
Thanks for this. I’d love to hear non-naturalist moral realists talk about how they think moral facts are epistemically accessible, if it’s not just luck. (Some philosophers do explicitly assume it’s luck.) I think the problem here is extremely hard, including for mathematics, and my own view on mathematics is closest to Millian empiricism: we learn basic mathematics (e.g. arithmetic) by observing physics, and for more advanced mathematics we freely choose axioms that we test against reality for usefulness and explanatory power.
The best philosopher writing on the epistemology of both mathematics and ethics is Justin Clarke-Doane, who combines a form of pluralist realism about mathematics with expressivism about ethics.
However we access moral facts, I’d expect my points about dependency on initial conditions to generalise.
2. “You could accept diminishing returns to value in utility… but you’re unlikely to be a longtermist, laser focused on extinction risk if you do.” I think this is false under the view of near-term extinction risk held by most of those who seem concerned about AI extinction risk, or even under varieties of the hinge-of-history view on which we are affected in the near term by longtermist concerns.
True, you could accept this moral view and also accept that:
- x-risk is so high and so tractable that reducing it is better than GiveWell on relatively near-termist (NT) grounds, and
- animals don’t matter much, so animal causes don’t beat NT x-risk.
And then you’d avoid the objection.
Or you could think utility doesn’t matter much after you’ve got, say, 10^12 humans, and then x-risk still looks good but making sure the right future happens looks less important.
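To make that capped version concrete, a minimal sketch (the cap $C$, and pinning it to $10^{12}$ good lives, are purely illustrative assumptions on my part):

$$V(\text{future}) \;=\; \min\!\Big(\sum_i u_i,\; C\Big), \qquad C \approx \text{the value of } 10^{12} \text{ good lives},$$

where the $u_i$ are individual lifetime welfares. Extinction forfeits essentially all of $V$, so reducing x-risk still looks very good, but once a future clears the cap, steering between different large futures adds little, which is the sense in which getting the “right” future looks less important.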
In general I think I was too quick here, good catch.
I think averageists may actually also care about the long-term future a lot, and it may still have an MPL if they don’t hold (rapid) diminishing returns to utility WITHIN lives (i.e. it is possible for the average life to be a lot worse or a lot better than today). Indeed, given (potentially) plausible views on interspecies welfare comparisons, and how bad the lives of lots of non-humans seem today, this just does seem to be true.
Now, it’s not clear they shouldn’t be at least a little more sympathetic to us converging on the ‘right’ world (since it seems easier), but it doesn’t seem like they get out of much of the argument either.
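To spell that out with a rough formal sketch (notation mine, not from the post): the averageist values a world at

$$V_{\text{avg}} \;=\; \frac{1}{N}\sum_{i=1}^{N} u_i,$$

which is insensitive to the population size $N$ but very sensitive to how good or bad typical lives are. If within-life welfare isn’t tightly bounded near today’s levels, and if many future welfare subjects could be non-humans with very low $u_i$, the long-run average can swing enormously in either direction, so averageists still have a great deal at stake in how the future goes.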
Nice point. I shouldn’t have picked averageism as the most extreme version of this view. It would have been more apt to pick a “capped” model where the value of additional utility (or utility of a specific type) becomes zero after enough of it has been achieved.

Yeah, I might be wrong, but something like Larry Temkin’s model might work best here (it’s been a while since I read it, so I may be getting it wrong).
“I’d love to hear non-naturalist moral realists talk about how they think moral facts are epistemically accessible...”
The lack of an answer to that is a lot of the reason I discount the view as either irrelevant or not effectively different from moral non-realism.
“True, you could accept this moral view....”
Thanks!
And as I noted on the other post, I think there’s a coherent argument that if we care about distinct moral experiences in some way, rather than just the sum, we get something like a limited effective utility, not at 10^12 people specifically, but plausibly somewhere far less than a galaxy full.
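One rough way to formalise “caring about distinct moral experiences rather than just the sum” (the functional form here is purely illustrative):

$$V \;=\; \sum_{t \in T} u_t \, f(n_t), \qquad f \text{ increasing but bounded, e.g. } f(n) = 1 - e^{-n},$$

where $T$ is the set of distinct experience-types, $n_t$ is how many times type $t$ is instantiated, and $u_t$ is its value. Because further duplicates of the same experience add less and less, total value effectively saturates once the achievable types are well covered, plausibly at a scale far smaller than a galaxy full of people.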