The keywords in the academic discussion of this issue are the “Archimedean principle” (I forget whether Archimedes was applying it to weight or distance or something else, but it’s the general term for the assumption that, for any two quantities you’re interested in, a finite number of one is sufficient to exceed the other; there are also various non-Archimedean number systems, non-Archimedean measurement systems, and non-Archimedean value theories) and “lexicographic” preference (the idea is that when you are alphabetizing things, as in a dictionary/lexicon, any word that begins with an M comes before any word that begins with an N, no matter how many Y’s and Z’s the M word has later and how many A’s and B’s the N word has later; similarly, some people argue that when you are comparing two states of affairs, any state of affairs with 1,000,001 living people is better than any state of affairs with 1,000,000 living people, no matter how impoverished the people in the first situation are and how wealthy the people in the second situation are). I’m very interested in non-Archimedean measurement systems formally, though I’m skeptical that they are relevant for value theory, and I’m skeptical of the arguments for any lexicographic preference for one value over another; but if you’re interested in these questions, those are the terms you should search for. (And you might check out PhilPapers.org for these searches: it indexes all of the philosophy journals that I’m aware of, and many publications that aren’t primarily philosophy.)
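If a concrete illustration helps, here is a minimal Python sketch of the contrast. It is a toy example of my own, not drawn from the literature above; the names `archimedean_exceeds` and `lex_better` are just illustrative. Under an Archimedean comparison, enough copies of a small quantity eventually exceed a large one; under a lexicographic comparison, the first dimension dominates no matter what happens in later ones.

```python
from itertools import zip_longest

def archimedean_exceeds(a: float, b: float) -> int:
    """Archimedean property: for any positive quantities a and b, some
    finite multiple n of a exceeds b. Returns the smallest such n."""
    assert a > 0 and b > 0
    n = 1
    while n * a <= b:
        n += 1
    return n

def lex_better(x: tuple, y: tuple) -> bool:
    """Lexicographic preference: the first dimension dominates; later
    dimensions only break ties, however large they are. (Python tuples
    already compare this way; it is spelled out here for clarity.)"""
    for xi, yi in zip_longest(x, y, fillvalue=0):
        if xi != yi:
            return xi > yi
    return False  # equal on every dimension

# Archimedean: enough tiny benefits eventually add up past a big harm.
print(archimedean_exceeds(0.001, 5.0))  # -> 5001

# Lexicographic: profiles are (living people, total wealth). More people
# always wins, no matter how impoverished they are.
print(lex_better((1_000_001, 0), (1_000_000, 10**9)))  # -> True
```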
Thanks Kenny! I think this is the main bias among EAs: we so easily add things up in our minds (e.g., summing happiness across individuals) that we don’t stop to realize there is no “cosmic” place where all that happiness is occurring. There are just individual minds.
I appreciate Kenny’s comments pointing toward potentially relevant literature, and agree that you could be a utilitarian without fully biting this bullet … but as far as I can tell, attempts to do so have enough weird consequences of their own that I’d rather just bite the bullet. This dialogue gives some of the intuition for being skeptical of some things being infinitely more valuable than others.
>In theory, any harm can be outweighed by something that benefits a large enough number of persons, even if it benefits them in a minor way.
Holden, do you know of any discussion that doesn’t rest on that assumption? That’s where I get off the train:
https://www.mattball.org/2021/09/why-i-am-not-utilitarian-repost-from.html
Thanks