Love this!
My recollection is that when you create a Goodreads account, you are asked whether you have read a few classics (like 1984, and possibly Thinking, Fast and Slow).
This can create a selection bias, as users will say ‘yes’ even if they read the book years ago. They’re less likely, however, to go back and add other books they read years ago.
Possible that this explains the ‘most read’ books?
Putting aside any debate over the relative values you’ve assigned here, I think you might be making an error in the way you translate relative moral harms into a dollar value, using the cost of extending a person’s life through donation to GiveWell’s charities.
To give an absurd example, the ‘harm’ caused if I were to punch a stranger in the face (assuming that I hurt them, but don’t otherwise cause any permanent damage) is a fraction of the harm caused if I were to take a year off that person’s life (which you have said can be valued at $100). Let’s say punching someone in the face is at most 1/10th as bad as taking a year off their life, which would value the punch’s harm at no more than $10.
However, even if I were to get more than $10 of enjoyment out of punching that person, I don’t think it follows that I’m morally permitted to do so.
One reason is that although, at the margin, the cheapest available method for extending a human life by a year costs $100, I don’t think that necessarily reflects the true value of a year of human life for these purposes. The price is likely to be a product of market inefficiencies (noting, for example, that in the developed world, people regularly spend many times that amount in order to extend life by a year). Also, I would certainly pay more than $100 to extend my life by a year, and no doubt so would the person being punched. It just happens that GiveWell has identified some unusually efficient programs for extending human life. Those programs do not reflect the market price, at equilibrium, for a year of human life.
I’d like to put more thought into this, but I’m presently convinced you’re making a mistake with this move.
Secondly, I think it’s wrong to conclude that something is not a ‘serious’ moral wrong just because the harm it causes is a fraction of the harm caused by ending a human life. Perhaps ending a human life prematurely is very high on the moral spectrum, such that something 1/100th as bad is still quite a bad thing from a moral and utilitarian perspective.
Anyway, it’s a good debate to be having, even if I don’t reach the same conclusions you do.
(P.S. First post on the EA Forum, so apologies if I’m getting any etiquette wrong or rehashing ideas that have previously been debated and resolved.)