I’m curating this post. I see this and @NickLaing’s post as the best in class on the topic of moral weights from the AW vs GH debate week, so I’m curating them as a pair[1].
I was impressed by titotal doing the fairly laborious work of replicating everyone’s calculations and finding the points where they diverged. As discussed by the man himself, there were lots of different numbers flying around, and even if AW always comes out on top it matters by how much:
> You might think this doesn’t matter, but the difference between 1500 times and 35 times is actually quite important: if you’re at 1500 times, you can disagree with a few assumptions a little and still be comfortable in the superiority of AW. But if it’s 35 times, this is no longer the case, and as we shall see, there are some quite controversial assumptions that these models share.
See also the other curation comment.