This article was the “Classic Forum post” in the EA Forum Digest today. An excellent choice. Though an old post (in EA terms, 2017 is ancient history!), it asks a question that is fundamental to EA. If we want to measure and compare the impact (effectiveness) of two interventions quantitatively, we must multiply the objective impact measured for each group by some factor quantifying the relative moral value of that group, be it insects or future generations or chickens.
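As a purely illustrative sketch (the numbers and weights here are invented for the example, not taken from the post):

$$\text{weighted impact} = (\text{measured impact on the group}) \times w_{\text{group}}$$

So if intervention A averts 1{,}000 human DALYs (with $w_{\text{human}} = 1$) and intervention B averts the equivalent of 1{,}000{,}000 chicken DALYs, then B comes out ahead exactly when $w_{\text{chicken}} > 0.001$. The entire comparison hinges on that one weight, which is the question the post is asking about.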
Since joining EA, I’ve been impressed by how many people in this community actually make these cross-group comparisons, and how often they lead to surprising conclusions, for example in longtermism or animal rights.
At the same time, I would hazard that the vast majority of people in the world today would, in effect, assign “humans who are alive today” an infinitely larger value than animals or future generations. They wouldn’t use those words, but that’s how they’d view it. For instance, they may be all in favour of animal rights, but would they be willing to sacrifice one human life to save one million cows? Most would not. Would they agree to sacrifice 100 people today to save 100 billion people who will live in the 24th century? Many would not.
I struggle with questions like this: answering them one way or the other seems to require a massive amount of confidence that I’m right, and I’m not sure I have that.
So it’s great that we look for opportunities (reducing x-risks, alternative protein, biosecurity, …) that are win/win, but sometimes we’ll be forced to choose. When I think of radical empathy, I don’t just think of the “easy” part, where we recognise the potential for suffering and the importance of quality of life, but also of the difficult part, where we may have to make choices in which one side of the balance holds the lives of real, living human beings and the other side does not.