This feels like nitpicking that gives the impression of undermining Singer’s original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma surgery is the most effective; rather, he chose the example that gives the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both concern blindness, so there are no hard comparisons across kinds of suffering or disability).
I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer’s purpose at TED isn’t to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like “We prevent 1 year of blindness for every 3 surgeries done which on average cost...” perfectly compelling, audience members who aren’t yet persuaded simply tune out. After all, it’s just more math talk, and they are interested in emotional impact. The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact of choosing to help a blind person in the US get a dog rather than helping many people in poor countries avoid blindness.
Now, it’s important that we don’t simplify in misleading ways, but even with the qualifications here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes that a dog provides, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.
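To make the “orders of magnitude” claim concrete, here is a minimal back-of-the-envelope sketch. The dollar figures are illustrative assumptions on my part (ballpark numbers, not Singer’s or GiveWell’s exact figures), and I pessimistically borrow the “3 surgeries per case actually averted” ratio from the caveat quoted above:

```python
# Rough sanity check of the "orders of magnitude" claim.
# All figures are ILLUSTRATIVE ASSUMPTIONS, not Singer's or
# GiveWell's exact numbers.

GUIDE_DOG_COST = 40_000         # assumed cost to train one guide dog (USD)
SURGERY_COST = 100              # assumed cost of one trachoma surgery (USD)
SURGERIES_PER_CASE_AVERTED = 3  # pessimistic: not every surgery averts blindness

cost_per_case_averted = SURGERY_COST * SURGERIES_PER_CASE_AVERTED
ratio = GUIDE_DOG_COST / cost_per_case_averted

print(f"Cost to avert one case of blindness: ${cost_per_case_averted}")
print(f"One guide dog costs as much as ~{ratio:.0f} cases of blindness averted")
# Even under these deliberately unfavorable assumptions the ratio is
# ~100x, i.e. two orders of magnitude.
```

Even if you halve the dog figure and double the surgery figure, the gap stays well above 10x, which is all the big-picture claim needs.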
This isn’t to say that your point isn’t important for people inside EA making estimates, or for GiveWell analyses and the like. I’m just pointing out that it’s important to distinguish the kind of thing being done at a TED talk like this from what GiveWell does. So long as the big picture survives whatever research people do after they leave the TED talk (dogs >>>> trachoma surgery), it’s a victory.
I think there is truth in what you said, but I also have some disagreements:
“The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact”
That’s a dangerous line of reasoning. If we can’t make a point with honest numbers, we shouldn’t make the point at all. If we use bogus numbers to prove whatever opinion we already hold, we might fail to notice when we are wrong.
What’s more, many people who become EAs after hearing such TED talks already think in numbers. They keep believing the same numbers afterwards and are more likely to dismiss other cause areas because of them. I myself once mocked a co-worker for making an effort to recycle when the same effort could have so much more impact for people in Africa. Mocking him was wrong in any case, but my reasoning was probably also flawed because it leaned on those numbers.
Also, I’m afraid that some doctor will stand up during an EA presentation and say:
“You kids pretend to be visionaries, but in reality you don’t have the slightest idea what you are talking about. Firstly, it’s impossible to cure trachoma-induced blindness. Secondly [...] You should go back to playing in your sandboxes instead of preaching to adults about how to solve real-world problems.”
Also, I’m afraid that the doctor might be partially right.
If we’re ignoring getting the numbers right and instead focusing on emotional impact, we have no claim to the term “effective”. This sort of reasoning is why the epistemics around do-gooding are so bad in the first place.
I hate to admit it, but I think there is a real utilitarian trade-off between marketability and accuracy. Although I’m thrilled that the EA movement prides itself on being as factually accurate as possible, and I believe the core EA movement absolutely needs to stick with that, there is a case to be made that an exaggerated truth can be an important teaching tool in helping non-EAs understand why EAs do what they do.
It seems likely that Peter Singer’s example has had a net-positive impact, despite the inaccuracies. I was originally drawn to EA by this very example, among a few of his others. I’ve since been donating at least 10% and have been active in EA projects. I’m sure I’m not the only one.
We just have to be careful that the integrity of the EA movement isn’t compromised by inaccurate examples like this. But I think anyone who goes far enough with EA to learn that this example is inaccurate, or who even cares to check, will most likely already have converted to an EA mindset, which is Mr. Singer’s end goal.