I haven’t read Will’s book, so I’m not entirely sure what your background knowledge is.
Are you unsure about how to compare two different cause areas? For instance, do you accept that it’s better to save the lives of 10 children than to fund a $30,000 art museum renovation project, but are unsure whether saving the lives of 10 children or de-worming 4,500 children is better?
In this case I suggest looking at QALYs and DALYs, which try to quantify the number of healthy years of life saved, given estimates of how bad various diseases and disabilities are. GiveWell has a few reservations about DALYs, and uses its own weighting/cost-effectiveness model. On the linked page you can look at the spreadsheet they use to analyze different charities and interventions, and change the weights to fit your own moral weights, though I'd recommend doing some research before you pick a number out of a hat.
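To make the comparison concrete, here's a toy sketch of the kind of calculation those spreadsheets do. Every number below is invented for illustration; real moral weights and cost figures should come from GiveWell's own models or the Global Burden of Disease estimates.

```python
# Toy comparison of two interventions by cost per DALY averted.
# All figures are hypothetical placeholders, not real charity data.

def cost_per_daly(total_cost, people_reached, dalys_averted_per_person):
    """Dollars spent per disability-adjusted life year averted."""
    return total_cost / (people_reached * dalys_averted_per_person)

# Hypothetical bednet-style intervention: $30,000 reaches 6,000 people,
# averting 0.05 DALYs each on average.
nets = cost_per_daly(30_000, 6_000, 0.05)

# Hypothetical deworming intervention: $30,000 deworms ~66,000 children
# at $0.45 each, averting 0.005 DALYs per child.
deworming = cost_per_daly(30_000, 30_000 / 0.45, 0.005)

print(f"nets:      ${nets:.0f} per DALY averted")
print(f"deworming: ${deworming:.2f} per DALY averted")
```

The point is that once you've settled on moral weights, the comparison between cause areas reduces to arithmetic like this; the hard part is choosing the weights, which is where the research comes in.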
If it’s more like “Deworm the World says its cost-effectiveness is $0.45 per child dewormed. How do I know this is actually an accurate estimate?”, then we can just go to GiveWell and read their documentation. The reason GiveWell is so useful is that they are both transparent and very evidence-focused. In this case, GiveWell provides a summary of the sources for their review and in-depth information on exactly what those sources gave them, including transcripts of interviews and conversations with staff, plus general notes on site visits. All of this can be reached through a series of links from their main charity page. This heavy transparency means they can likely be trusted on the facts. See the above paragraph for sources on their analysis.
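That transparency is what lets you sanity-check a headline number yourself: a published cost per child should be reproducible from the documented line items. The budget breakdown below is entirely hypothetical, not GiveWell's or Deworm the World's actual figures; it only illustrates the shape of the check.

```python
# Sketch of the sanity check that transparent documentation makes
# possible: recompute a headline "cost per child dewormed" from its
# parts. These per-child line items are invented placeholders.

budget = {
    "drugs": 0.07,       # dollars per child (hypothetical)
    "delivery": 0.22,
    "monitoring": 0.10,
    "overhead": 0.06,
}

cost_per_child = sum(budget.values())
print(f"${cost_per_child:.2f} per child dewormed")
```

If the parts don't add up to the headline figure, or a line item has no documented source, that's exactly the kind of discrepancy GiveWell's published materials let you catch.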
If your confusion is more along the lines of “OK, I understand intellectually that it’s better to save the lives of 10 children than to give $30,000 for one kid’s wish via the Make-A-Wish Foundation, but my gut disagrees, and I can’t emotionally conceptualize that saving the 10 children is at least 10x better than fulfilling one child’s wish,” then understand that this is a pretty common experience, and you are not alone. It takes a lot of empathy and a lot of experience with numbers to even get close to Derek Parfit levels of caring about abstract suffering [1]. Tackling this problem will be different for everyone, so I can’t give any advice except to say that while your gut is good for fast, simple decisions (for instance, swerving out of the way before you hit an old lady while driving your car), it is not so good for complex ones.
It is easy to aim and throw a baseball using only your gut, but it is near impossible to land a rocket on the moon using only your gut; for that we need theories of gravity. Some smart people who’ve spent their entire adult lives researching and living in astrophysics (or have played Kerbal Space Program) can understand theories of gravitation intuitively, but even they will still reach for the numbers when given the option. In the same way, it’s easy to understand at a gut level that you should save a kid from drowning, but much harder to understand at a gut level that saving the lives of 10 children is better than making one very happy. But we can set down moral theories to help us, and we can try to get an intuitive feel for why we should listen to those theories.
Personally, I gained a lot of gut-level understanding from the “Mere Goodness” section of the Sequences: Fake Preferences, Value Theory, and Quantified Humanism. But not everyone likes the Sequences, and they may require more background if you haven’t read the preceding sequences.
[1] Derek Parfit reportedly broke down in tears in the middle of an interview for seemingly no reason. When asked why, he said it was the very idea of suffering that had made him cry.
I originally thought this was Toby Ord, but Thomas Kwa corrected me in the below comment.
The relevant passage, from the interviewer’s profile of Parfit (linked in the comment below): “As for his various eccentricities, I don’t think they add anything to an understanding of his philosophy, but I find him very moving as a person. When I was interviewing him for the first time, for instance, we were in the middle of a conversation and suddenly he burst into tears. It was completely unexpected, because we were not talking about anything emotional or personal, as I would define those things. I was quite startled, and as he cried I sat there rewinding our conversation in my head, trying to figure out what had upset him. Later, I asked him about it. It turned out that what had made him cry was the idea of suffering. We had been talking about suffering in the abstract.”
The person who broke down in tears during an interview is actually Derek Parfit, also an effective altruist. Source: http://bostonreview.net/books-ideas-mccoy-family-center-ethics-society-stanford-university/lives-moral-saints
Thanks for the correction. Idk why I thought it was Toby Ord.