Suppose I have a prior belief that it is valuable to convert people to Christianity. I then get an evidence-backed estimate saying that the Christian teachings are probably false. That contradicts my belief that I should try to convert people to Christianity. Should that count as a reason to reject the estimate? Obviously not. To reject the estimate you need evidence, not merely the observation that it is incompatible with your moral views. The same holds for the veganism question.
“Beliefs about facts, such as these estimates, should be based on evidence”
“Beliefs about facts aren’t just based on evidence, because you have priors before you have evidence.”
“To reject the estimate you need evidence”
I’m not sure from this sequence what in my comment you actually disagree with; you appear to have simply chosen a different analogy and then restated your position. If the evidence comes in strongly against your prior, do you disagree that that is, on its own, a reason to be sceptical?
I also think you’re drawing a false dichotomy between ‘moral views’ and ‘facts’. Believing animals matter is generally considered a moral view, but many animal welfare advocates believe it almost directly follows from factual observations. Believing it’s valuable to convert people to x is mostly a factual view, but tends to be grounded in moral views around what makes x good in the first place.
http://lesswrong.com/lw/ig/i_defy_the_data/

Someone once presented me with a new study on the effects of intercessory prayer (that is, people praying for patients who are not told about the prayer), which showed 50% of the prayed-for patients achieving success at in-vitro fertilization, versus 25% of the control group. I liked this claim. It had a nice large effect size. Claims of blatant impossible effects are much more pleasant to deal with than claims of small impossible effects that are “statistically significant”.
So I cheerfully said: “I defy the data.”
...
Oh, and the prayer study? Soon enough we heard that it had been retracted and was probably fraudulent. But I didn’t say fraud. I didn’t speculate on how the results might have been obtained. That would have been dismissive. I just stuck my neck out, and nakedly, boldly, without excuses, defied the data.
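The stance in the quote can be made concrete with a toy Bayesian calculation. All numbers below are illustrative assumptions (100 patients per arm and a 1-in-10^20 prior are not from the study or the discussion); the point is only that even a dramatic result yields a finite likelihood ratio, which a sufficiently small prior can swamp.

```python
# Toy Bayesian reading of "I defy the data" (all numbers illustrative).
# A striking result (50% vs 25% success) gives a finite likelihood ratio,
# which cannot overcome a sufficiently small prior.

from math import comb

def binomial_likelihood(successes, trials, p):
    """Probability of the observed number of successes under success rate p."""
    return comb(trials, successes) * p**successes * (1 - p)**(trials - successes)

# Suppose 100 patients in the treatment arm, of whom 50 succeeded.
# Compare "prayer works" (p = 0.5) against "no effect" (p = 0.25).
lr = binomial_likelihood(50, 100, 0.5) / binomial_likelihood(50, 100, 0.25)

prior_odds = 1e-20          # illustrative prior odds that intercessory prayer works
posterior_odds = prior_odds * lr
print(f"likelihood ratio ~ {lr:.2e}, posterior odds ~ {posterior_odds:.2e}")
```

The likelihood ratio works out to (4/3)^50, roughly two million to one in favour of the effect, yet the posterior odds remain around 10^-14 against: defying the data is exactly what the prior prescribes.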
Yes, the analogy is a re-statement of my position, using another example. I hoped that by using the Christian example, it would become clear that it’s not right to reject evidence because one is morally uncomfortable with it. In my view, that amounts to political bias or wishful thinking.
I definitely do believe that there is a dichotomy between moral views and facts, and I think that dichotomy is an integral part of the scientific world-view. But leaving that huge philosophical issue aside, I think that if effective altruists accept these kinds of arguments, we will be far less effective. Faced with our cost-effectiveness estimates of, e.g., the Against Malaria Foundation vs. ALS charities, non-EAs could always say that those estimates contradict their preference to give to ALS, or whatever charity they feel like giving to, and that they therefore choose to reject the EA cost-effectiveness estimates.
Another pragmatic argument for why we should strive to be as objective as possible is that unbiased, objective science seems to have been very effective at acquiring new knowledge, whereas politicized, biased science has not, as Noah Smith points out.
This is actually an important question. In my view, the notion that you can’t reject facts that you feel uncomfortable with for moral or political reasons is an important tenet of Effective altruism.
You’re mixing several claims here. One is about moral views versus facts, which I agree is a large philosophical discussion. Let’s put that to one side.
Another claim is about the proper role of priors. If I’m reading you correctly you think there is no role for priors in evaluating claims whatsoever. That’s a surprising position, to put it mildly, so I want to make sure I am reading you correctly before engaging with it. I still don’t know whether I am.
Then there’s a number of claims about how bad political bias is and why we should try to avoid it, which I agree with and consider at best tangential to the discussion at hand. Striving for objectivity is not the same as ignoring common sense.
No, I’m not ignoring priors. If I have a strong prior belief that my die is unbiased, clearly I shouldn’t give that up just because I roll three sixes in a row (although I should adjust it slightly downwards). Factual priors should influence your factual beliefs.
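The die example can be sketched numerically. This is a minimal illustration assuming a toy two-hypothesis model (a fair die vs. one biased to roll six half the time) and an illustrative 99% prior; none of these numbers come from the discussion.

```python
# Toy model: the die is either fair (P(six) = 1/6) or biased toward six
# (P(six) = 1/2). The 99% prior on fairness is an illustrative assumption.

def posterior_fair(prior_fair, sixes_rolled):
    """Posterior probability the die is fair after observing only sixes."""
    p_data_fair = (1 / 6) ** sixes_rolled
    p_data_biased = (1 / 2) ** sixes_rolled
    numerator = p_data_fair * prior_fair
    denominator = numerator + p_data_biased * (1 - prior_fair)
    return numerator / denominator

prior = 0.99  # strong prior that the die is fair
print(round(posterior_fair(prior, 3), 3))  # prints 0.786
```

Three sixes in a row lower the probability from 0.99 to about 0.79: the prior is adjusted downwards, but the die is still probably fair, exactly as the comment describes.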
What I am saying is that you shouldn’t let your moral views influence your factual beliefs, and that doing so amounts to bias. Hence the whole bias/objectivity issue is very relevant here.