How confident in your analysis and conclusion do you have to be in order to publish a recommendation? For example, do you believe “better wrong than vague”?
I’m very confident in the conclusions of the research for campaigns, and the bar for publication is substantially higher than for what I post on the EA Forum. I also usually ask many people to review my research for campaigns (see the acknowledgment sections in the reports).
Do you try to caveat to show your degree of confidence?
Yes, I use sensitivity analysis and careful language throughout.
For instance, in my cost-effectiveness analysis I caveat:
“Below we present a very rough, simple, back-of-the-envelope cost-effectiveness analysis (“Fermi estimate”). This model is crude and should not be taken literally. Rather than leaving our assumptions unarticulated and fuzzy, we think it is better to be wrong than vague. Stating assumptions explicitly means they can be questioned and falsified (as the common aphorisms in statistics go: “Truth will sooner come out of error than from confusion” and “All models are wrong, but some are useful”). It also helps us think through relevant considerations and formalize our intuitions. If you disagree with any of the inputs to our model, you can create a copy of our spreadsheet and plug in your own parameters.”
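To make the idea concrete, here is a minimal sketch of what such a spreadsheet-style Fermi estimate might look like in code. Every number, parameter name, and function below is a hypothetical placeholder for illustration, not a figure or model from the report; the point is only that explicit inputs can be swapped out and stress-tested.

```python
# Hypothetical Fermi estimate of a policy campaign's cost-effectiveness.
# All inputs are illustrative placeholders, not numbers from the report.

def cost_per_tonne_averted(budget_usd, success_prob, tonnes_if_success):
    """Expected cost (USD) per tonne of CO2 averted, in expectation."""
    expected_tonnes = success_prob * tonnes_if_success
    return budget_usd / expected_tonnes

# Central guess with made-up inputs.
central = cost_per_tonne_averted(
    budget_usd=1_000_000,          # hypothetical campaign budget
    success_prob=0.05,             # hypothetical chance the campaign changes policy
    tonnes_if_success=10_000_000,  # hypothetical tonnes averted if it succeeds
)
print(f"central estimate: ${central:.2f}/tonne")  # -> $2.00/tonne

# Crude sensitivity sweep: vary the most uncertain input and see how
# much the bottom line moves ("plug in your own parameters").
for p in (0.01, 0.05, 0.20):
    est = cost_per_tonne_averted(1_000_000, p, 10_000_000)
    print(f"success_prob={p:.2f} -> ${est:.2f}/tonne")
```

Because every assumption sits in one named parameter, a reader who disagrees with, say, the success probability can change that single input and immediately see how the conclusion shifts.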
I also use the word “might” about 90 times in the Clean Energy campaign.
But there are some statements even in the report where I’m intentionally wrong for clarity’s sake. For instance, when I write:
“It is a natural impulse for advanced economies like EU countries to prioritize reducing their own domestic emissions (‘clean up your own backyard first’). But by 2040, 75% of all emissions will come from emerging economies such as China and India. Only if advanced economies’ climate policies reduce emissions in all countries will we prevent dangerous climate change. We call this the cool rule: only if all countries reduce their emissions will the planet stay cool.”
The “only if” sentence is clearly wrong on some level: perhaps we can use geoengineering to cool the planet, or maybe emerging economies such as China will solve the issue themselves. However, I feel these possibilities are less important to emphasize because they’re somewhat unlikely, and writing all that out would distract from the central message. By making strong statements such as “only if”, you make your writing and central claims really clear, so that they can be more easily falsified. But some people might disagree and prefer to hedge more.
How easy would it be to find a demonstrably incorrect statement or paragraph in your work?
I think I’m quite careful, but given the length of the report I cannot rule out that there are errors somewhere. I’d be somewhat surprised if they were easy to find, though. So I’ll pay a bug bounty of $20 for any statement that is demonstrably incorrect.
Really interesting questions—thank you!
On the EA Forum, I sometimes don’t hedge my claims excessively, for clarity’s sake. And I sometimes add the epistemic-status disclaimers you’re referring to (“better wrong than vague”, “say wrong things”, “Big, if true”, “strong stances”).
However, I’m very confident in the central claims, because I try to triangulate with multiple lines of evidence, so that the conclusions do not depend on any single piece of data (https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/).
Thanks very much, Hauke, really interesting! I’ll keep an eye out for any bugs in future work ;)