Mid-career climate science researcher in academia
Previously used display name “Pagw”
Whilst policymakers have a substantial role in drafting the SPM, I’ve not generally heard scientists complain about political interference in writing it. Some heavy fossil fuel-producing countries have tried removing text they don’t like, but didn’t come close to succeeding. The SPM has to be based on the underlying report, so there’s quite a bit of constraint. I don’t see anything to suggest the SPM differs substantially from researchers’ consensus. The initial drafts by scientists should be available online, so it could be checked what changes were made by the rounds of review.
When people say things are “politicized”, it indicates to me that they have been made inaccurate. I think it’s a term that should be used with great care regarding the IPCC, since giving people the impression that the reports are inaccurate or political gives them reason to disregard them.
I can believe the no adaptation thing does reflect the literature, because impacts studies do very often assume no adaptation, and there could well be too few studies that credibly account for adaptation to do a synthesis. The thing to do would be to check the full report to see if there is a discrepancy before presuming political influence. Maybe you think the WGII authors are politicised—that I have no particular knowledge of, but again climate impacts researchers I know don’t seem concerned by it.
“IPCC reports are famously politicized documents”
Why do you say that? It’s not my impression when it comes to physical changes and impacts. (Not so sure about the economics and mitigation side.)
Though I find the “burning embers” diagrams like the one you show hard to interpret as what “high” risk/impact means doesn’t seem well-defined and it’s not clear to me it’s being kept consistent between reports (though most others seem to love them for some reason...).
Thanks. OK, so currently the situation is one of arguing for legislation to be proposed rather than there being anything to vote on yet?
Are there particular “key legislative changes” that this could help achieve, or are they hypothetical at present?
“At a certain point, we just have to trust the peer-review process”
Coming here late, found it an interesting comment overall, but just thought I’d say something re interpreting the peer-reviewed literature as an academic, as I think people often misunderstand what peer review does. It’s pretty weak and you don’t just trust what comes out! Instead, look for consistent results being produced by at least a few independent groups, without there being contradictory research (researchers will rarely publish replications of results, but if a set of results don’t corroborate a single plausible theoretical picture, then something is iffy). (Note it can happen for whole communities of researchers to go down the wrong path, though—it’s just less likely than for an individual study.) Also, talk to people in the field about it! So there are fairly low-cost ways to make better judgements than believing what one researcher tells you. The scientific fraud cases that I know of involved results from just one researcher or group, and sensible people would have had a fair degree of scepticism without further corroboration. Just in case anyone reading this is ever in the position of deciding whether to allocate significant funding based on published research.
“Science relies on trust, so it’s relatively vulnerable to intentionally bad, deceptive actors”
I don’t think science does rely on trust particularly highly, as you can have research groups corroborating or casting doubt on others’ research. “Relatively” compared to what? I don’t see why it would be more vulnerable to bad actors than most other things humans do.
A very interesting summary, thanks.
However I’d like to echo Richard Chappell’s unease at the praising of the use of short-term contracts in the report. These likely cause a lot of mental health problems and will dissuade people who might have a lot to contribute but can’t cope with worrying about whether they will need to find a new job or even career in a couple of years’ time. It could be read as a way of avoiding dealing with university processes for firing people—but then the lesson for future organisations may be to set up outside a university structure, and have a sensible degree of job security.
Thanks, it’s good to know it’s had input from multiple knowledgeable people. I agree that this looks like a good thing even if it’s implemented imperfectly!
Thanks for putting together the doc.
For the suggested responses, are they informed by expertise or based on a personal view? This would be useful to know where I’m not sure about them. E.g. for the question on including images, I wondered if they could be misleading if they show animals (as disease and other health problems aren’t very visible, perhaps leading people to erroneously think “those animals look OK to me” or similar).
I also wonder if there’s a risk from this that products get labelled as “high” welfare when the animals still suffer overall, reducing impetus for further reform. I think the scheme would still be good, but I wonder if there’s scope to add an argument that labels like “high” should be reserved only for cases where welfare is independently assessed to indeed be probably positive and high.
the second most upvoted comment (27 karma right now) takes me to task for saying that “most experts are deeply skeptical of Ord’s claim” (1/30 existential biorisk in the next 100 years).
I take that to be uncontroversial. Would you be willing to say so?
I asked because I’m interested—what makes you think most experts don’t think biorisk is such a big threat, beyond a couple of papers?
I guess it depends on what the “correct direction” is thought to be. From the reasoning quoted in my first post, it could be the case that as the study result becomes larger the posterior expectation should actually reduce. It’s not inconceivable that, as we see the estimate go to infinity, we should start reasoning that the study is so ridiculous as to be uninformative, and so the posterior update becomes smaller. But I don’t know. What you say seems to suggest that Bayesian reasoning could only do that for rather specific choices of likelihood functions, which is interesting.
It’s a potential solution, but I think it requires the prior to decrease quickly enough with increasing cost effectiveness, and this isn’t guaranteed. So I’m wondering is there any analysis to show that the methods being used are actually robust to this problem e.g. exploring sensitivity to how answers would look if the deworming RCT results had been higher or lower and that they change sensibly?
A document that appears to give more info on the method used for deworming is here, so perhaps that can be built on—but from a quick look it doesn’t seem to say exactly what shape is being used for the priors in all cases, though they look quite Gaussian from the plots.
Hmm, it’s not very clear to me that it would be effective at addressing the problem—it seems a bit abstract as described. And addressing Pascal’s mugging issues seems like it potentially requires modifying how cost-effectiveness estimates are done, i.e. modifying one component of the “cluster”, rather than it just being a cluster vs sequence thinking matter. It would be good to hear more about how this kind of thinking is influencing decisions about giving grants in actual cases like deworming, if it is being used.
Something I’ve wondered is whether GiveWell has looked at whether its methods are robust against “Pascal’s mugging” type situations, where a very high estimate of expected value of an intervention leads to it being chosen even when it seems very implausible a priori. The deworming case seems to fit this mould to me somewhat—an RCT finding a high expected impact despite no clear large near term health benefits and no reason to think there’s another mechanism to getting income improvements (as I understand it) does seem a bit like the hypothetical mugger promising to give a high reward despite limited reason to expect it to be true (though not as extreme as in the philosophical thought experiments).
Actually, doing a bit of searching turned up that Pascal’s mugging has been discussed in an old 2011 post on the GiveWell blog here, but only abstractly and not in the context of any real decisions. The post seems to argue that past some point, based on Bayesian reasoning, “the greater [the ‘explicit expected-value’ estimate] is, the lower the expected value of Action A”. So by that logic, it’s potentially the case that had the deworming RCT turned up a higher, even harder to believe estimate of the effect on income, a good evaluation could have given a lower estimate of expected value. Discounting the RCT expected value by a constant factor that is independent of the RCT result doesn’t capture this. (But I’ve not gone through the maths of the post to tell how general the result is.)
The post goes on to say ‘The point at which a threat or proposal starts to be called “Pascal’s Mugging” can be thought of as the point at which the claimed value of Action A is wildly outside the prior set by life experience (which may cause the feeling that common sense is being violated)’. Maybe it’s not common sense being violated in the case of deworming, but it does seem quite hard to think of a good explanation for the results (for an amateur reader like me anyway). Has any analysis been done on whether the deworming trial results should be considered past this point? It seems to me that that would require coming up with a prior estimate and checking that the posterior expectation does behave sensibly as hypothetical RCT results go beyond what seems plausible a priori. Of course, thinking may have evolved a lot since that post, but it seems to pick up on some key points to me.
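The non-monotonic behaviour described above can be illustrated with a toy Bayesian model (my own sketch, with invented numbers—this is not GiveWell’s actual method): suppose the study estimate is either “informative” about the true effect, or “bogus” and unrelated to it. With a plain normal prior and normal likelihood, the posterior mean only ever grows with the estimate; but once a bogus-study possibility is included, very large estimates mainly raise the probability that the study is bogus, and the posterior expectation falls back toward the prior:

```python
import math

# Toy model (all parameters invented for illustration): the true effect X has
# prior X ~ Normal(prior_mu, prior_sd). The study estimate Y is either
# "informative" (Y ~ Normal(X, s)) or "bogus" (Y ~ Normal(0, B), unrelated
# to X), with prior probability p_bogus of the latter.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_mean(y, prior_mu=1.0, prior_sd=1.0, s=1.0, B=50.0, p_bogus=0.1):
    # Marginal likelihood of y if the study is informative:
    # y ~ Normal(prior_mu, sqrt(prior_sd^2 + s^2))
    m_info = normal_pdf(y, prior_mu, math.sqrt(prior_sd**2 + s**2))
    # Marginal likelihood of y if the study is bogus.
    m_bogus = normal_pdf(y, 0.0, B)
    # Posterior probability that the study is informative.
    w = (1 - p_bogus) * m_info / ((1 - p_bogus) * m_info + p_bogus * m_bogus)
    # Conditional on an informative study, standard conjugate normal update.
    post_mu_info = (prior_mu / prior_sd**2 + y / s**2) / (1 / prior_sd**2 + 1 / s**2)
    # Conditional on a bogus study, the posterior is just the prior.
    return w * post_mu_info + (1 - w) * prior_mu

# The posterior mean first rises with the estimate y, then falls back toward
# the prior mean as y becomes implausibly large:
for y in [1.0, 3.0, 5.0, 10.0, 30.0]:
    print(y, round(posterior_mean(y), 2))
```

This matches the point above: whether the posterior expectation eventually decreases depends on the shape of the likelihood and prior, and a constant discount factor applied to the RCT estimate can’t reproduce it.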
It looks like >$10M was given by GiveWell to deworming programs in 2023, and from what I can tell a large proportion of funds given to the “All Grants” fund went to this cause area, so it does seem quite important to get the reasoning here correct. Since learning about the issues with the deworming studies, I’ve wondered whether donations to this cause can currently make sense—as an academic, my life experience tells me not to take big actions based on results from individual published studies! And this acts as a barrier to feeling comfortable with donating to the “All Grants” fund for me, even though I’d otherwise like to hand over more of the decision-making to GiveWell.
What good solutions are there for EAs leaving money to charity in wills, in terms of getting them legally correct but not incurring large costs?
I’ve found this 2014 forum post that looks to have good info but many of the links no longer work—for example, it has a broken link to a form for getting a free will—does a resource like that still exist somewhere?
There’s also the GWWC bequests page. When I tried their “tool”, it directed me to an organisation called FareWill—has anyone used them and found them to give a good result?
I get the impression that the low-cost will services out there are based on templates for leaving assets to family and friends and aren’t so well suited to having charities as the main beneficiaries—in particular, including clauses for what to do if the charities no longer exist and some broader instruction needs to be given (I tried freewills.co.uk, but it didn’t produce something suitable). Has anyone found a will-writing service that worked well at a reasonable cost? Or is using a solicitor the recommended way in these cases, and am I wrong to think that would cost hundreds of pounds? [Edit to add—I live in England, so info relevant for there is particularly welcome.]
Edit to add some keywords for searching, as someone pointed out to me that searching for “will” brings up lots of other things! Keywords: testament, writing will, leave money to charity.
We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim
How is it being decided that “most experts” think this? I took a look and part 10 referenced two different papers with a total of 7 authors and a panel of four experts brought together by one of those authors—it doesn’t seem clear to me from this that this view is representative of the majority of experts in the space.
Harvard Health says that avoiding infection is part of strengthening one’s immune system
I was intrigued so looked at the link. It has heading “Healthy ways to strengthen your immune system” and says in one bullet point under this “Take steps to avoid infection, such as washing your hands frequently and cooking meats thoroughly”, but doesn’t say anything about why this would help strengthen the immune system (it just links to a page with steps for reducing infection risk). A possible alternative interpretation is that this is meant as advice for not getting sick rather than making the immune system more effective, and this seems more likely to me. But it’s not clear.
A minor thing on the CO2 emissions reductions is it should probably be considered whether the trees would be cut down anyway if they weren’t used for wood. I think you’d want to know the net deforestation due to collecting firewood, presuming that forest expansion would be cut back anyway for other reasons.
Just thought I’d note that I checked again and the CAF DAF’s minimum balance has gone up to £25k and has a minimum fee of £600/ann.: https://www.cafonline.org/individual-trust-supporting-documents
The most common pushback (and the first two comments, as of now) are from people who think this is an attempt at regulatory capture by the AI labs
This is also the case in the comments on this FT article (paywalled I think), which I guess indicates how less techy people may be tending to see it.
Yeah I think that it’s just that, to me at least, “politicized” has strong connotations of a process being captured by a particular non-broad political constituency or where the outcomes are closely related to alignment with certain political groups or similar. The term “political”, as in “the IPCC SPMs are political documents”, seems not to give such an impression. “Value-laden” is perhaps another possibility. The article you link to also seems to use “political” to refer to IPCC processes rather than “politicized”—it’s a subtle difference but there you go. (Edit—though I do notice I said not to use “political” in my previous comment. I don’t know, maybe it depends on how it’s written too. It doesn’t seem like an unreasonable word to use to me now.)
Re point 1 - I guess we can’t know the intentions of the authors re the decision to not discuss climate adaptation there.
Re 2 - I’m not aware of the IPCC concluding that “we also have now expectations of much lower warming”. So a plausible reason for it not being in the SPM is that it’s not in the main report. As I understand it, there’s no consensus that we can place likelihoods on future emissions scenarios, and hence on future warming, so there’s no way to reach a consensus about future expectations of warming either. One line of thought is that it’s the job of emission scenario designers and the IPCC to say what is required to meet certain scenarios and what the implications of doing so are, and that the likelihoods of the emissions scenarios are then determined by governments’ choices. A plausible reason why the IPCC did not report on changes in expectations of warming, then, is that it largely reports consensus positions, and there isn’t one here. The choice to report consensus positions and not to put likelihoods on emissions scenarios is political in a sense, but not in a way that a priori seems to favour arguments for action over those against. (Though the IPCC did go as far as to say we are likely to exceed 1.5C warming, but didn’t comment further as far as I’m aware.)
So I don’t think we could be very confident that it is politicized/political in the way you say, in that there seem to be other plausible explanations.
Furthermore, if the IPCC wanted to motivate action better, it could make clear the full range of risks and not just focus so much on “likely” ranges etc.! So if it’s aiming to present evidence in a way to motivate more action, it doesn’t seem that competent at it! (Though I do agree that in a lot of other places in the SYR SPM, the presentational choices do seem to be encouraging of taking greater action.)