Here are my high-level thoughts on the comments so far on this report:
This is a detailed report that a lot of work has gone into, written by one of EA’s foremost scholars on the intersection of climate change and other global priorities.
So it’d potentially be quite valuable for people with either substantial domain expertise or solid generalist judgement to weigh in here on object-level issues, critiques, and cruxes, to help collective decision-making.
Unfortunately, all of the comments here are overly meta. Out of the ~60 comments so far on this thread, only 0.5 of a comment approaches anything like technical criticism, cruxes, or even engagement.
The 0.5 in question is this comment by Karthik.
(EDIT: I think Noah’s comment here qualifies)
Compare, for example, the following comments on one of RP’s cultured meat reports.
Having said that, I will hypocritically continue the streak of being meta while not having read the full report.
I think I’m confused about the quality of the review process so far. Both the number and quality of the reviewers John contacted for this book seemed high. However, I couldn’t figure out what the methodology for seeking reviews is here.
To be clear, I’m aware that this is an isolated demand for rigor. My impression is that very few other EA research organizations have a very formal and legible process for prepublication reviews.
However, I think for a report of this scope, it might be valuable to have a fairly good researcher sit down and review the report very carefully, in a lot of detail.
If this has not already been done, then for people who are unsatisfied with (or at least queasy about) the report, it might be helpful to commission 1-3 months of research critiquing and red-teaming this report in a lot of detail.
I think John is understandably frustrated by the lack of technical engagement and the overly meta nature (and possibly personal acrimoniousness?) of the comments so far.
I think John has acted badly in accusing A.C.Skraeling of being a sock puppet account.
I think every loosely organized emerging research field has “duds,” or low-quality work that in a poorly filtered ecosystem will suck up a lot of the oxygen.
For both politeness and practical time-constraints reasons, people tend to ignore the bad work and only engage with the good/mediocre work.
Larks is one of the few people who points this out explicitly (for AI alignment). I think his summary of the situation (including why people do this, and the downsides) is a worthwhile read; I recommend it!
My impression is that academia and other relatively formal systems deal with this via more explicit notions of prestige, conference/paper rejections, etc.
However, this comes with its own dysfunctions and risks ossification.
Taking a step back, I’m a bit confused about why longtermist researchers continue to spend such significant intellectual resources on studying climate in this fashion. My guess is that anyone who was convinced by the earlier research by John, 80k, etc., plus the relevant research and communication in other, riskier, domains, is unlikely to change their mind because of critiques of this deeper report. And anybody who wasn’t convinced by the earlier reports is unlikely to be convinced by this one.
So I don’t really see the value-add of directing more intellectual attention to this.
I looked into climate change for myself for several weeks in 2019 (back when I was substantially worse at both research and judgement). I think I was fairly satisfied that it should not be one of EA’s top priorities, even ignoring neglectedness. My reasoning then was fairly simple.
My main reason for thinking that climate change was a significant existential risk (or at least a GCR) was that I had the vague impression that this was a position held in academia.
My main reasons for not believing that were a) weak inside views about historical resilience to temperature variability, and b) that other smart and thoughtful EAs didn’t believe this.
Thus, the appropriate way to evaluate the evidence was to look at the climate change literature in a way that did not reference past EA work (so as not to be clouded as much by biases/information cascades).
I looked at it for a while, and concluded that the literature mostly did not say that climate change was a significant existential risk (of course it was not framed in those terms).
I thought that climate risk experts should be predisposed to overestimate rather than underestimate risks, so this gave me additional evidence beyond just their stated conclusions.
Given that my reasons for thinking climate change was a significant existential risk weren’t very strong or robust to begin with, I didn’t need that many bits to be convinced that the risks are low.
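(Tangent, but to spell out what I mean by “bits” with a toy sketch that is mine, not anything from the report: under the usual convention that one bit of evidence corresponds to a 2:1 likelihood ratio, weakly-held prior odds only need a few bits to move a lot. The numbers below are made up purely for illustration.)

```python
# Toy Bayesian "bits" illustration with made-up numbers:
# each bit of evidence against a hypothesis halves the odds placed on it.
def update_odds(prior_odds: float, bits_against: float) -> float:
    """Return posterior odds after `bits_against` bits of contrary evidence."""
    return prior_odds / (2 ** bits_against)

# Weakly-held prior: 1:3 odds (25%) that climate is a top existential risk.
prior_odds = 1 / 3
posterior_odds = update_odds(prior_odds, bits_against=3)  # three bits of contrary evidence
print(f"posterior odds ~= {posterior_odds:.3f}")  # ~0.042, i.e. roughly 1:24 (about 4%)
```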
I think my research quality/rigor at the time was pretty meh. Still, I don’t think that should swing the overall conclusion.
I’m more worried about motivated reasoning/selection effects. Motivated reasoning, because I felt like part of the reason for doing that research was to convince myself and others, rather than because I fully dispassionately wanted to know the truth. Selection effects, because I went into it using an EA methodology and EA ways of thinking about the world, and of course I’m selected to be one of the people who think in that way.
Since then, I’ve had three updates, in different directions, on how high climate risk should intuitively “feel” in my head:
Jan-March 2020 Covid was a large update for me against a) trusting the object-level of a lot of expert modeling and b) trusting that experts will uniformly overestimate rather than underestimate risk (as opposed to “don’t panic” justifications and conservatism bias). This caused me to worry more about climate stuff.
John/Johannes looked into it and updated towards believing in lower overall climate risk, with reasonable-sounding object-level reasons. This caused me to worry less about climate stuff.
I did not look into or investigate their reasoning, and generally have not prioritized looking into climate much since 2019.
Compared to myself in 2019, I placed relatively more trust in the judgement of EAs over that of external people. This caused me to worry less about climate stuff.
I think the first update is the largest.
My understanding is that a lot of the purported value of engaging on the object-level details of climate, and of WWOTF’s aims in general, is to draw in the climate change people as allies.
I’m a bit confused about this reasoning. I think:
the arguments for working on AI risk/biorisk over climate risks seem fairly simple to me, and resilient to a practical range of empirical disagreements within climate.
to the extent the arguments are bad or poorly communicated, I strongly suspect the problems come almost entirely from the AI risk/biorisk arguments being insufficiently grounded, rather than from the climate arguments being insufficiently sophisticated.
from a persuasion perspective, I think this is maybe a bad move anyway. Maybe better to be “yes, and” than to explicitly and immediately tell people that the thing they’ve devoted their lives to is >100x less important.
Given that it might not be realistic (and may not be very valuable) to find a senior-ish EA researcher to critique and red-team John’s report, I’d be excited to have several junior EAs try this.
Roughly, a summer-long project for someone of CERI/SERI calibre might be to go line by line through the report (or a few chapters of it) to find the relevant potential mistakes/disagreements/cruxes (from an LT perspective).
This helps train critical thinking abilities and thoughtfulness.
To be clear, this is premised on domain-general critical thinking abilities and thoughtfulness being among the most valuable things for junior EAs to train.
I’m genuinely confused about how much we want less thoughtful people as allies. I think an increasingly common position in longtermist EA nowadays is that the things that matter are AI risk and biorisk (sometimes just AI risk), and that what we most/only need are people who are either extremely technically competent (for solving the technical problems) or very good at politics (for policy/persuasion).
Whereas I feel like I sort of represent the older guard of people who still think that thoughtfulness is a huge asset.
But I acknowledge that this is something I have a (large?) bias towards believing is true (because I’m relatively worse at being very technical or social, compared to being thoughtful).
I think this is mainly because of the length of the report, which makes it hard to make meaningful critiques without investing a bunch of time.
Yes, and I note that as/after I wrote my comment, there are more thoughtful object-level comments. So perhaps I commented too early and should’ve just waited for people to have time to read the report first and then provide object-level comments!
Not sure why this was downvoted. I really appreciate comments where people publicly acknowledge they may have made an error and update their views.
I think this is a good place to have discussions about claims in specific sections (rather than the whole report), if people would like.