This is a brave thing to publish in the current political climate and I am grateful you did!
I think we are misunderstanding each other a bit.
I am in no way trying to imply that you shouldn’t be mad about environmentalism’s failings—in fact, I am mad about them on a daily basis. I think if being mad about environmentalism’s failings is the main point, then what Ezra Klein and Derek Thompson are currently doing with Abundance is a good example of communicating many of your criticisms in a way optimized to land with those who need to hear it.
My point was merely that framing the example in such extreme terms will lose a lot of people, despite it being only very tangentially related to the main points you are trying to make. Maybe that’s okay, but it didn’t seem like your overall goal was to make a point about environmentalism, so losing people over an example stated in such an extreme fashion did not seem worth it to me.
I find it pretty difficult to see how to get broad engagement on this when being so obviously polemical / unbalanced in the examples.
As someone who is publicly quite critical of mainstream environmentalism, I still find the description here so extreme that it is hard to take seriously as more than a deeply partisan talking point.
The “Environmental Protection Act” doesn’t exist; do you mean the “National Environmental Policy Act” (NEPA)?
Nor is it true that environmentalists are single-handedly responsible for the decline of nuclear power, and modern environmentalism has clearly done a huge amount of good by reducing water and air pollution.
I think your basic point—that environmentalism had a lot more negative effects than commonly realized and that we should expect similar degrees of unintended negative effects for other issues—is probably true (I certainly believe it).
But this point can be made with nuance and attention to detail that makes it something that people with different ideological priors can read and engage with constructively. I think the current framing comes across as “owning the libs” or “owning the enviros” in a way that makes it very difficult for those arguments to get uptake anywhere that is not quite right-coded.
It would be great if there were a better prediction market version of this question; unfortunately, the others I found are even worse.
Yet, I don’t think it’s worth dismissing entirely.
If the resolution criteria are stricter now, then this question should, if anything, underestimate the increase in probability between November and today.
Thanks for clarifying this!
I think we ultimately have quite different intuitions on the trade-offs, but that seems unresolvable. Most of my intuitions there come from advising non-EA HNWs (high-net-worth individuals), and from spending time around advisors who specialize in advising them, so this is quite different from mostly advising EAs.
Have Wikipedia policies changed recently, though? The key thing here is the time trend, so unless Wikipedia policies have changed, it seems reasonable to interpret the change over time as reflecting the underlying substantive interest.
Clearly one needs some media source to define resolution criteria for a question like this.
[Linkpost] Will the US experience a constitutional crisis before 2030?
Thanks for laying out your view in such detail, Patrick!
I find it hard to grasp how the EA Forum can be so narrow—given that there are no fora / equivalents for the other brands you mention.
E.g. I still expect the EA Forum is widely perceived as the main place where community discussion happens, beyond the narrow mandate you outline, so the attentional priorities set here will be seen as a broader reflection of the movement than what I think you intend.
I think that’s fine—we just have different views on what a desirable potential size of the movement would be.
To clarify—my point is not so much that this discussion is outside the Overton Window, but that it is deeply inward-looking / insular. It was good to be early on AI risk and shrimp welfare and all of the other things we have been early on as a community, but I do think those issues have higher tractability in mobilizing larger movements / having an impact outside our community than this debate week does.
This comment really makes me appreciate the nuanced way the Forum lets us give feedback with separate disagree and karma votes—the fact that the two can be, and are, distinguished is quite useful for incentivizing critical feedback.
Thanks for engaging and for giving me the chance to outline more clearly and with more nuance what my take is.
I covered some of this in my reply to Ollie, but basically (a) I do think that Forum weeks are significant attentional devices signaling what we see as priorities, (b) the Forum has appeared in detail in many EA-critical pieces, and (c) there are many Forum weeks we could be running right now that would be much better, both in terms of guiding action and in terms of perception in the wider world.
I take as given—I am not the right person to evaluate this—that there are some interventions that some EA funders might decide on based on those considerations.
But I am pretty confident it won’t matter to the wider philanthropic world: almost no one is choosing philanthropic interventions by asking “does this make the world better in scenarios where we survive, or does this mostly affect the probability of extinction?”
If EA were ascendant and we were a significant share of philanthropy, maybe that would be a good question to ask.
But in a world where our key longtermist priorities are not well funded and where most of the things we could be doing to broadly reduce risks do not clearly align with either side of the crux here, making this a key attentional priority has, at the very least, a significant opportunity cost.
EDIT: I am mostly trying to give a consistent and clearly articulated perspective here, I am surely overlooking things and you have information on this that I do not have. I hope this is useful to you, but I don’t want to imply I am able to have an all-things-considered view.
This is not what I am saying, my point is about attentional highlighting.
I am all for discussing everything on the Forum, but I do think that when we set attentional priorities—as those weeks do—we could reflect on whether we are targeting things that are high value to discuss; how they land with, and how they affect, the broader world could be a consideration here.
I think messaging to the broader world that we focus our attention on a question that will only have effects for the small set of funders that are hardcore EA-aligned makes ourselves small.
By crude analogy it’s like having a whole Forum week on welfare weights at the opportunity cost of a week focused on how to improve animal funding generally.
We could have discussion weeks right now on key EA priorities in the news, from the future of effective development aid, to great power war and nuclear risk, to how to manage AI risk under new political realities, that all would seem to affect a much larger set of resourcing and, crucially, also signal to the wider world that we are a community engaging on some of the most salient issues of the day.
I think setting a debate week on a topic that has essentially no chance of affecting non-EA funders is a lost opportunity, and I don’t think it would come out on top in a prioritization of debate week topics in the spirit of “how can we do the most good?”
On a more personal level—but I think this is useful to report here, because I don’t think I am the only one with this reaction: I’ve been part of this community for a decade and have built my professional life around it, and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles, and when the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds the critique of EA—one I’ve previously dismissed as bad faith—that we are armchair philosophers.
jackva’s Quick takes
I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.
There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.
And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world (“academic” in the bad sense of the word).
I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.
Is this question really worth discussing and publicly highlighting, when getting more buy-in to existential risk prevention work broadly construed would be extremely desirable and would naturally, in the main, both reduce extinction risk and increase the quality of futures where we survive?
Very grateful for the amount of discussion here.
I wanted to write a summary comment to close this out and clarify a bit more what I am (and am not) trying to get at (I still hope to be able to address all detailed comments, probably on the weekend, as I am doing this in a personal capacity):
1. By re-examining work on systemic attributes I don’t mean “systems change, not climate change” style work, but rather something small-c conservative—protecting/strengthening basic liberal norms and institutions such as the rule of law and checks and balances at home, and, globally, the rules-based international post-WW2 order and a basic commitment/norm towards a positive-sum view of the world.
2. My basic contention is that—when many of those institutions are under much more threat and are much more fluid than before—working on them is relatively more important, both because of greater volatility and more downside risk, and because more surgical interventions are affected by this.
Somewhat crudely, all work that flows through influencing Congress to spend more money on priority X requires a continued respect for Congress’s “power of the purse” (no impoundment). Similarly, the promisingness of much GCR work also seems heavily affected by macro-level variables on the international scale.
3. It would be good to examine this more thoroughly and see whether there are things we can do that are highly effective on the margin. Doing so would require a serious analytical and research effort, not relying on cached priors from the systemic v surgical interventions debates of days past. To be clear, I am fairly agnostic as to whether this would lead to an actual reprioritization or whether the conclusion would be that engaging on system-level factors is not promising. I do not know.
Insofar as I am criticizing, I am criticizing the lack of serious engagement with these questions as a community, a de facto conclusion on this question—do > 95% surgical work—that rests on little serious analysis, and a lack of grappling with a changing situation that, at the very least, should affect the balance of considerations.
4. In terms of taking action, I would be surprised if the conclusion from this—if more action is warranted—were simply to scale up existing EA(-adjacent) efforts on those topics, such as advocacy for electoral reforms. It is obviously important to advocate for changes to electoral systems and other institutional incentive structures, in particular if those have properties that would address some of the existing problems.
However, it seems clear to me that this cannot be everything EAs would consider doing on this. By crude analogy, much of these discussions feel like spirited debates about which colors to paint the kitchen walls while there is an unattended fire in the living room. In the same way that our primary strategies for engaging on AI risk are not 30-year strategies to change how technology is governed, seriously engaging on preserving desirable system-level attributes / institutions cannot only be about very long-run plays at a time when prediction markets predict a 3/4 chance of a constitutional crisis in the US over the next couple of years and the international situation is similarly fluid.
5. I also do have “this is not neglected” and “this is intractable” in my head as the primary reasons why we should not do this. However, I (and, I think, many others) have also become a lot more skeptical of using these considerations lazily and heuristically to discredit looking into entire fields of action that are important.
It is certainly true that the average intervention on vaguely improving institutions in ways that are already salient to the public will have low impact. But it would not shock me at all if a serious research effort found many interventions that are surprisingly neglected and quite plausibly tractable.
I think the analytically vibrant community we’d ideally like to be would dive deeper into those issues at this point in time.
I’ve now tried to clarify what I mean in my post, Nick.
I agree with you that concrete suggestions are lacking; my claim is that this is—at least partially—due to too little effort on this angle, and that it seems worth re-examining in a time of rapid and profound system-level changes.
That seems true for most things EAs fund apart from direct service delivery interventions such as distributing malaria nets.
I.e. it is a valid consideration, but it is not a justification for working on surgical instead of systemic interventions in areas where all interventions operate uncertainly over multi-year, indirect theories of change (the majority of what EAs do outside GiveWell-style GHD work).
Yes, I saw this and was happy for it to exist.
What I am trying to say is that this being one of the longest treatments on this to exist feels like a failure / blind spot of the community.
We’re in the midst of very severe systemic changes, domestic and international, and—ideally—there’d be lots of thorough analysis on the forum and elsewhere.
Thanks! I don’t think the hard-to-measure explanation is quite right—there are lots of other similarly speculative / hard-to-measure interventions that EAs have traditionally been very excited about.
I think it has more to do with priors of low neglectedness and low tractability and a certain aversion to act in ways that could be seen as political.
That said, my goal here is not to re-litigate the whole “surgical v systemic change” debate, but rather to say that current changes suggest systemic work is relatively more important, and that it seems (vastly) under-discussed and not systematically explored.
report link is wrong