This comment really makes me appreciate the nuanced way the Forum lets us give feedback, with disagreement and karma voted on separately. That the two can be, and are, distinguished seems quite useful for incentivizing critical feedback.
Thanks for engaging and for giving me the chance to outline my take more clearly and with more nuance.
I covered some of this in my reply to Ollie, but basically (a) I do think that Forum weeks are significant attentional devices signaling what we see as priorities, (b) the Forum has appeared in detail in many EA-critical pieces, and (c) there are many Forum weeks we could be running right now that would be much better, both in terms of guiding action and in terms of perception in the wider world.
I take as given (I am not the right person to evaluate this) that there are some interventions that some EA funders might decide between along the lines of these considerations.
But I am pretty confident it won’t matter to the wider philanthropic world: almost no one there is choosing philanthropic interventions by asking “does this make the world better in the cases where we survive, or does this mostly affect the probability of extinction?”
If EA were ascendant and we made up a significant share of philanthropy, maybe that would be a good question to ask.
But in a world where our key longtermist priorities are not well funded and where most of the things we can be doing to broadly reduce risks are not clearly alignable to either side of the crux here, I think making this a key attentional priority seems to have, at least, significant opportunity cost.
EDIT: I am mostly trying to give a consistent and clearly articulated perspective here; I am surely overlooking things, and you have information on this that I do not have. I hope this is useful to you, but I don’t want to imply I am able to have an all-things-considered view.
This is not what I am saying; my point is about attentional highlighting.
I am all for discussing everything on the Forum, but I do think that when we set attentional priorities, as those weeks do, we should reflect on whether we are targeting things that are high-value to discuss, and how they land with and affect the broader world could be a consideration here.
I think messaging to the broader world that we focus our attention on a question that will only have effects for the small set of funders that are hardcore EA-aligned makes us look small.
By crude analogy it’s like having a whole Forum week on welfare weights at the opportunity cost of a week focused on how to improve animal funding generally.
We could have discussion weeks right now on key EA priorities in the news, from the future of effective development aid, to great power war and nuclear risk, to how to manage AI risk under new political realities, all of which would seem to affect a much larger set of resourcing and, crucially, also signal to the wider world that we are a community engaging with some of the most salient issues of the day.
I think setting a debate week on a topic that has essentially no chance of affecting non-EA funders is a lost opportunity, and I don’t think it would come out on top in a prioritization of debate week topics in the spirit of “how can we do the most good?”
On a more personal level, but I think this is useful to report here because I don’t think I am the only one with this reaction: I’ve been part of this community for a decade and have built my professional life around it, and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles, and when the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but are instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds a critique of EA that I’ve previously seen as bad faith: that we are too much armchair philosophers.
jackva’s Quick takes
I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.
There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.
And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world (“academic” in the bad sense of the word).
I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.
Is this question really worth discussing and publicly highlighting, when what would be extremely desirable is getting more buy-in to existential risk prevention work broadly construed, which would naturally, in the main, both reduce extinction risk and increase the quality of futures where we survive?
Very grateful for the amount of discussion here.
I wanted to write a summary comment to close this out and clarify a bit more what I am, and am not, trying to get at (I still hope to be able to address all of the detailed comments, probably over the weekend, as I am doing this in a personal capacity):
1. By re-examining work on systemic attributes, I don’t mean “systems change, not climate change” style work, but rather something small-c conservative: protecting/strengthening basic liberal norms and institutions such as the rule of law, checks and balances, etc. at home, and, globally, the rules-based post-WW2 international order and a basic commitment/norm towards a positive-sum view of the world.
2. My basic contention is that, now that many of those institutions are under much more threat and are much more fluid than before, working on them is relatively more important, both because of greater volatility and more downside risk and because more surgical interventions are affected by this.
Somewhat crudely: all work that flows through influencing Congress to spend more money on priority X requires a continued respect for Congress’s “power of the purse” (no impoundment). Similarly, the promisingness of much GCR work also seems heavily affected by macro-level variables on the international scale.
3. It would be good to examine this more thoroughly and see whether there are things we can do that are highly effective on the margin. Doing so would require a serious analytical and research effort, not reliance on cached priors from the systemic-vs-surgical-interventions debates of days past. To be clear, I am fairly agnostic as to whether this would lead to an actual reprioritization or whether the conclusion would be that engaging on system-level factors is not promising. I do not know.
Insofar as I am criticizing, I am criticizing the lack of serious engagement with these questions as a community: a de facto conclusion on this question (do >95% surgical work) that rests on little serious analysis, and a lack of grappling with a changing situation that, at the very least, should affect the balance of considerations.
4. In terms of taking action, I would be surprised if the conclusion from this, if more action is warranted, were simply to scale up existing EA(-adjacent) efforts on these topics, such as advocating for electoral reforms. It is obviously important to advocate for changes to electoral systems and other institutional incentive structures, in particular where those have properties that would address some of the existing problems.
However, it seems clear to me that this cannot be everything EAs would consider doing here. By crude analogy, much of this discussion feels like a spirited debate about which colors to paint the kitchen walls while there is an unattended fire in the living room. In the same way that our primary strategies for engaging on AI risk are not 30-year strategies to change how technology is governed, seriously engaging on preserving desirable system-level attributes and institutions cannot only be about very long-run plays at a time when prediction markets predict a 3/4 chance of a constitutional crisis in the US over the next couple of years and the international situation is similarly fluid.
5. I also do have “this is not neglected” and “this is intractable” in my head as the primary reasons why we should not do this. However, I (and, I think, many others) have become a lot more skeptical of using these considerations lazily and heuristically to discredit looking into entire fields of action that are important.
It is certainly true that the average intervention vaguely aimed at improving institutions in ways that are already salient to the public will have low impact. But it would not shock me at all if a serious research effort found many interventions that are surprisingly neglected and quite plausibly tractable.
I think the analytically vibrant community we’d ideally like to be would dive deeper into those issues at this point in time.
I’ve now tried to clarify what I mean in my post, Nick.
I agree with you that concrete suggestions are lacking; my claim is that this is, at least partially, due to too little effort on this angle, and that it seems worth re-examining at a time of rapid and profound system-level change.
That seems true for most things EAs fund apart from direct service delivery interventions such as distributing malaria nets.
I.e. it is a valid consideration but it is not a justification to work on surgical instead of systemic interventions in areas where all interventions are operating uncertainly over multi-year indirect theories of change (the majority of what EAs do outside GiveWell-style GHD work).
Yes, I saw this and was happy for it to exist.
What I am trying to say is that this being one of the longest treatments on this to exist feels like a failure / blind spot of the community.
We’re in the midst of very severe systemic changes, domestic and international, and—ideally—there’d be lots of thorough analysis on the forum and elsewhere.
Thanks! I don’t think the hard-to-measure explanation is quite right: there are lots of other similarly speculative / hard-to-measure interventions that EAs have traditionally been very excited about.
I think it has more to do with priors of low neglectedness and low tractability and a certain aversion to act in ways that could be seen as political.
That said, my goal here is not to re-litigate the whole “surgical v systemic change” debate, but rather to say that current changes seem to suggest that systemic work is now relatively more important, and that this seems (vastly) under-discussed and not systematically explored.
In a time of rapid change, we should re-examine system-level interventions
NYT—What if Charity Shouldn’t be Optimized
Great piece!
You can always replace chicken meat with plant-based foods for health reasons, or if you are very concerned about GHG emissions. I am not
I think signaling that you don’t think GHG emissions are important does not help your message here / makes it less convincing than it would otherwise be!
I think we are relatively close, and perhaps at risk of misunderstanding each other.
I am not saying that psychology isn’t part of this or that this work isn’t extremely valuable; I am a big fan of what you and Stefan are doing.
I would just say it is a fairly small part of the question of collective decision making / societal outcomes. E.g. if one wanted to start a program on understanding decision making in key GCR areas better, then what I would expect in the next sentence would be something like “we are assembling a team of historians, political scientists, economists, social psychologists, etc.”, not “here is a research agenda focused on psychology and behavioral science.” Maybe psychology and behavioral science would be 5-20% of such an effort.
The reason I react strongly here is that I think EA has a tendency to underappreciate the social sciences outside economics, and we do so at our own peril; it seems likely, for example, that having more people trained in policy and the social sciences would have helped us avoid the blind spot of being late on AI governance.
Critical decisions about advanced technologies, including artificial intelligence, pandemic preparedness, or nuclear conflict, as well as policies shaping safety, leadership, and long-term wellbeing, depend on human psychology.
I am surprised by this. Ultimately, almost all of these decisions primarily happen in social and institutional contexts where most of the variance in outcomes is, arguably, not the result of individual psychology but of differences in institutional structures, culture, politics, economics, etc.
E.g. if one wanted to understand the context of these decisions better (which I think is critical!), shouldn’t this primarily motivate a social science research agenda focused on questions such as “how do decisions about advanced technologies get made?”, “what are the best leverage points?”, etc.?
Put somewhat differently: insofar as it is a key insight of the social sciences (including economics) that societal outcomes cannot be reduced to individual-level psychology, because they emerge from the (strategic) interaction and complex dynamics of billions of actors, I am surprised by this focus, at least insofar as the motivation is better understanding collective decision-making and actions taken in key GCR areas.
Worth pointing out that the FP staff who could reply to this are on Thanksgiving break, so a response will probably take until next week.
So, he did some bad things, but they were around expectation and nothing yet in the tails, and thus I shouldn’t update in the direction of totalitarianism.
No one is speaking of totalitarianism here, but of a risk of authoritarian drift.
Over the past two weeks, the President-Elect has indicated he wants to appoint extreme loyalists without substantive qualifications to positions most relevant for democracy working well (or not) (DOJ, DOD, etc.). He is also trying to weaken the power of the Republican Senate Majority, both via the threat of recess appointments plus generally by pushing the Senate to confirm unqualified candidates.
I don’t think anyone knows what will happen, but being confident that he is not doing anything in the tails seems overconfident; what he is doing now is exactly what one would do if one wanted to move towards more authoritarianism.
I was convinced of this for the election and for election integrity efforts around the election. I am less convinced it holds for the work now, which is comparatively longer-term, less flashy, and costlier.
The worry people have with Gabbard is that her sympathies with Russia and Syria would essentially make it hard for her own and allied intelligence services to trust her, severely undermining the function she is meant to serve.
I think that’s fine; we just have different views on what the desirable potential size of the movement would be.
To clarify: my point is not so much that this discussion is outside the Overton window, but that it is deeply inward-looking / insular. It was good to be early on AI risk and shrimp welfare and all of the other things we have been early on as a community, but I do think those issues have higher tractability in mobilizing larger movements / having an impact outside our community than this debate week’s topic does.