Some EA Forum Posts I’d Like to Write

I decided to write a list of posts I’d like to write, on the hypothesis that perhaps I can crowdsource interest or preemptively get people’s takes on how good each idea is, in order to a) better prioritize my writing and b) develop better intuitions for which systems/processes can preemptively determine what research/writing is valuable. Note that I’m currently quite unlikely to write more than two of these posts unless I get substantive feedback suggesting I should.

Unless explicitly stated otherwise, names/links/quotes of other people are referenced for partial attribution. They should not be construed as endorsements by those people, and on base rates it is reasonable to assume that I have misrepresented someone in this post at least once.

This post is written in my own capacity, and does not represent the position or output of my employer (RP) or past employers.

After the Apocalypse: Why Personal Survival in GCR Scenarios Should Be Unusually High Priority For Altruists/Consequentialists

  • I think if your ensemble of beliefs includes substantial credence in both urgent and patient longtermism, this should lead to a fairly high credence in the importance of the survival and proliferation of certain key ideas

  • One way to ensure the survival of those ideas is through the survival of individuals with those ideas

  • This is especially relevant if you have high credence in the probability of large-scale non-existential GCRs, particularly ones with a population fatality rate closer to 99.99% than to 50%. (At 99.99%, only about 1 in 10,000 people survive, so which ideas those survivors carry matters disproportionately.)

  • An alternative way to frame this is via an analogy to the Hinge of History hypothesis.

    • All else equal, individuals are more likely to live at the hinge of history if there are 6.5 million other humans rather than 6.5 billion (see the toy calculation below).
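
A toy back-of-the-envelope to make the proportionality explicit. The numbers and the uniformity assumption are mine, purely for illustration:

```python
# Toy model: if "hingeyness" were spread uniformly across everyone alive,
# a given individual's chance of living at the hinge of history scales
# inversely with population size. (Illustrative assumption only.)
population_small = 6_500_000        # 6.5 million people
population_large = 6_500_000_000    # 6.5 billion people

# Relative likelihood that a given individual lives at the hinge:
print(population_large / population_small)  # -> 1000.0
```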

Shelters MVP: A Concrete Proposal to Robustly Mitigate Global Catastrophes

  • I haven’t yet seen shelter designs that satisfy all the desiderata I have in mind.

  • I’ve seen preliminary discussions that analyzed shelter intervention viability/cost-effectiveness in the abstract, but not work that identifies a specific, well-defined target to aim for (or to rule out as not worth doing).

  • I think that while reducing GC risks is of utmost importance for longtermist goals, a potentially important part of the web-of-protection/defense-in-depth story should involve substantial work on catastrophe mitigation.

    • I claim that for a number of GCRs (with the notable exception of AI or other agent-heavy risks), certain shelter designs would robustly reduce the overall harm.

  • I suspect (without yet having done the numbers) that this may not end up being worthwhile to implement at the current margin; however, it still seems worthwhile to have a blueprint ready as a robust baseline for GCR mitigation, so we have a direct comparison class/bar that marginal Open Phil/EA longtermist dollars must beat.

    • (I’m currently more excited about this as a baseline for the “last longtermist dollar,” akin to GiveDirectly for global poverty, than about the clean energy funding that others in EA have proposed.)

Moral Circle Expansion: Is It Highly Overrated?

  • Many EAs (myself somewhat included) believe in some form of moral circle expansion (MCE) as something that a) descriptively happened/is happening, b) is worthwhile to have, and c) is plausibly worth EA effort to ensure it happens.

  • I think I (used to?) believe in some version of this, at the risk of oversimplifying:

    • The story of moral progress is in large part a story of the expansion of whom we choose to care about: from the individual to family, tribe, nation, and race, and expanding outwards to people of other races, locations, sexualities, and mental architectures. Future moral progress may come from us caring about more entities worthy of moral consideration (“moral circle expansion”). However, this expansion of concern is not automatic, and may require dedicated effort from EAs.

  • However, there are a number of pertinent critiques of this view from different angles, which are, AFAICT, a) not collected in one place and b) underexplored:

    • Critique from history of moral psychology:

      • https://www.gwern.net/The-Narrowing-Circle

      • Essentially, many things that used to be in our moral circle (ancestors, plants, spirits, even animals) are no longer in modern WEIRD moral circles.

      • Thus, to the extent that the shifting moral circle/sphere of concern is good, this is less due to an overall expansion of concern, and more due to us having more precise and accurate understandings of whom/what to care about or for.

    • Critique from political history of expanding rights:

    • Critique from empirical moral psychology: descriptively, do people actually have a coherent moral circle?

      • For example, whom or what people evince concern for may be a highly variable, contextually specific, unstable thing, rather than a single circle that we can expand in general.

      • It is plausible that broad moral circles matter a lot for people who are (intuitive) consequentialists, and for few others, suggesting a ceiling on the value of MCE.

    • Evaluative critique from empirical moral psychology: Is MCE (in the relevant dimensions) even a net good?

      • This is skipped in lots of conversations about MCE.

      • There are at least two reasons to think otherwise:

        • Moral purity is frequently linked to bad outcomes:

          1. Moral outrage, etc., is not known to be unusually truth-tracking.

          2. Morality-based reasoning often leads to black/white thinking, large-scale harms, etc.

        • Expanding the moral circle of concern may in practice also expand the moral circle of judgment:

          1. Intuitively (to consequentialist/EA types), moral patients are not necessarily moral agents.

          2. However, most people don’t believe this in practice.

          3. Moral judgment may lead us not just to be more concerned along the “care” dimension, but also more willing to punish along the “retributive” dimension.

  • (I got a lot of ideas about this from discussions with my coworker David Moss, particularly the empirical moral psychology stuff. The ideas are mostly not original to me).

  • A version of moral circle expansion can plausibly be rescued from all these critiques, but it may end up looking very different.

    • Even so, it might still end up being fake/not worthwhile, for the above or other reasons.

How to Get Good At Forecasting: Lessons from Interviews With Over 100 Top Forecasters

  • There appears to be broad interest within EA, particularly in the EA ∩ forecasting subcluster, in getting much better at forecasting.

  • I’m interested in writing a list of ideas on how people can get much better at forecasting, but there are two major problems:

    • 1. I’m not the best forecaster in the world

      • Thus, it seems unlikely that someone can follow my advice and become much better at forecasting than me.

      • For various reasons, this is not impossible, but on base rates it still seems rather unlikely.

    • 2. My own style of reasoning/forecasting might be sufficiently idiosyncratic that generalizing from it is less useful than averaging across lots of ideas.

  • My initial solution to this was to interview a lot of forecasters.

  • But then I realized that I know people who have interviewed far more top forecasters than I have. So a potentially better process is to interview them: leveraging more aggregated wisdom by aggregating the aggregators, rather than doing the aggregation myself.

  • I’m also conceptually interested in this idea because, for various reasons including research amplification and AI safety, solid ways to do meta-aggregation in various domains seem underexplored and valuable (see the sketch after this list for one toy example).

  • The structure of the post would look like a list of ideas sourced from interviews, maybe including anecdotes or worked-out processes for easier digestion.
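
A minimal sketch of what one concrete meta-aggregation rule could look like. This is entirely my own illustration with made-up numbers, not a method sourced from any interview: pooling the aggregators’ probability estimates via the geometric mean of their odds.

```python
import math

def pool_geometric_odds(probs):
    """Pool probability forecasts by taking the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    mean_log_odds = sum(math.log(o) for o in odds) / len(odds)
    pooled_odds = math.exp(mean_log_odds)
    return pooled_odds / (1 + pooled_odds)

# Hypothetical inputs: each number is itself an aggregate from one
# "aggregator" (e.g. someone who has interviewed many top forecasters).
aggregator_estimates = [0.62, 0.70, 0.55]
print(round(pool_geometric_odds(aggregator_estimates), 3))  # -> 0.625
```

(Geometric mean of odds is just one defensible pooling rule; the post itself would be about the interview process rather than any specific formula.)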

What Are Good Humanities Research Ideas for Longtermism?

  • As a continuation/extension of this post on history research ideas by my coworker MichaelA, I’m interested in tabulating a list of humanities research ideas that are potentially very important.

    • I’m arbitrarily excluding “philosophy” research topics from this list because I think they have already been explored somewhat; plus, there are enough philosophers in our movement that they can do their own thing, so the marginal value of me attempting a tabulation is lower.

  • Some of these grew out of conversations with MichaelA, Howie Lempel, Daniel Filan and others.

  • Some things I’m particularly interested in:

    • Humanities work that is adjacent to fields we already care about

      • E.g., anthropology of people doing work in scientific labs/critical institutions

    • Comparative literature studies of whether ambitious science fiction is correlated with ambitious science fact (this might be hard to operationalize well).

    • General question of utopianism/definite optimism/futuristic inclinations of cultures and microcultures.

      • Can be studied from various social science and humanities angles

    • If we have specific tales we’d like to see (e.g., something that makes longtermism or consequentialism S1-visceral), what insights can we learn from past work to scope this out in advance?

  • I’m also interested in planting the seed of (or encouraging someone better positioned than me to develop) a broader framework/ontology for assessing which humanities research funding/work in general, or which specific humanities research projects in particular, are worth devoting marginal dollars or researcher-hours to.

Acknowledgements

Thanks to Jake Mckinnon, Adam Gleave, Amanda Ngo, David Moss, Michael Aird, Peter Hurford, Dave Bernard, and I’m sure many others for conversations that helped inspire or crystallize some of these ideas. Thanks also to Saulius Simcikas, David Moss, Janique Belman, and especially Michael Aird for many constructive comments on an earlier draft of this post.

All mistakes and inaccuracies are, naturally, the fault of a) the boundary conditions of the universe and b) the Big Bang. Please do comment if/when you identify mistakes, so I can sigh resignedly at the predetermined nature of such mistakes.