This is the second round of a question I asked last year about what posts you are planning to write, so that people can share progress and get community feedback and support.
Questions
If you answered the post last year and/or had posting plans, how did they go? If you didn't end up posting, what happened?
What are some open questions or uncertainties you have about the planned post?
If you have plans for multiple posts, which are you most excited about, and why?
Posting Resources
Community feedback from the EA Editing and Review Facebook group
Aaron Gertler is also available for editing EA Forum draft posts
I have about 60 EA-related ideas right now. This list includes some of the most promising ones, broken down by category. I am interested in feedback on which ideas people like the best.
Plus signs indicate how well thought-out an idea is:
+ = idea seems interesting, but I have no idea what to say about it
++ = partially formed concept, but still a bit fuzzy
+++ = fully-formed concept, just need to figure out the details/actually do it
Fundamental problems
"Pascal's Bayesian Prior Mugging": Under "longtermist-friendly" priors, if a mugger asks for $5 in exchange for an unspecified reward, you should give the $5 ++
If causes differ astronomically in EV, then personal fit in career choice is unimportant ++
EAs should focus on fundamental problems that are only relevant to altruists (e.g., infinity ethics yes, explore/exploit no) +++
The case for prioritizing "philosophy of priors" ++
How quickly do forecasting estimates converge on reality? (use Metaculus API) +++
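For the forecasting-convergence idea just above, here is a minimal sketch of how the analysis could start. The endpoint, query parameters, and field names (resolution, prediction_timeseries, community_prediction) are assumptions about the Metaculus API schema and would need to be checked against the actual API documentation before use.

```python
# Rough sketch: how quickly do community forecasts converge on outcomes?
# NOTE: the endpoint, query params, and field names below are assumptions
# about the Metaculus API schema, not verified against the live docs.
import requests

API_URL = "https://www.metaculus.com/api2/questions/"  # assumed public endpoint

def fetch_resolved_questions(pages=1):
    """Download a few pages of resolved questions (assumed query params)."""
    questions = []
    url = API_URL + "?status=resolved"
    for _ in range(pages):
        resp = requests.get(url)
        resp.raise_for_status()
        data = resp.json()
        questions.extend(data.get("results", []))
        url = data.get("next")  # assumed pagination field
        if not url:
            break
    return questions

def brier_by_time(question):
    """Pair each community forecast snapshot with its squared error vs. the outcome."""
    outcome = question.get("resolution")                    # assumed: 1.0 / 0.0 for binary questions
    series = question.get("prediction_timeseries") or []    # assumed field name
    points = []
    for snapshot in series:
        p = snapshot.get("community_prediction")            # assumed field name
        t = snapshot.get("t")                                # assumed timestamp field
        if p is not None and outcome in (0, 1):
            points.append((t, (p - outcome) ** 2))           # Brier score for this snapshot
    return points

if __name__ == "__main__":
    qs = fetch_resolved_questions(pages=1)
    print(f"Fetched {len(qs)} resolved questions")
```

From there, averaging the squared errors across questions as a function of time remaining until resolution would give a rough convergence curve.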
Investing for altruists
Alternate version of How Much Leverage Should Altruists Use? that assumes EMH +++
How risk-averse should altruists be (and how does it vary by cause)? +
Can patient philanthropists take advantage of investors' impatience? +
Giving now vs. later
Reverse-engineering the philanthropic discount rate from observed market rates +++
Optimal behavior in extended Ramsey model that allows spending on cash transfers or x-risk reduction +++
If giving later > now, what does that imply for talent vs. funding constraints? +
Is movement-building an expenditure or an investment? +
Fermi estimate of the cost-effectiveness of improving the EA spending rate +++
Prioritization research might need to happen now, not later ++
Long-term future
If technological growth linearly increases x-risk but logarithmically increases well-being, then we should stop growing at some point (a toy model is sketched below, after this list) ++
Estimating P(existential catastrophe) from a list of near-catastrophes +++
Thoughts on doomsday argument +
Value of the future is dominated by worlds where we are wrong about the laws of physics ++
If x-risk reduction is permanent and people aren't longtermist, we should give later +++
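A toy formalisation of the growth-and-risk idea above (my own framing, purely illustrative, with all functional forms assumed): let x be the technology level, let per-period well-being be log x, and let per-period existential risk be cx for some small constant c. If we stop growing at level x and both quantities stay constant thereafter, the expected total value is

$$V(x) \;=\; \sum_{t=0}^{\infty} (1 - cx)^{t}\,\log x \;=\; \frac{\log x}{cx}, \qquad V'(x) \;=\; \frac{1 - \log x}{c\,x^{2}} \;=\; 0 \;\Rightarrow\; x^{*} = e,$$

so in this stylised model there is a finite technology level beyond which further growth stops being worth the added risk (the exact optimum obviously depends entirely on the assumed functional forms and units).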
Other
How should we expect future EA funding to look? +
Can we use prediction markets to enfranchise future generations? (Predict what future people will want, and then the government has to follow the predictions) +
Altruistic research might have increasing marginal utility ++
"Suspicious convergence" is not that suspicious because people seek out actions that look good across multiple assumptions +++
I'd really like to see "If causes differ astronomically in EV, then personal fit in career choice is unimportant".
I like that these generally seem quite clear and focused.
In terms of decision relevance and benefit, I get the impression that several funders and meta EA orgs feel a crunch from not having great prioritization, and if better work emerges, they may change funding fairly quickly. I'm less optimistic about career-change-type work, mainly because it seems like it would take several more years to have an effect (there is a long lag between convincing someone and having them start producing research).
I'm skeptical of how much research into investments will change actual investment behaviour in the next 2-10 years. I don't get the impression that OpenPhil or other big donors are paying close attention to these topics.
Therefore I'm more excited about the Giving Now/Later and Long-Term Future work.
Another way of phrasing this is that I think we have a decent discount rate (maybe 10% a year), plus I think that high-level research prioritization is a particularly useful field if done well.
A few years back, a relatively small amount of investigation into AI safety (maybe 20 person-years?) led to a huge change from OpenPhil and a bunch of EA talent.
I would be curious to hear directly from them. I think that work that influences the big donors is the highest leverage at this point, and I also get the impression that there is a lot of work that could change their minds. But I could be wrong.
I'd be interested in basically all of the Giving Now vs. Later ideas, but especially:
A bunch of posts related to The Precipice
I recently finished Toby Ord's The Precipice, and thought it was an excellent and very important book. I plan to write a bunch of posts that summarise, comment on, or take inspiration from various parts of it. Most are currently very early-stage, but the working titles are below.
Key uncertainties/questions:
Is there anyone who's already planning to write similar things? I probably won't have time to write all the things I've planned. So if someone else is already likely to pursue ideas similar to some of these, we could potentially collaborate, or I could share my notes and thoughts, let you take that particular topic from there, and allocate my time to other things.
Working titles:
Defining existential risks and existential catastrophes
My thoughts on Toby Ord's policy & research recommendations
Existential security
Civilizational collapse and recovery: Toby Ord's views and my doubts
The Terrible Funnel: Estimating odds of each step on the x-risk causal path (this title is especially "working")
The idea here would be to adapt something like the "Great Filter" or "Drake Equation" reasoning to estimating the probability of existential catastrophe, using how humanity has fared in prior events that passed or could've passed certain "steps" on certain causal chains to catastrophe.
E.g., even though we've never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each "step" to the next can inform what would likely happen if we did face a bioengineered pathogen, or if it did reach pandemic level. (A toy version of this chained calculation is sketched after the list of working titles.)
This idea seems sort of implicit in The Precipice, but isn't really spelled out there. Also, as is probably obvious, I need to do more to organise my thoughts on it myself.
This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don't apply to natural pandemics. Or that might be a separate post.
Developing (but not deploying) drastic backup plans
"Macrostrategy": Attempted definitions and related concepts
This would relate in part to Ord's concept of "grand strategy for humanity"
Collection of notes
A post summarising the ideas of existential risk factors and existential security factors?
I suspect I won't end up writing this, but I think someone should. For one thing, it'd be good to have something people can reference/link to that explains that idea (sort of like the role EA Concepts serves).
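As promised above, here is a toy sketch of the "Terrible Funnel" chained-probability idea. The step definitions, counts, smoothing prior, and event rate are all placeholder assumptions of mine, purely to illustrate the shape of the estimate, not real figures.

```python
# Toy sketch of the "Terrible Funnel" idea: estimate per-step transition
# probabilities from historical (near-)events, then chain them together.
# All numbers are placeholders, NOT real estimates.

# (events that reached this step, events that progressed to the next step)
step_counts = {
    "outbreak -> epidemic":           (250, 40),
    "epidemic -> pandemic":           (40, 6),
    "pandemic -> societal collapse":  (6, 0),   # zero observed, so we smooth below
}

def transition_prob(reached, progressed, prior_weight=1.0, prior_prob=0.1):
    """Laplace-style smoothing so steps with zero observed progressions
    still get a non-zero probability (the prior is an explicit assumption)."""
    return (progressed + prior_weight * prior_prob) / (reached + prior_weight)

# Probability that a single initial event runs the whole chain
p_chain = 1.0
for step, (reached, progressed) in step_counts.items():
    p = transition_prob(reached, progressed)
    p_chain *= p
    print(f"{step}: {p:.3f}")

# Combine with an (assumed) rate of initial events, treating events as independent
events_per_century = 30
p_catastrophe_per_century = 1 - (1 - p_chain) ** events_per_century
print(f"P(full chain | one initial event) ~ {p_chain:.5f}")
print(f"P(at least one catastrophe per century) ~ {p_catastrophe_per_century:.3f}")
```

The interesting modelling work would be in choosing the steps, deciding which historical events count as having "reached" each step, and picking priors for steps with no observed progressions.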
Local Career Advice Bottlenecks
Status: First draft, series of 3 posts.
The Local Career Advice Network ran a group organisers' survey to evaluate overall career advice bottlenecks in the community. There will likely be 3 write-ups on the following topics:
the main bottlenecks group organisers observe their members facing
the main bottlenecks group organisers face when trying to give high-quality careers advice
evaluation of career advice events and activities run by groups
Key uncertainties/questions
Nothing as of now. I'll add to this comment as thoughts arise.
Running self-directed projects
Status: early stage
I've run a number of projects over the last few months and thought it might be useful to share my experiences, successes/failures, and lessons learnt. I may also present the insights from these projects at some point.
Key uncertainties/questions
How valuable would people find this?
Update: I ended up writing & publishing this post.
Career Change Interviews
Status: First Draft
This is a writeup of qualitative research Benjamin Skubi and I did in summer 2019 on 20 EAs at various stages of a career change process. It'll cover:
stages of our interviewees' EA journeys (an alternative perspective to the funnel model, focusing on an individual's journey)
what inspires a career change, what the change process looks like, commonly mentioned bottlenecks, and useful resources
recommendations/useful tips for career changers and group organisers
Key uncertainties/questions
I'm not sure whether to keep the recommendations in a separate section of the same writeup or to create a new post with the recommendations.
You can see the updates from my previous posts here
This sounds interesting!
Do you mean career changes from "non-EA-influenced" paths to "EA-influenced" paths, career changes between "EA-influenced" paths, or career changes in general?
Re having one post vs. splitting the recommendations out: I often use something like the following heuristic: "If the post contains multiple sets of ideas/points, which are relatively easy to understand without each other, which may offer value by themselves (i.e., without the other set), and which may be valuable/interesting to slightly different sets of people, it's probably worth splitting the post into multiple, more bite-sized chunks."
So I'd guess that it may be best to split the recommendations out, if they can be understood out of context and if they're decently long (something like "at least 500 words, pretty confidently if over 1000 words").
One counterpoint is that this may be a bad idea when a set of ideas could be "understood" out of context, but maybe in a distorted form, or with too little emphasis on other considerations. (Like how it might be possible to get people to perfectly understand the earning-to-give concept without other context, but this could lead to it being emphasised too strongly, such that 80k/EA more broadly is misunderstood, as discussed in the fidelity model.)