Woof! Thanks for noting this Stefan! As you say, cause neutrality is used in the exact opposite way (to denote that we select causes based on impartial estimates of impact, not that we are neutral about where another person gives their money/time). I’ve edited my post slightly to reflect this. Thanks!
Boom, thanks! Dig the push back here. I generally agree with Scott Alexander’s comment at the bottom: “I don’t think ethical offsetting is antithetical to EA. I think it’s orthogonal to EA.”
(Though I also believe there are some “macro systemic” reasons for believing that offsetting is a crucial piece to moving more folks to an EA-based non-accumulation mindset. More detailed explanation of this later!)
Awesome resource, thanks for the link! (Also, I had never heard of Pigouvian taxes before—thanks!)
Given your list, I’d group the “categories” of externalities into:
Environment (driving, emitting carbon, agriculture, municipal waste)
Public health (driving, obesity, alcohol, smoking, antibiotic use, gun ownership)
And, if I understand it correctly, it's tough for me to offset some of these, for two reasons:
Luckily, I just happen to not do many of them (e.g. driving, obesity, alcohol, smoking, debt).
But even if I did, it's not clear to me how to offset them. i.e. Given your research in this area, could you help me answer this question—if I (or people in the developed world generally) were to offset the externalities of our actions, what should we offset? The first clear answer is paying to offset our carbon emissions. What would be "#2", and how would we "pay" to offset it? (e.g. If I were obese, who would I pay to offset that?)
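To make the carbon case concrete, here's a toy back-of-the-envelope calculation. Both numbers are placeholder assumptions of mine (not sourced figures), just to show the shape of the estimate:

```python
# Toy offset-cost estimate. Both constants are illustrative
# assumptions, not sourced data.
ANNUAL_EMISSIONS_TONNES = 16.0  # assumed per-person CO2 footprint (tonnes/year)
OFFSET_PRICE_PER_TONNE = 10.0   # assumed offset price (USD/tonne)

def annual_offset_cost(tonnes=ANNUAL_EMISSIONS_TONNES,
                       price=OFFSET_PRICE_PER_TONNE):
    """Return the yearly cost (USD) of offsetting a given footprint."""
    return tonnes * price

print(annual_offset_cost())  # 160.0
```

Under those assumptions carbon offsetting is cheap; the open question above is what the analogous price-per-unit would even be for the other externalities.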
Perfect, thanks! I agree with most of your points (and just writing them here for my own understanding/others):
Uncertainty is hard (long time scales, humans are adaptable, and risks are systemically interdependent, so we get zero- or double-counting)
Probabilities have incentives (e.g. Stern’s discounting incentive)
Probabilities get simplified (0-10% can turn into 5% or 0% or 10%)
I’ll ping you as I get closer to an editable draft of my book, so we can ensure I’m painting an appropriate picture. Thanks again!
Hey Simon! Thanks for writing up this paper. The final third is exactly what I was looking for!
Could you give us a bit more texture on why you think it’s “best not to put this kind of number on risks”?
Thanks! Here are my other favorite bear/skeptical/reasonable takes:
(From a cultural perspective) https://www.nytimes.com/2018/01/13/style/bitcoin-millionaires.html
Love this exercise (I read a non-fiction book a week, so I think about this a lot!). I’d definitely put an EA book in the top 5, but I think we get more differentiated advantage by adding non-EA books too. My list:
On Direction and Measuring Your Impact—Doing Good Better
On Past-Facing Pattern Matching from History—Sapiens
On Future-Facing Tech Trends—Machine, Platform, Crowd
On Prioritization and Process—Running Lean
On Communication—An Everyone Culture
Influence/Hooked/Thinking Fast and Slow (on behavioral psychology)
World After Capital/Homo Deus/The Inevitable (more macro trends)
Designing Your Life (process)
Nonviolent Communication (communication)
I’m interested in quantifying the impact of blockchain and cryptocurrency from an ITN perspective. My instinct is that the technology could be powerful from a “root cause incentive” perspective, from a “breaking game theory” perspective, and from a “change how money works” perspective. I’ll have a fuller post about this soon, but here are some of my initial thoughts on the subject:
I’d be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It’s clearly not neglected!)
Great question. https://gnosis.pm and https://augur.net are building decentralized prediction markets on the Ethereum blockchain. Their goal is to “match the global liquidity pool to the global knowledge pool.”
I’ve asked them how they’re thinking about hedgehogs to form a collective fox-y model (and then segmenting the data by hedgehog type).
But yeah, I think they will allow you to do what you want above: “Questions of the form: if intervention Y occurs what is the expected magnitude of outcome Z.”
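One simple way to turn many individual hedgehog forecasts into a fox-y aggregate is to pool them. Here's a minimal sketch using the geometric mean of odds, a common pooling rule; the function name and the example probabilities are my own illustration, not anything Gnosis or Augur actually does:

```python
import math

def pool_forecasts(probs):
    """Aggregate individual probability forecasts via the
    geometric mean of odds (one common pooling rule)."""
    odds = [p / (1 - p) for p in probs]
    geo_mean = math.prod(odds) ** (1 / len(odds))
    return geo_mean / (1 + geo_mean)

# Three hypothetical hedgehog forecasts for the same question:
print(round(pool_forecasts([0.6, 0.7, 0.8]), 3))  # 0.707
```

Segmenting by hedgehog type would then just mean running the same pooling within each segment before comparing the aggregates.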
I’m super into this! I’d be happy to check out your rough sketch. A couple thoughts:
I think we should not bucket all of our time into a general time bucket. In fact, some of our time needs to be “fun creative working time”. e.g. Sometimes I work on EA things, and sometimes I make music. “Designing an EA board game” could be part of that “fun bucket”.
A game like Pandemic (https://boardgamegeek.com/boardgame/30549/pandemic) could be a good starting point for designing the game (or we could work with its designers directly). Essentially, use Pandemic as the MVP game for this, then expand to other cause areas (or to EA as a whole). Also, see 80,000 Hours’ most recent podcast on pandemics (the concept, not the board game :) https://80000hours.org/2017/08/podcast-we-are-not-worried-enough-about-the-next-pandemic/
Here’s my favorite piece on game design (by Magic the Gathering’s head designer) http://magic.wizards.com/en/articles/archive/making-magic/ten-things-every-game-needs-part-1-part-2-2011-12-19
My instinct is that this should be a collaborative game (or, as William MacAskill would say, a “shared aims community”).
Nice link! I think there’s worthwhile research to be done here to get a more textured ITN.
On Impact—Here’s a small example of x-risk (nuclear threat coming from inside the White House): https://www.vanityfair.com/news/2017/07/department-of-energy-risks-michael-lewis.
On Neglectedness—Thus far it seems highly neglected, at least at a system level. hifromtheotherside.com is one of the only projects I know of in the space (but the founder is not contributing much time to it).
On Tractability—I have no clue. Many of these “bottom up”/individual-level solution spaces seem difficult and organic (though we would pattern match from the spread of the EA movement).
There’s a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I’m tempted to push an EA mindset of “outcome-izing/RCT-ing” the efforts in the space. So even if it doesn’t score highly on Neglectedness, we could attempt to move the solutions towards more cost-effective/consequentialist approaches.
This is highly related to the timewellspent.io movement that Tristan Harris (who was at EAGlobal) is pushing.
I feel like we need to differentiate between the “political-level” and the “community-level”.
I’m tempted to think about this from the “communities connect with communities” perspective. i.e. The EA community is the “starting node/community”, and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by the n-dimensional space seen here http://blog.ncase.me/the-other-side/).
Another version of this could be “scale the CFAR community”.
I think this could be related to Land Use Reform (https://80000hours.org/problem-profiles/land-use-reform/) and how we construct empathetic communities with a variety of people. (Again, see Nicky Case — http://ncase.me/polygons/)
Awesome. Thanks Richenda—I’m looking into Secular Student Alliance now!
Yep yep, happy to! A couple things come to mind:
We could track the “stage” of a given problem/cause area, in a similar way that startups are tracked by Seed, Series A, etc. In other words, EA prioritization would be categorized w.r.t. stages/gates. I’m not sure if there’s an agreed-upon “stage terminology” in the EA community yet. (I know GiveWell’s Incubation Grants http://www.givewell.org/research/incubation-grants and EA Grants https://www.effectivealtruism.org/grants/ are examples of recent “early stage” investment.) Here would be some example stages:
Stage 1) Medium dive into the problem area to determine ITN.
Stage 2) Experiment with MVP solutions to the problem.
Stage 3) Move up the hierarchy of evidence for those solutions—RCTs, etc.
Stage 4) For top solutions with robust cost-effectiveness data, begin to scale.
(You could create something like a “Lean Canvas for EA Impact” that could map the prioritized derisking of these stages.)
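The staged pipeline above could be sketched as a tiny tracker. Everything here (stage labels, cause names, field choices) is my own hypothetical illustration of the idea, not an existing tool:

```python
# Hypothetical sketch of tracking cause areas through the four
# stages described above. Names and data are illustrative only.
STAGES = [
    "1: Medium dive into the problem area (ITN)",
    "2: Experiment with MVP solutions",
    "3: Move up the hierarchy of evidence (RCTs, etc.)",
    "4: Scale top cost-effective solutions",
]

# Map each cause area to its current stage number (1-indexed).
causes = {"Pandemic preparedness": 2, "Land use reform": 1}

def advance(cause):
    """Move a cause to the next stage, capped at the final stage."""
    causes[cause] = min(causes[cause] + 1, len(STAGES))

advance("Land use reform")
print(causes["Land use reform"])  # 2
```

The point is just that making the gates explicit lets funders like GiveWell or EA Grants say which stage a grant is meant to de-risk.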
From the “future macro trends” perspective, I feel like there could be more overlap between EA and VC models that are designed to predict the future. I’m imagining this like the current co-evolving work environment with “profit-focused AI” (DeepMind, etc.) and “EA-focused AI” (OpenAI, etc.). In this area, both groups are helping each other pursue their goals. We could imagine a similar system, but for any given macro trend. i.e. That macro trend is viewed from a profit perspective and an impact/EA perspective.
In other words, this is a way for the EA community to say “The VC world has [x technological trend] high on their prioritization list. How should we take part from an EA perspective?” (And vice versa.)
(fwiw, I see two main ways the EA community interacts in this space—pursuing projects that either a) leverage or b) counteract the negative externalities of new technologies. Using VR for animal empathy is an example of leverage. AI alignment is an example of counteracting a negative externality.)
Do those examples help give a bit of specificity for how the EA + VC communities could co-evolve in “future uncertainty prediction”?
This isn’t a unique thought, but I just want to make sure the EA community knows about Gnosis and Augur, decentralized prediction markets built on Ethereum.