Dig it! Juan Benet from Protocol Labs and Matt Goldenberg are also working on this. Ping ’em!
rhys_lindmark
Link to an ongoing Twitter discussion with Rob Wiblin, Vitalik Buterin, etc. here: https://twitter.com/glenweyl/status/1163522777644748801
I like this style of thinking. A couple quick notes:
1. Various U.S. presidential candidates have proposals for “democracy dollars”, which are similar to philanthropy vouchers, but scoped to political giving. AFAICT, they have a different macro goal as well: to decentralize campaign financing. See https://www.yang2020.com/policies/democracydollars/ and https://www.vox.com/policy-and-politics/2019/5/4/18526808/kirsten-gillibrand-democracy-dollars-2020-campaign-finance-reform
2. I agree that non-politics can be systemic. See this post that expands on your idea of “what if everyone tithed 10%?” https://forum.effectivealtruism.org/posts/N4KSLXgr6J7Z9mByG/an-argument-to-prioritize-tithing-to-catalyze-a-paradigm
3. It would be interesting to see philanthropic vouchers tested in the EA community. Kind of like a reverse EA Funds/donor lottery, where an EA donor gives lots of EAs vouchers (money) and then the EAs donate it.
Woof! Thanks for noting this Stefan! As you say, cause neutrality is used in the exact opposite way (to denote that we select causes based on impartial estimates of impact, not that we are neutral about where another person gives their money/time). I’ve edited my post slightly to reflect this. Thanks!
Boom, thanks! Dig the push back here. I generally agree with Scott Alexander’s comment at the bottom: “I don’t think ethical offsetting is antithetical to EA. I think it’s orthogonal to EA.”
(Though I also believe there are some “macro systemic” reasons for believing that offsetting is a crucial piece to moving more folks to an EA-based non-accumulation mindset. More detailed explanation of this later!)
Awesome resource, thanks for the link! (Also, I had never heard of Pigouvian taxes before—thanks!)
Given your list, I’d group the “categories” of externalities into:
Environment (driving, emitting carbon, agriculture, municipal waste)
Public health (driving, obesity, alcohol, smoking, antibiotic use, gun ownership)
Financial (debt)
And, if I understand it correctly, it's tough for me to offset some of these, for two reasons:
Luckily, I just happen not to do many of them (e.g. driving, obesity, alcohol, smoking, debt).
But even if I did, it's not clear to me how to offset them. I.e., given your research in this area, could you help me answer this question: if I (or people in the developed world generally) were to offset the externalities of our actions, what should we offset? The first clear answer is paying to offset our carbon emissions. What would be "#2", and how would we "pay" to offset it? (e.g. if I were obese, whom would I pay to offset that?)
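To make the carbon case (the one clear answer) concrete, here's a rough back-of-envelope sketch. Both numbers are assumptions I made up for illustration, not sourced figures:

```python
# Rough back-of-envelope for offsetting personal carbon emissions.
# Both constants are assumptions for illustration, not sourced figures.
ANNUAL_EMISSIONS_TONNES = 16.0  # assumed per-capita CO2 for a developed-world resident
OFFSET_PRICE_PER_TONNE = 15.0   # assumed $/tonne for a verified offset

annual_offset_cost = ANNUAL_EMISSIONS_TONNES * OFFSET_PRICE_PER_TONNE
print(f"Annual offset cost: ${annual_offset_cost:.2f}")  # Annual offset cost: $240.00
```

The hard part for "#2" is that there's no equivalent, well-defined price per unit of externality for things like obesity or alcohol, which is exactly the question above.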
Thanks!
Perfect, thanks! I agree with most of your points (and just writing them here for my own understanding/others):
Uncertainty is hard (long time scales, humans are adaptable, risks are systemically interdependent, so we get zero or double counting)
Probabilities have incentives (e.g. Stern’s discounting incentive)
Probabilities get simplified (0-10% can turn into 5% or 0% or 10%)
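On the zero/double-counting point, a toy illustration (all probabilities are made up): when two risks share a common cause, naively summing their individual probabilities overstates the chance that at least one occurs.

```python
# Toy illustration of double counting with interdependent risks.
# All probabilities are made-up numbers for illustration.
p_a = 0.10      # P(risk A)
p_b = 0.10      # P(risk B)
p_both = 0.08   # P(A and B) -- high because A and B share a common cause

naive_sum = p_a + p_b           # 0.20: double-counts the overlap
p_either = p_a + p_b - p_both   # 0.12: inclusion-exclusion gives P(A or B)
print(naive_sum, p_either)
```

Treating interdependent risks as independent inflates the aggregate estimate; subtracting the overlap twice would deflate it, which is the "zero counting" failure mode.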
I’ll ping you as I get closer to an editable draft of my book, so we can ensure I’m painting an appropriate picture. Thanks again!
Hey Simon! Thanks for writing up this paper. The final 1/3 is exactly what I was looking for!
Could you give us a bit more texture on why you think it’s “best not to put this kind of number on risks”?
Thanks! Here are my other favorite bear/skeptical/reasonable takes:
https://medium.com/john-pfeffer/an-institutional-investors-take-on-cryptoassets-690421158904
https://blog.chain.com/a-letter-to-jamie-dimon-de89d417cb80
https://prestonbyrne.com/2017/12/10/stablecoins-are-doomed-to-fail/
https://medium.com/@Melt_Dem/drowning-in-tokens-184ccfa1641a
(From a cultural perspective) https://www.nytimes.com/2018/01/13/style/bitcoin-millionaires.html
Others?
Love this exercise (I read a non-fiction book a week, so I think about this a lot!). I’d definitely put an EA book in the top 5, but I think we get more differentiated advantage by adding non-EA books too. My list:
On Direction and Measuring Your Impact—Doing Good Better
On Past-Facing Pattern Matching from History—Sapiens
On Future-Facing Tech Trends—Machine, Platform, Crowd
On Prioritization and Process—Running Lean
On Communication—An Everyone Culture
Honorable Mentions:
Influence/Hooked/Thinking Fast and Slow (on behavioral psychology)
World After Capital/Homo Deus/The Inevitable (more macro trends)
Designing Your Life (process)
Nonviolent Communication (communication)
I’m interested in quantifying the impact of blockchain and cryptocurrency from an ITN perspective. My instinct is that the technology could be powerful from a “root cause incentive” perspective, from a “breaking game theory” perspective, and from a “change how money works” perspective. I’ll have a fuller post about this soon, but here are some of my initial thoughts on the subject:
I’d be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It’s clearly not neglected!)
Great question. https://gnosis.pm and https://augur.net are building decentralized prediction markets on the Ethereum blockchain. Their goal is to “match the global liquidity pool to the global knowledge pool.”
I’ve asked them how they’re thinking about aggregating individual hedgehog predictions to form a collective fox-y model (and then segmenting the data by hedgehog type).
But yeah, I think they will allow you to do what you want above: “Questions of the form: if intervention Y occurs what is the expected magnitude of outcome Z.”
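Here's a minimal sketch of that "segment by hedgehog type, aggregate into a fox-y model" idea. The forecaster types and all the probabilities are hypothetical:

```python
# Hypothetical forecasts for: "if intervention Y occurs, what is P(outcome Z)?"
# Forecaster types and probabilities are made up for illustration.
forecasts = {
    "techno-optimist": [0.80, 0.75],
    "macro-bear": [0.40, 0.35],
    "domain-expert": [0.60, 0.65],
}

# Average within each hedgehog type first, then across types, so one
# over-represented camp doesn't dominate the collective estimate.
type_means = {t: sum(ps) / len(ps) for t, ps in forecasts.items()}
foxy_estimate = sum(type_means.values()) / len(type_means)
print(type_means)
print(round(foxy_estimate, 3))
```

A liquid prediction market does this aggregation implicitly through prices; the segmented version above is what you'd need if you wanted to inspect how each hedgehog type is voting.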
I’m super into this! I’d be happy to check out your rough sketch. A couple thoughts:
I think we should not bucket all of our time into a general time bucket. In fact, some of our time needs to be “fun creative working time”. e.g. Sometimes I work on EA things, and sometimes I make music. “Designing an EA board game” could be part of that “fun bucket”.
A game like Pandemic (https://boardgamegeek.com/boardgame/30549/pandemic) could be a good starting point for designing the game (or to work with them on designing it). Essentially, use Pandemic as the MVP game for this, then expand to other cause areas (or to EA as a whole). Also, see 80,000 Hours’ most recent podcast on pandemics (the concept, not the board game :) https://80000hours.org/2017/08/podcast-we-are-not-worried-enough-about-the-next-pandemic/
Here’s my favorite piece on game design (by Magic the Gathering’s head designer) http://magic.wizards.com/en/articles/archive/making-magic/ten-things-every-game-needs-part-1-part-2-2011-12-19
My instinct is that this should be a collaborative game (or, as Will MacAskill would say, a “shared aims community”).
Nice link! I think there’s worthwhile research to be done here to get a more textured ITN.
On Impact—Here’s a small example of x-risk (nuclear threat coming from inside the White House): https://www.vanityfair.com/news/2017/07/department-of-energy-risks-michael-lewis.
On Neglectedness—Thus far it seems highly neglected, at least at a system-level. hifromtheotherside.com is one of the only projects I know in the space (but the founder is not contributing much time to it)
On Tractability—I have no clue. Many of these “bottom up”/individual-level solution spaces seem difficult and organic (though we would pattern match from the spread of the EA movement).
There’s a lot of momentum in this direction (the public is super aware of the problem). Whenever this happens, I’m tempted to push an EA mindset of “outcome-izing/RCT-ing” the efforts in the space. So even if it doesn’t score highly on Neglectedness, we could attempt to move the solutions towards more cost-effective/consequentialist ones.
This is highly related to the timewellspent.io movement that Tristan Harris (who was at EAGlobal) is pushing.
I feel like we need to differentiate between the “political-level” and the “community-level”.
I’m tempted to think about this from the “communities connect with communities” perspective. I.e., the EA community is the “starting node/community”, and then we start more explicitly collaborating/connecting with other adjacent communities. Then we can begin to scale a community connection program through adjacent nodes (likely defined by the n-dimensional space seen here: http://blog.ncase.me/the-other-side/).
Another version of this could be “scale the CFAR community”.
I think this could be related to Land Use Reform (https://80000hours.org/problem-profiles/land-use-reform/) and how we construct empathetic communities with a variety of people. (Again, see Nicky Case — http://ncase.me/polygons/)
Awesome. Thanks Richenda—I’m looking into Secular Student Alliance now!
Rhys (also from Roote) here. Agree with Brendon that there isn’t too much literature evaluating the “efficacy of various governance models”. Some links you may want to look into, Holden:
(This is less about academic research and more about IRL experiments.)
Lots of governance experiments are happening with DAOs in crypto. See Vitalik’s back and forth here: https://twitter.com/VitalikButerin/status/1442039126606311427
Or my response here. I find it helpful to visualize these systems: https://twitter.com/RhysLindmark/status/1446276859109335040 and https://www.rhyslindmark.com/popper-criterion-for-politics/ . Those pieces contain lots of political economy books like The Dictator’s Handbook. https://www.goodreads.com/en/book/show/11612989
More crypto stuff: https://gnosisguild.mirror.xyz/OuhG5s2X5uSVBx1EK4tKPhnUc91Wh9YM0fwSnC8UNcg. These are interchangeable “Modules” that DAOs can use like DeGov. https://otherinter.net/research/ is doing research on DAO governance as well.
On the non-crypto side, Rob Reich has great thoughts on this. I found this convo between him and Stuart Russell re legitimacy and AI governance helpful. (49:30)
Worth differentiating how much groups disagree on what should be (goals) vs. what is (current state). https://twitter.com/RhysLindmark/status/1294107741246517248
This feels close to the work Ian David-Moss et al are doing here https://forum.effectivealtruism.org/tag/effective-institutions-project
Many of the governance issues take the form of one of Meadow’s “system traps” https://bytepawn.com/systems-thinking.html#:~:text=Thinking%20in%20Systems%2C%20written%20by,furnace%20to%20a%20social%20system.
In the spirit of your final experimental point: Long term, I do think a lot of this will just be understood (and computationally modeled) as social groups (bounded by a Markov Blanket) abiding by the Free Energy Principle / Active Inference with Bayesian generative models, co-evolving into evolutionarily stable strategies. But we’re not there yet! 🙂
Beyond social choice theory, I’m not sure there’s a better field for what you’re looking for. Maybe Political Economy, Public Choice Theory, or Game Theory? ¯\_(ツ)_/¯
Anywho, good luck and excited to see what you unearth!