Dig it! Juan Benet from Protocol Labs and Matt Goldenberg are also working on this. Ping ’em!
rhys_lindmark
FTX Future Fund and Longtermism
Link to an ongoing Twitter discussion with Rob Wiblin, Vitalik Buterin, etc. here: https://twitter.com/glenweyl/status/1163522777644748801
I like this style of thinking. A couple quick notes:
1. Various U.S. presidential candidates have proposals for “democracy dollars”, which are similar to philanthropy vouchers, but scoped to political giving. AFAICT, they have a different macro goal as well: to decentralize campaign financing. See https://www.yang2020.com/policies/democracydollars/ and https://www.vox.com/policy-and-politics/2019/5/4/18526808/kirsten-gillibrand-democracy-dollars-2020-campaign-finance-reform
2. I agree that non-politics can be systemic. See this post that expands on your idea of “what if everyone tithed 10%?” https://forum.effectivealtruism.org/posts/N4KSLXgr6J7Z9mByG/an-argument-to-prioritize-tithing-to-catalyze-a-paradigm
3. It would be interesting to see philanthropic vouchers tested in the EA community. Kind of like a reverse EA Funds/donor lottery, where an EA donor gives lots of EAs vouchers (money) and then the EAs donate it.
Defining Meta Existential Risk
Woof! Thanks for noting this Stefan! As you say, cause neutrality is used in the exact opposite way (to denote that we select causes based on impartial estimates of impact, not that we are neutral about where another person gives their money/time). I’ve edited my post slightly to reflect this. Thanks!
An Argument to Prioritize “Tithing to Catalyze a Paradigm Shift and Negate Meta Existential Risk”
Boom, thanks! Dig the push back here. I generally agree with Scott Alexander’s comment at the bottom: “I don’t think ethical offsetting is antithetical to EA. I think it’s orthogonal to EA.”
(Though I also believe there are some “macro systemic” reasons for believing that offsetting is a crucial piece to moving more folks to an EA-based non-accumulation mindset. More detailed explanation of this later!)
Awesome resource, thanks for the link! (Also, I had never heard of Pigouvian taxes before—thanks!)
Given your list, I’d group the “categories” of externalities into:
Environment (driving, emitting carbon, agriculture, municipal waste)
Public health (driving, obesity, alcohol, smoking, antibiotic use, gun ownership)
Financial (debt)
And, if I understand it correctly, it’s tough for me to offset some of these. This is because:
Luckily, I just happen to not do many of them (e.g. driving, obesity, alcohol, smoking, debt).
But even if I did, it’s not clear to me how to offset them. I.e., given your research in this area, could you help me answer this question: if I (or people in the developed world generally) were to offset the externalities of our actions, what should we offset? The first clear answer is paying to offset our carbon emissions. What would be “#2”, and how would we “pay” to offset it? (e.g. If I were obese, who would I pay to offset that?)
Thanks!
How can I internalize my most impactful negative externalities?
Memetic Tribes and Culture War 2.0 (Article Discussion)
What is the Most Helpful Categorical Breakdown of Normative Ethics?
Perfect, thanks! I agree with most of your points (and just writing them here for my own understanding/others):
Uncertainty is hard (long time scales, humans are adaptable, and risks are systemically interdependent, so we get zero- or double-counting)
Probabilities have incentives (e.g. Stern’s discounting incentive)
Probabilities get simplified (0-10% can turn into 5% or 0% or 10%)
I’ll ping you as I get closer to an editable draft of my book, so we can ensure I’m painting an appropriate picture. Thanks again!
Hey Simon! Thanks for writing up this paper. The final 1/3 is exactly what I was looking for!
Could you give us a bit more texture on why you think it’s “best not to put this kind of number on risks”?
Current Estimates for Likelihood of X-Risk?
Thanks! Here are my other favorite bear/skeptical/reasonable takes:
https://medium.com/john-pfeffer/an-institutional-investors-take-on-cryptoassets-690421158904
https://blog.chain.com/a-letter-to-jamie-dimon-de89d417cb80
https://prestonbyrne.com/2017/12/10/stablecoins-are-doomed-to-fail/
https://medium.com/@Melt_Dem/drowning-in-tokens-184ccfa1641a
(From a cultural perspective) https://www.nytimes.com/2018/01/13/style/bitcoin-millionaires.html
Others?
An Argument To Prioritize “Positively Shaping the Development of Crypto-assets”
Love this exercise (I read a non-fiction book a week, so I think about this a lot!). I’d definitely put an EA book in the top 5, but I think we get more differentiated advantage by adding non-EA books too. My list:
On Direction and Measuring Your Impact—Doing Good Better
On Past-Facing Pattern Matching from History—Sapiens
On Future-Facing Tech Trends—Machine, Platform, Crowd
On Prioritization and Process—Running Lean
On Communication—An Everyone Culture
Honorable Mentions:
Influence/Hooked/Thinking, Fast and Slow (on behavioral psychology)
World After Capital/Homo Deus/The Inevitable (more macro trends)
Designing Your Life (process)
Nonviolent Communication (communication)
I’m interested in quantifying the impact of blockchain and cryptocurrency from an ITN (importance, tractability, neglectedness) perspective. My instinct is that the technology could be powerful from a “root cause incentive” perspective, from a “breaking game theory” perspective, and from a “change how money works” perspective. I’ll have a more full post about this soon, but here are some of my initial thoughts on the subject:
I’d be especially interested in hearing from people who think blockchain/crypto should NOT be a focus of the EA community! (e.g. It’s clearly not neglected!)
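To make the ITN framing concrete, here’s a toy back-of-the-envelope scorer in Python. All the numbers and the “placeholder cause” are made up for illustration; this is just the standard multiplicative ITN heuristic (importance × tractability × neglectedness), not a real estimate for crypto or anything else.

```python
# Toy ITN (Importance, Tractability, Neglectedness) scorer.
# Every figure below is a placeholder for illustration, not a real estimate.

def itn_score(importance, tractability, neglectedness):
    """Multiply the three ITN factors into a single comparable score.

    importance:    good done if the problem were fully solved (arbitrary units)
    tractability:  fraction of the problem solved by doubling resources
    neglectedness: 1 / (resources currently going to the problem)
    """
    return importance * tractability * neglectedness

# Hypothetical inputs purely to show how the comparison works:
causes = {
    "crypto/blockchain": itn_score(importance=100, tractability=0.05, neglectedness=1 / 50),
    "placeholder cause": itn_score(importance=80, tractability=0.10, neglectedness=1 / 20),
}

# Rank causes by score, highest first.
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Note how the neglectedness term does the work skeptics point to: if a cause is crowded (lots of existing resources), 1/resources shrinks and drags the whole product down even when importance is high.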
Rhys (also from Roote) here. Agree with Brendon that there isn’t too much literature evaluating the “efficacy of various governance models”. Some links you may want to look into, Holden:
(This is less about academic research and more about IRL experiments.)
Lots of governance experiments are happening with DAOs in crypto. See Vitalik’s back and forth here: https://twitter.com/VitalikButerin/status/1442039126606311427
Or my response here. I find it helpful to visualize these systems: https://twitter.com/RhysLindmark/status/1446276859109335040 and https://www.rhyslindmark.com/popper-criterion-for-politics/ . Those pieces contain lots of political economy books like The Dictator’s Handbook. https://www.goodreads.com/en/book/show/11612989
More crypto stuff: https://gnosisguild.mirror.xyz/OuhG5s2X5uSVBx1EK4tKPhnUc91Wh9YM0fwSnC8UNcg. These are interchangeable “Modules” that DAOs can use like DeGov. https://otherinter.net/research/ is doing research on DAO governance as well.
On the non-crypto side, Rob Reich has great thoughts on this. I found this convo between him and Stuart Russell re legitimacy and AI governance helpful. (49:30)
Worth differentiating how much groups disagree on what should be (goals) vs. what is (current state). https://twitter.com/RhysLindmark/status/1294107741246517248
This feels close to the work Ian David-Moss et al are doing here https://forum.effectivealtruism.org/tag/effective-institutions-project
Many of the governance issues take the form of one of Meadows’s “system traps” https://bytepawn.com/systems-thinking.html#:~:text=Thinking%20in%20Systems%2C%20written%20by,furnace%20to%20a%20social%20system.
In the spirit of your final experimental point: Long term, I do think a lot of this will just be understood (and computationally modeled) as social groups (bounded by a Markov Blanket) abiding by the Free Energy Principle / Active Inference with Bayesian generative models, co-evolving into evolutionarily stable strategies. But we’re not there yet! 🙂
Beyond social choice theory, not sure there’s a better field you’re looking for. Maybe Political Economy, Public Choice Theory, or Game Theory? ¯\_(ツ)_/¯
Anywho, good luck and excited to see what you unearth!