Doing nothing might be better than trying to help in a counterproductive way. Not living is better than a bad life. No future (extinction) is better than a bad future (dystopia).
More anarchist than archist, more technocrat than populist, I have Solarpunk-ish sentiments.
I believe that cyberpunk (techno-dystopia) and solarpunk (green-techno-utopia) futures will likely coexist. I prefer to focus on building the latter rather than trying to redeem the former.
Szewek
Hi! I’ve been following EA for more than six years now. I am not sure exactly how it started, but I think my first contact was a TED talk by Peter Singer. I have joined some local and online events, I took the Founders Pledge, and I follow it, though I never accepted GiveWell’s monopoly on deciding where to donate. I love the conceptual frameworks of EA, such as the importance, tractability, and neglectedness framework, or the idea of guiding young people on what impact their job might have before they make the choice. I hope this forum remains an open space for discussion on where we go forward with these ideas!
Hey! I attended a lecture by a group of people who work on biodiversity within EA. Fill out the form to get updates by email: https://docs.google.com/forms/d/e/1FAIpQLSf0oqFB5wbu-I5nLSEvJTlwLILulGntHbaEdtvVxcW4sGYV6w/viewform
An interesting framework for addressing some of the key problems with the consequentialist train of thought; it sounds like a good way forward. Thanks for sharing!
One practical question I have revolves around uncertainty. We usually don’t know the exact “rules” of the moral “betting games” we are playing. How do we distinguish a cumulative problem from a multiplicative one?
Also, similar problems exist in biology, and the results from there might have interesting implications for effective strategies in the face of multiplicative risks, especially ruin problems. In brief, when you model population dynamics, you can do it in a deterministic or a stochastic* manner. The deterministic models can be thought of as the mean of the stochastic ones, much like the expected-value approach, but this only works for big populations. If the population is small, random effects make it likely to die out. So whether averaging works depends on the population size, which is usually given as a number of individuals but is in reality much more complex than that and relates to internal diversity.
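To make that concrete, here is a minimal toy sketch (my own illustration with made-up parameters, not something from the original post): a branching process in which each individual leaves 0, 1, or 2 offspring, with an expected value of 1.05 per generation. The deterministic “mean” model predicts growth from any starting size, yet the stochastic runs show small populations usually dying out anyway.

```python
import random

def offspring(p_die=0.30, p_two=0.35):
    """Each individual leaves 0, 1, or 2 offspring; the expected number is 1.05."""
    r = random.random()
    if r < p_die:
        return 0
    if r < p_die + p_two:
        return 2
    return 1

def stochastic_run(n0, generations=60, safe=500):
    """One stochastic history of a population starting at n0 individuals.
    Stops early once the population is large enough that random die-out
    has become astronomically unlikely, to save compute."""
    n = n0
    for _ in range(generations):
        if n == 0:
            return 0          # extinction is absorbing: ruin
        if n >= safe:
            return n          # effectively safe from random die-out
        n = sum(offspring() for _ in range(n))
    return n

def extinction_rate(n0, runs=1000):
    return sum(stochastic_run(n0) == 0 for _ in range(runs)) / runs

for n0 in (2, 10, 100):
    deterministic = n0 * 1.05 ** 60   # the "mean-field" / expected-value prediction
    print(f"start={n0:>3}  deterministic after 60 generations ≈ {deterministic:7.1f}  "
          f"simulated extinction rate ≈ {extinction_rate(n0):.2f}")
```

With these assumed numbers, a starting population of 2 goes extinct in roughly three quarters of the runs, while a starting population of 100 essentially never does, even though the deterministic prediction is growth in every case.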
How does that connect with ergodicity and ruin problems? Consider it in the context of value lock-in (as in William MacAskill’s “What We Owe the Future”) and the general homogenization of Earth’s social (and environmental) systems. If we all make the same bets, even if they have the best, ergodicity-corrected expected values, then once we lose, we lose it all (MacAskill gives an interesting example of a worryingly uniform response to COVID-19; see “What We Owe the Future”, Ch. 4). This is the equivalent of a small population in biological systems. A diverse response means that a ruin is not a total ruin, which is the equivalent of a large population. You can think of it as having multiple AGIs instead of one “singularity”, if that is your kind of thing. Either way, while social diversity by no means solves all the problems, and there are many possible systems that I would like to keep outside the scope of possibilities, there seems to be value in persistently choosing different ways. It might well be that a world which does not collapse into ruin anytime soon would be “a world in which many worlds fit”.
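A rough sketch of that correlation point (again with my own toy numbers, not MacAskill’s or the original post’s): a bet that looks favourable in expectation (+4.5% per round) but carries a 5% chance of total wipeout. Each agent faces the same odds in both worlds; what diversity changes is whether losses are correlated, and therefore whether a ruin can ever be a total ruin.

```python
import random

def run_world(actors=1000, rounds=20, p_ruin=0.05, shared=False):
    """One history of a world of `actors` agents. Each round, an agent's wealth
    either grows by 10% or is wiped out with probability p_ruin (ruin is
    absorbing). If shared=True, every agent faces the same draw each round,
    i.e. everyone makes the same bet and wins or loses together."""
    wealth = [1.0] * actors
    for _ in range(rounds):
        common_wipeout = random.random() < p_ruin   # used only when shared=True
        for i in range(actors):
            if wealth[i] == 0.0:
                continue
            wiped = common_wipeout if shared else random.random() < p_ruin
            wealth[i] = 0.0 if wiped else wealth[i] * 1.10
    return wealth

def total_ruin_rate(shared, worlds=200):
    """Fraction of simulated world-histories in which every single agent ends ruined."""
    return sum(
        all(w == 0.0 for w in run_world(shared=shared)) for _ in range(worlds)
    ) / worlds

random.seed(1)
print(f"uniform bets: total ruin in ≈ {total_ruin_rate(shared=True):.2f} of worlds")
print(f"diverse bets: total ruin in ≈ {total_ruin_rate(shared=False):.2f} of worlds")
```

With these assumed numbers, any single agent survives all 20 rounds with probability 0.95^20 ≈ 0.36 in either setup; but in the uniform world the whole population is wiped out together in roughly two thirds of the histories, while in the diverse world some agents survive in essentially every history.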
* “Stochastic” means the model includes random effects.