I have been on a mission to do as much good as possible since I was quite young, and I decided to prioritize X-risk and improving the long-term future at around age 13. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book draft I was calling “Ways to Save The World” or “Paths to Utopia” which imagined broad innovative strategies for preventing existential risk and improving the long-term future.
Upon discovering Effective Altruism in January 2022, while preparing to start a Master's in Social Entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist research and community-building work.
I am now researching “Deep Reflection,” processes for determining how to get to our best achievable future, including interventions such as “The Long Reflection,” “Coherent Extrapolated Volition,” and “Good Reflective Governance.”
Hey Will, I'm very excited to see you posting more on viatopia. I couldn't agree more that some conception of viatopia might be an ideal north star for navigating the intelligence explosion.
As crazy as this seems, just last night I wrote a draft of a piece on what I have been calling primary and secondary cruxes/crucial considerations (in previous work I also used a perhaps even more closely related concept of "robust viatopia proxy targets"), which seems closely related to your "societal version of Rawls' primary goods," though I had not previously been aware of this work by Rawls. I continue to be quite literally shocked at the convergence of our research, in this case profoundly. (If you happen to be as incredulous as I am, I do by chance have my work on this time-stamped through a few separate modalities, which I'd be happy to share.)
I believe figuring out primary goods and primary cruxes should be a key priority of macrostrategy research. We don't need to figure out everything; we just need to get the right processes and intermediate conditions in place to move us progressively in the right direction.
I think what is ultimately most important is that we reach a state of what I have been calling "deep reflection": a state in which we have comprehensively reflected to determine how to achieve a high-value future, and in which society is likely to act on that knowledge. This is not quite the same as viatopia; it is more of an end state that would occur right before we actualize our potential. Hence I think it can act as another useful handle for the kind of thing we should hope viatopia is ultimately moving us toward.
I’m really looking forward to seeing more essays in your series!