I have been on a mission to do as much good as possible since I was quite young, and I decided to prioritize X-risk and improving the long-term future at around age 13. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book draft I was calling “Ways to Save The World” or “Paths to Utopia” which imagined broad innovative strategies for preventing existential risk and improving the long-term future.
Upon discovering Effective Altruism in January 2022, while preparing to start a Master's of Social Entrepreneurship degree at the University of Southern California, I did a deep dive into EA and rationality and decided to take a closer look at the possibility of AI-caused X-risk and lock-in. I then moved to Berkeley to do longtermist research and community building work.
I am now researching “Deep Reflection,” processes for determining how to get to our best achievable future, including interventions such as “The Long Reflection,” “Coherent Extrapolated Volition,” and “Good Reflective Governance.”
Thanks Niki, this was great!
Personally, I'm quite concerned about futures where things get decided this quickly, or at least where they get decided at the object level very quickly.
I think that to get anywhere near the most value achievable in the future, we need a relatively comprehensive reflection process before making lasting decisions. This could be achieved through some kind of bootstrapping process or some other kind of slowdown period, but I do worry that something like this may not be very likely by default; it seems possible humanity may not be nearly patient enough for this kind of thing, even if superintelligence can speed up the process quite a bit.
I did a lot of research last year on the importance of a comprehensive reflection process, which you may find interesting. I'm hoping to finish editing soon, but if you'd like to see the current draft and/or give feedback, here it is.