Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing various existential risks, making no assumptions about which risks were most likely.
I discovered Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California. After a deep dive into EA and rationality, I decided to take a closer look at AI X-risk and moved to Berkeley to do longtermist community building work.
I am now looking to close down a small business I have been running so that I can research AI safety and other longtermist crucial considerations full time. If any of my work is relevant to open lines of research, I am open to offers of employment as a researcher or research assistant.
Thanks Tyler! I think this is spot on. I am nearing the end of writing a very long report on this type of work, so I don’t have time at the moment to write a more detailed reply (and what I’m writing attempts to answer these questions). One thing that really caught my eye was when you mentioned:
I am deeply interested in this field, but not actually sure what is meant by “the field.” Could you point me to what search terms to use, and perhaps the primary authors or research organizations who have published work on this type of thing?