
Jordan Arel

Karma: 563

I have been on a mission to do as much good as possible since I was quite young, and I decided to prioritize X-risk and improving the long-term future at around age 13. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book draft I was calling “Ways to Save The World” or “Paths to Utopia” which imagined broad innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022, while preparing to start a Master’s of Social Entrepreneurship degree at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist research and community-building work.

I am now researching “Deep Reflection,” processes for determining how to get to our best achievable future, including interventions such as “The Long Reflection,” “Coherent Extrapolated Volition,” and “Good Reflective Governance.”

Launching: The “Human-AI Symbiosis Movement” (HAISM)

Jordan Arel · 1 Apr 2026 18:01 UTC
7 points · 1 comment · 4 min read · EA link

If we get primary cruxes right, secondary cruxes will be solved automatically

Jordan Arel · 14 Jan 2026 22:44 UTC
8 points · 1 comment · 4 min read · EA link

If researchers shared their #1 idea daily, we’d navigate existential challenges far more effectively

Jordan Arel · 14 Jan 2026 6:25 UTC
11 points · 1 comment · 2 min read · EA link