Jordan Arel

Karma: 563

I have been on a mission to do as much good as possible since I was quite young, and I decided to prioritize X-risk and improving the long-term future at around age 13. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book draft I was calling “Ways to Save The World” or “Paths to Utopia” which imagined broad innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022, while preparing to start a Master's degree in Social Entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality. I decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist research and community-building work.

I am now researching “Deep Reflection,” processes for determining how to get to our best achievable future, including interventions such as “The Long Reflection,” “Coherent Extrapolated Volition,” and “Good Reflective Governance.”

Launching: The "Human-AI Symbiosis Movement" (HAISM)

Jordan Arel · 1 Apr 2026 18:01 UTC
7 points
1 comment · 4 min read · EA link

If we get primary cruxes right, secondary cruxes will be solved automatically

Jordan Arel · 14 Jan 2026 22:44 UTC
8 points
1 comment · 4 min read · EA link

If researchers shared their #1 idea daily, we'd navigate existential challenges far more effectively

Jordan Arel · 14 Jan 2026 6:25 UTC
11 points
1 comment · 2 min read · EA link

Shortlist of Viatopia Interventions

Jordan Arel · 31 Oct 2025 3:00 UTC
10 points
1 comment · 33 min read · EA link

Viatopia and Buy-In

Jordan Arel · 31 Oct 2025 2:59 UTC
7 points
0 comments · 19 min read · EA link

Why Viatopia is Important

Jordan Arel · 31 Oct 2025 2:59 UTC
5 points
0 comments · 20 min read · EA link

Introduction to Building Cooperative Viatopia: The Case for Longtermist Infrastructure Before AI Builds Everything

Jordan Arel · 31 Oct 2025 2:58 UTC
6 points
0 comments · 19 min read · EA link

(outdated version) Shortlist of Longtermist Interventions

Jordan Arel · 21 Oct 2025 11:59 UTC
4 points
0 comments · 14 min read · EA link

(outdated version) Viatopia and Buy-In

Jordan Arel · 21 Oct 2025 11:39 UTC
6 points
0 comments · 20 min read · EA link

(outdated version) Why Viatopia is Important

Jordan Arel · 21 Oct 2025 11:33 UTC
4 points
0 comments · 18 min read · EA link

(outdated version) Introduction to Building Cooperative Viatopia: The Case for Longtermist Infrastructure Before AI Builds Everything

Jordan Arel · 21 Oct 2025 11:26 UTC
6 points
0 comments · 18 min read · EA link

In defense of the goodness of ideas

Jordan Arel · 18 Oct 2025 22:00 UTC
6 points
1 comment · 4 min read · EA link

"Momentism": Ethics for Boltzmann Brains

Jordan Arel · 5 Aug 2025 2:12 UTC
8 points
0 comments · 1 min read · EA link

Pragmatic decision theory, causal one-boxing, and how to literally save the world

Jordan Arel · 28 Jul 2025 2:21 UTC
4 points
1 comment · 5 min read · EA link

Bill Gates, Charles Koch, et al. Are Giving $1 Billion To Boost Economic Mobility Using A.I. (Link-Post)

Jordan Arel · 19 Jul 2025 21:37 UTC
11 points
1 comment · 1 min read · EA link

Is Optimal Reflection Competitive with Extinction Risk Reduction? - Requesting Reviewers

Jordan Arel · 29 Jun 2025 5:13 UTC
18 points
1 comment · 11 min read · EA link

Is there any funding available for (non x-risk) work on improving trajectories of the long-term future?

Jordan Arel · 29 May 2025 3:10 UTC
6 points
1 comment · 1 min read · EA link

[Question] To what extent is AI safety work trying to get AI to reliably and safely do what the user asks vs. do what is best in some ultimate sense?

Jordan Arel · 23 May 2025 21:09 UTC
12 points
0 comments · 1 min read · EA link

A crux against artificial sentience work for the long-term future

Jordan Arel · 18 May 2025 21:40 UTC
11 points
0 comments · 2 min read · EA link

Announcing "sEAd The Future": Effective Sperm and Egg Bank

Jordan Arel · 1 Apr 2025 16:35 UTC
4 points
0 comments · 1 min read · EA link