
Jordan Arel

Karma: 459

Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022, while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community-building work.

I am now looking to close down a small business I have been running so that I can work full time on AI-enabled safety research and longtermist trajectory-change research, including concrete mechanisms. I welcome offers of employment or funding for research in these areas.

Is there any funding available for (non x-risk) work on improving trajectories of the long-term future?

Jordan Arel · May 29, 2025, 3:10 AM
6 points
1 comment · 1 min read · EA link

[Question] To what extent is AI safety work trying to get AI to reliably and safely do what the user asks vs. do what is best in some ultimate sense?

Jordan Arel · May 23, 2025, 9:09 PM
12 points
0 comments · 1 min read · EA link

A crux against artificial sentience work for the long-term future

Jordan Arel · May 18, 2025, 9:40 PM
11 points
0 comments · 2 min read · EA link

Announcing “sEAd The Future”: Effective Sperm and Egg Bank

Jordan Arel · Apr 1, 2025, 4:35 PM
3 points
0 comments · 1 min read · EA link

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 22:09 UTC
18 points
12 comments · 4 min read · EA link