
Jordan Arel

Karma: 449

My goal since I was quite young has been to help as many sentient beings as possible as much as possible, and at around age 13 I decided to prioritize reducing X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

After discovering Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community-building work.

I am now looking to close down a small business I have been running so that I can work full time on AI-enabled safety research and longtermist trajectory-change research, including concrete mechanisms. I welcome offers of employment or funding as a researcher in these areas.

A crux against artificial sentience work for the long-term future

Jordan Arel · May 18, 2025, 9:40 PM
11 points
0 comments · 2 min read · EA link

Announcing “sEAd The Future”: Effective Sperm and Egg Bank

Jordan Arel · Apr 1, 2025, 4:35 PM
3 points
0 comments · 1 min read · EA link

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 10:09 PM
18 points
12 comments · 4 min read · EA link

Designing Artificial Wisdom: Decision Forecasting AI & Futarchy

Jordan Arel · Jul 14, 2024, 5:10 AM
5 points
1 comment · 6 min read · EA link

Designing Artificial Wisdom: GitWise and AlphaWise

Jordan Arel · Jul 13, 2024, 12:04 AM
6 points
1 comment · 7 min read · EA link