Jordan Arel

Karma: 415

Since I was quite young, my goal has been to help as many sentient beings as possible as much as possible, and at around age 13 I decided to prioritize X-risk and the long-term future. Toward this end, I grew up studying philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing various existential risks, making no assumptions about which risks were most likely.

Upon discovering Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at AI X-risk, and moved to Berkeley to do longtermist community-building work.

I am now looking to close down a small business I have been running so that I can research AI safety and other longtermist crucial considerations full time. If any of my work is relevant to open lines of research, I am open to offers of employment as a researcher or research assistant.

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · 17 Aug 2024 22:09 UTC
15 points
12 comments · 4 min read · EA link

Designing Artificial Wisdom: Decision Forecasting AI & Futarchy

Jordan Arel · 14 Jul 2024 5:10 UTC
5 points
1 comment · 6 min read · EA link

Designing Artificial Wisdom: GitWise and AlphaWise

Jordan Arel · 13 Jul 2024 0:04 UTC
6 points
1 comment · 7 min read · EA link

Designing Artificial Wisdom: The Wise Workflow Research Organization

Jordan Arel · 12 Jul 2024 6:57 UTC
14 points
1 comment · 9 min read · EA link

On Artificial Wisdom

Jordan Arel · 11 Jul 2024 7:14 UTC
22 points
1 comment · 14 min read · EA link

10 Cruxes of Artificial Sentience

Jordan Arel · 1 Jul 2024 2:46 UTC
31 points
0 comments · 3 min read · EA link

[Question] What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?

Jordan Arel · 30 Apr 2024 18:39 UTC
14 points
0 comments · 1 min read · EA link

What if doing the most good = benevolent AI takeover and human extinction?

Jordan Arel · 22 Mar 2024 19:56 UTC
2 points
4 comments · 3 min read · EA link