
Jordan Arel

Karma: 439

My goal since I was quite young has been to help as many sentient beings as possible as much as possible, and at around age 13 I decided to prioritize X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community-building work.

I am now closing down a small business I have been running in order to work full time on AI-enabled safety research and longtermist trajectory-change research, including concrete mechanisms. I welcome offers of employment or funding as a researcher in these areas.

10 Cruxes of Artificial Sentience

Jordan Arel · Jul 1, 2024, 2:46 AM
31 points
0 comments · 3 min read · EA link

[Question] What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?

Jordan Arel · Apr 30, 2024, 6:39 PM
14 points
0 comments · 1 min read · EA link

What if doing the most good = benevolent AI takeover and human extinction?

Jordan Arel · Mar 22, 2024, 7:56 PM
2 points
4 comments · 3 min read · EA link

[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk

Jordan Arel · Jul 11, 2023, 8:32 PM
4 points
7 comments · 1 min read · EA link

[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?

Jordan Arel · Jan 10, 2023, 8:30 AM
11 points
8 comments · 1 min read · EA link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:44 PM
34 points
12 comments · 14 min read · EA link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · Dec 6, 2022, 10:36 PM
5 points
4 comments · 3 min read · EA link