Jordan Arel

Karma: 449

Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022, while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community-building work.

I am now winding down a small business I have been running so that I can work full time on AI-enabled safety research and longtermist trajectory-change research, including concrete mechanisms. I welcome offers of employment or funding as a researcher in these areas.

A crux against artificial sentience work for the long-term future

Jordan Arel · May 18, 2025, 9:40 PM · 11 points · 0 comments · 2 min read

Announcing “sEAd The Future”: Effective Sperm and Egg Bank

Jordan Arel · Apr 1, 2025, 4:35 PM · 3 points · 0 comments · 1 min read

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 10:09 PM · 18 points · 12 comments · 4 min read

Designing Artificial Wisdom: Decision Forecasting AI & Futarchy

Jordan Arel · Jul 14, 2024, 5:10 AM · 5 points · 1 comment · 6 min read

Designing Artificial Wisdom: GitWise and AlphaWise

Jordan Arel · Jul 13, 2024, 12:04 AM · 6 points · 1 comment · 7 min read

Designing Artificial Wisdom: The Wise Workflow Research Organization

Jordan Arel · Jul 12, 2024, 6:57 AM · 14 points · 1 comment · 9 min read

On Artificial Wisdom

Jordan Arel · Jul 11, 2024, 7:14 AM · 22 points · 1 comment · 14 min read

10 Cruxes of Artificial Sentience

Jordan Arel · Jul 1, 2024, 2:46 AM · 31 points · 0 comments · 3 min read

[Question] What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?

Jordan Arel · Apr 30, 2024, 6:39 PM · 14 points · 0 comments · 1 min read

What if doing the most good = benevolent AI takeover and human extinction?

Jordan Arel · Mar 22, 2024, 7:56 PM · 2 points · 4 comments · 3 min read

[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk

Jordan Arel · Jul 11, 2023, 8:32 PM · 4 points · 7 comments · 1 min read

[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?

Jordan Arel · Jan 10, 2023, 8:30 AM · 11 points · 8 comments · 1 min read

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:44 PM · 34 points · 12 comments · 14 min read

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · Dec 6, 2022, 10:36 PM · 5 points · 4 comments · 3 min read

Maybe Utilitarianism Is More Usefully A Theory For Deciding Between Other Ethical Theories

Jordan Arel · Nov 17, 2022, 5:31 PM · 6 points · 3 comments · 2 min read

[Question] Will AI Worldview Prize Funding Be Replaced?

Jordan Arel · Nov 13, 2022, 5:10 PM · 26 points · 4 comments · 1 min read

Jordan Arel’s Quick takes

Jordan Arel · Nov 9, 2022, 1:14 AM · 2 points · 14 comments · 1 min read

[Question] What Criteria Determines Who Gets Into EAG & EAGx?

Jordan Arel · Sep 26, 2022, 10:03 PM · 10 points · 2 comments · 1 min read

Fine-Grained Karma Voting

Jordan Arel · Sep 26, 2022, 6:58 PM · 5 points · 1 comment · 1 min read

Why Wasting EA Money is Bad

Jordan Arel · Sep 22, 2022, 1:45 AM · 47 points · 20 comments · 5 min read