Jordan Arel

Karma: 417

Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing various existential risks, making no assumptions about which risks were most likely.

Upon discovering Effective Altruism in January 2022, while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at AI X-risk, and moved to Berkeley to do longtermist community-building work.

I am now looking to close down a small business I have been running so that I can research AI safety and other longtermist crucial considerations full time. If any of my work is relevant to open lines of research, I am open to offers of employment as a researcher or research assistant.

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · 17 Aug 2024 22:09 UTC
15 points · 12 comments · 4 min read · EA link

Designing Artificial Wisdom: Decision Forecasting AI & Futarchy

Jordan Arel · 14 Jul 2024 5:10 UTC
5 points · 1 comment · 6 min read · EA link

Designing Artificial Wisdom: GitWise and AlphaWise

Jordan Arel · 13 Jul 2024 0:04 UTC
6 points · 1 comment · 7 min read · EA link

Designing Artificial Wisdom: The Wise Workflow Research Organization

Jordan Arel · 12 Jul 2024 6:57 UTC
14 points · 1 comment · 9 min read · EA link

On Artificial Wisdom

Jordan Arel · 11 Jul 2024 7:14 UTC
22 points · 1 comment · 14 min read · EA link

10 Cruxes of Artificial Sentience

Jordan Arel · 1 Jul 2024 2:46 UTC
31 points · 0 comments · 3 min read · EA link

[Question] What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?

Jordan Arel · 30 Apr 2024 18:39 UTC
14 points · 0 comments · 1 min read · EA link

What if doing the most good = benevolent AI takeover and human extinction?

Jordan Arel · 22 Mar 2024 19:56 UTC
2 points · 4 comments · 3 min read · EA link

[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk

Jordan Arel · 11 Jul 2023 20:32 UTC
4 points · 7 comments · 1 min read · EA link

[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?

Jordan Arel · 10 Jan 2023 8:30 UTC
11 points · 8 comments · 1 min read · EA link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · 8 Dec 2022 21:44 UTC
34 points · 12 comments · 14 min read · EA link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · 6 Dec 2022 22:36 UTC
5 points · 4 comments · 3 min read · EA link

Maybe Utilitarianism Is More Usefully A Theory For Deciding Between Other Ethical Theories

Jordan Arel · 17 Nov 2022 17:31 UTC
6 points · 3 comments · 2 min read · EA link

[Question] Will AI Worldview Prize Funding Be Replaced?

Jordan Arel · 13 Nov 2022 17:10 UTC
26 points · 4 comments · 1 min read · EA link

Jordan Arel’s Quick takes

Jordan Arel · 9 Nov 2022 1:14 UTC
2 points · 14 comments · 1 min read · EA link

[Question] What Criteria Determines Who Gets Into EAG & EAGx?

Jordan Arel · 26 Sep 2022 22:03 UTC
10 points · 2 comments · 1 min read · EA link

Fine-Grained Karma Voting

Jordan Arel · 26 Sep 2022 18:58 UTC
5 points · 1 comment · 1 min read · EA link

Why Wasting EA Money is Bad

Jordan Arel · 22 Sep 2022 1:45 UTC
47 points · 20 comments · 5 min read · EA link

How To Actually Succeed

Jordan Arel · 12 Sep 2022 22:33 UTC
11 points · 0 comments · 5 min read · EA link

[Question] How have nuclear winter models evolved?

Jordan Arel · 11 Sep 2022 22:40 UTC
14 points · 3 comments · 1 min read · EA link