Jordan Arel

Karma: 458

Since I was quite young, my goal has been to help as many sentient beings as possible, as much as possible, and at around age 13 I decided to prioritize X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book, “Ways to Save The World,” which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

Upon discovering Effective Altruism in January 2022 while studying social entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist community building work.

I am now looking to close down a small business I have been running so that I can work full time on AI-enabled safety research and longtermist trajectory-change research, including concrete mechanisms. I welcome offers of employment or funding as a researcher in these areas.

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:44 PM
34 points
12 comments · 14 min read · EA link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · Dec 6, 2022, 10:36 PM
5 points
4 comments · 3 min read · EA link

Maybe Utilitarianism Is More Usefully A Theory For Deciding Between Other Ethical Theories

Jordan Arel · Nov 17, 2022, 5:31 PM
6 points
3 comments · 2 min read · EA link

[Question] Will AI Worldview Prize Funding Be Replaced?

Jordan Arel · Nov 13, 2022, 5:10 PM
26 points
4 comments · 1 min read · EA link

Jordan Arel’s Quick takes

Jordan Arel · Nov 9, 2022, 1:14 AM
2 points
14 comments · 1 min read · EA link