Akash

Karma: 2,089

Longtermist movement-builder. How can we find and mentor talented people to reduce existential risk?

Interested in community-building, management, entrepreneurship, communication, and AI Alignment.

Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.

Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas

Akash · 25 Nov 2022 20:47 UTC
8 points
0 comments · 1 min read · EA link

Winners of the community-building writing contest

Akash · 25 Nov 2022 16:33 UTC
19 points
0 comments · 3 min read · EA link

Announcing AI Alignment Awards: $100k research contests about goal misgeneralization & corrigibility

Akash · 22 Nov 2022 22:19 UTC
51 points
1 comment · 1 min read · EA link

Ways to buy time

Akash · 12 Nov 2022 19:31 UTC
45 points
1 comment · 1 min read · EA link

Apply to attend an AI safety workshop in Berkeley (Nov 18-21)

Akash · 6 Nov 2022 18:06 UTC
19 points
0 comments · 1 min read · EA link

Instead of technical research, more people should focus on buying time

Akash · 5 Nov 2022 20:43 UTC
96 points
32 comments · 1 min read · EA link

Resources that (I think) new alignment researchers should know about

Akash · 28 Oct 2022 22:13 UTC
19 points
2 comments · 1 min read · EA link

Consider trying Vivek Hebbar’s alignment exercises

Akash · 24 Oct 2022 19:46 UTC
16 points
0 comments · 1 min read · EA link

Possible miracles

Akash · 9 Oct 2022 18:17 UTC
37 points
2 comments · 1 min read · EA link

7 traps that (we think) new alignment researchers often fall into

Akash · 27 Sep 2022 23:13 UTC
72 points
13 comments · 1 min read · EA link

Apply for mentorship in AI Safety field-building

Akash · 17 Sep 2022 19:03 UTC
18 points
0 comments · 1 min read · EA link

AI Safety field-building projects I’d like to see

Akash · 11 Sep 2022 23:45 UTC
18 points
4 comments · 7 min read · EA link
(www.lesswrong.com)

13 background claims about EA

Akash · 7 Sep 2022 3:54 UTC
69 points
16 comments · 3 min read · EA link

Criticism of EA Criticisms: Is the real disagreement about cause prio?

Akash · 2 Sep 2022 12:15 UTC
29 points
5 comments · 3 min read · EA link

Four questions I ask AI safety researchers

Akash · 17 Jul 2022 17:25 UTC
30 points
3 comments · 1 min read · EA link

A summary of every “Highlights from the Sequences” post

Akash · 15 Jul 2022 23:05 UTC
47 points
3 comments · 16 min read · EA link