Akash

Karma: 4,166

AI safety governance/strategy research & field-building.

Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.

Verification methods for international AI agreements

Akash · 31 Aug 2024 14:58 UTC
20 points
0 comments · 1 min read · EA link
(arxiv.org)

Advice to junior AI governance researchers

Akash · 8 Jul 2024 19:19 UTC
38 points
3 comments · 1 min read · EA link

Mitigating extreme AI risks amid rapid progress [Linkpost]

Akash · 21 May 2024 20:04 UTC
36 points
1 comment · 1 min read · EA link

Cooperating with aliens and AGIs: An ECL explainer

Chi · 24 Feb 2024 22:58 UTC
53 points
9 comments · 20 min read · EA link

OpenAI’s Preparedness Framework: Praise & Recommendations

Akash · 2 Jan 2024 16:20 UTC
16 points
1 comment · 1 min read · EA link

Navigating emotions in an uncertain & confusing world

Akash · 20 Nov 2023 18:16 UTC
33 points
0 comments · 1 min read · EA link

Chinese scientists acknowledge xrisk & call for international regulatory body [Linkpost]

Akash · 1 Nov 2023 13:28 UTC
31 points
0 comments · 1 min read · EA link
(www.ft.com)

Winners of AI Alignment Awards Research Contest

Akash · 13 Jul 2023 16:14 UTC
49 points
1 comment · 1 min read · EA link

AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI

Center for AI Safety · 30 May 2023 11:44 UTC
16 points
3 comments · 6 min read · EA link
(newsletter.safe.ai)

AI Safety Newsletter #7: Disinformation, Governance Recommendations for AI labs, and Senate Hearings on AI

Center for AI Safety · 23 May 2023 21:42 UTC
23 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

Eisenhower’s Atoms for Peace Speech

Akash · 17 May 2023 16:10 UTC
17 points
1 comment · 1 min read · EA link

AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control

Center for AI Safety · 16 May 2023 15:14 UTC
32 points
1 comment · 6 min read · EA link
(newsletter.safe.ai)

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

Center for AI Safety · 9 May 2023 15:26 UTC
60 points
0 comments · 4 min read · EA link
(newsletter.safe.ai)

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

Center for AI Safety · 2 May 2023 16:51 UTC
35 points
2 comments · 5 min read · EA link
(newsletter.safe.ai)

Discussion about AI Safety funding (FB transcript)

Akash · 30 Apr 2023 19:05 UTC
104 points
10 comments · 6 min read · EA link

Reframing the burden of proof: Companies should prove that models are safe (rather than expecting auditors to prove that models are dangerous)

Akash · 25 Apr 2023 18:49 UTC
35 points
1 comment · 1 min read · EA link

AI Safety Newsletter #3: AI policy proposals and a new challenger approaches

Oliver Z · 25 Apr 2023 16:15 UTC
35 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

DeepMind and Google Brain are merging [Linkpost]

Akash · 20 Apr 2023 18:47 UTC
32 points
1 comment · 1 min read · EA link

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

Oliver Z · 18 Apr 2023 18:36 UTC
56 points
1 comment · 4 min read · EA link
(newsletter.safe.ai)

Request to AGI organizations: Share your views on pausing AI progress

Akash · 11 Apr 2023 17:30 UTC
85 points
1 comment · 1 min read · EA link