
Xander123

Karma: 437

Apply to HAIST/MAIA's AI Governance Workshop in DC (Feb 17-20)

Phosphorous · Jan 28, 2023, 12:45 AM
15 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

AGISF adaptation for in-person groups

Sam Marks · Jan 17, 2023, 6:33 PM
30 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Update on Harvard AI Safety Team and MIT AI Alignment

Xander123 · 2 Dec 2022 6:09 UTC
71 points
3 comments · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]

kuhanj · 2 Dec 2022 1:07 UTC
83 points
0 comments · 1 min read · EA link

Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley

Max Nadeau · 27 Oct 2022 1:39 UTC
95 points
5 comments · 12 min read · EA link

Announcing the Harvard AI Safety Team

Xander123 · 30 Jun 2022 18:34 UTC
128 points
4 comments · 5 min read · EA link