
Xander123

Karma: 437

Apply to HAIST/MAIA's AI Governance Workshop in DC (Feb 17-20)

Phosphorous · 28 Jan 2023 0:45 UTC
15 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

AGISF adaptation for in-person groups

Sam Marks · 17 Jan 2023 18:33 UTC
30 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Update on Harvard AI Safety Team and MIT AI Alignment

Xander123 · 2 Dec 2022 6:09 UTC
71 points
3 comments · 1 min read · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]

kuhanj · 2 Dec 2022 1:07 UTC
83 points
0 comments · 1 min read · EA link

Apply to the Redwood Research Mechanistic Interpretability Experiment (REMIX), a research program in Berkeley

Max Nadeau · 27 Oct 2022 1:39 UTC
95 points
5 comments · 12 min read · EA link

Announcing the Harvard AI Safety Team

Xander123 · 30 Jun 2022 18:34 UTC
128 points
4 comments · 5 min read · EA link