
mic

Karma: 1,942

Enhancing biosecurity with language models: defining research directions

mic · 26 Mar 2024 12:30 UTC
11 points
1 comment · 13 min read · EA link
(papers.ssrn.com)

Solving alignment isn’t enough for a flourishing future

mic · 2 Feb 2024 18:22 UTC
26 points
0 comments · 22 min read · EA link
(papers.ssrn.com)

SPAR seeks advisors and students for AI safety projects (Second Wave)

mic · 14 Sep 2023 23:09 UTC
14 points
0 comments · 1 min read · EA link

Ideas for improving epistemics in AI safety outreach

mic · 21 Aug 2023 19:56 UTC
31 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Supervised Program for Alignment Research (SPAR) at UC Berkeley: Spring 2023 summary

mic · 19 Aug 2023 2:32 UTC
18 points
0 comments · 6 min read · EA link
(www.lesswrong.com)

EU’s AI ambitions at risk as US pushes to water down international treaty (linkpost)

mic · 31 Jul 2023 0:34 UTC
9 points
0 comments · 1 min read · EA link
(www.euractiv.com)

OpenAI introduces function calling for GPT-4

mic · 20 Jun 2023 1:58 UTC
26 points
0 comments · 1 min read · EA link

Navigating the Open-Source AI Landscape: Data, Funding, and Safety

AndreFerretti · 12 Apr 2023 10:30 UTC
23 points
3 comments · 11 min read · EA link