
Eleni_A

Karma: 389

The Intentional Stance, LLMs Edition

Eleni_A · 1 May 2024 15:22 UTC
8 points
2 comments · 8 min read · EA link

I designed an AI safety course (for a philosophy department)

Eleni_A · 23 Sep 2023 21:56 UTC
27 points
3 comments · 2 min read · EA link

Confusions and updates on STEM AI

Eleni_A · 19 May 2023 21:34 UTC
7 points
0 comments · 1 min read · EA link

AI Alignment in The New Yorker

Eleni_A · 17 May 2023 21:19 UTC
23 points
0 comments · 1 min read · EA link
(www.newyorker.com)

A Study of AI Science Models

Eleni_A · 13 May 2023 19:14 UTC
12 points
4 comments · 24 min read · EA link

A Guide to Forecasting AI Science Capabilities

Eleni_A · 29 Apr 2023 6:51 UTC
19 points
1 comment · 4 min read · EA link

On taking AI risk seriously

Eleni_A · 13 Mar 2023 5:44 UTC
51 points
4 comments · 1 min read · EA link
(www.nytimes.com)

Everything’s normal until it’s not

Eleni_A · 10 Mar 2023 1:42 UTC
6 points
0 comments · 3 min read · EA link

Questions about AI that bother me

Eleni_A · 31 Jan 2023 6:50 UTC
33 points
6 comments · 2 min read · EA link

Emerging Paradigms: The Case of Artificial Intelligence Safety

Eleni_A · 18 Jan 2023 5:59 UTC
16 points
0 comments · 19 min read · EA link

[Question] Should AI writers be prohibited in education?

Eleni_A · 16 Jan 2023 22:29 UTC
3 points
2 comments · 1 min read · EA link

Progress and research disruptiveness

Eleni_A · 12 Jan 2023 3:45 UTC
5 points
0 comments · 1 min read · EA link
(www.nature.com)