
Eleni_A

Karma: 389

The Intentional Stance, LLMs Edition

Eleni_A · May 1, 2024, 3:22 PM
8 points
2 comments · 8 min read · EA link

I designed an AI safety course (for a philosophy department)

Eleni_A · Sep 23, 2023, 9:56 PM
27 points
3 comments · 2 min read · EA link

Confusions and updates on STEM AI

Eleni_A · May 19, 2023, 9:34 PM
7 points
0 comments · 1 min read · EA link

AI Alignment in The New Yorker

Eleni_A · May 17, 2023, 9:19 PM
23 points
0 comments · 1 min read · EA link
(www.newyorker.com)

A Study of AI Science Models

Eleni_A · May 13, 2023, 7:14 PM
12 points
4 comments · 24 min read · EA link

A Guide to Forecasting AI Science Capabilities

Eleni_A · Apr 29, 2023, 6:51 AM
19 points
1 comment · 4 min read · EA link

On taking AI risk seriously

Eleni_A · Mar 13, 2023, 5:44 AM
51 points
4 comments · 1 min read · EA link
(www.nytimes.com)

Everything’s normal until it’s not

Eleni_A · Mar 10, 2023, 1:42 AM
6 points
0 comments · 3 min read · EA link

Questions about AI that bother me

Eleni_A · Jan 31, 2023, 6:50 AM
33 points
6 comments · 2 min read · EA link

Emerging Paradigms: The Case of Artificial Intelligence Safety

Eleni_A · Jan 18, 2023, 5:59 AM
16 points
0 comments · 19 min read · EA link

[Question] Should AI writers be prohibited in education?

Eleni_A · Jan 16, 2023, 10:29 PM
3 points
2 comments · 1 min read · EA link

Progress and research disruptiveness

Eleni_A · Jan 12, 2023, 3:45 AM
5 points
0 comments · 1 min read · EA link
(www.nature.com)