Eleni_A

Karma: 389

Machine Learning for Scientific Discovery—AI Safety Camp

Eleni_A · Jan 6, 2023, 3:06 AM
9 points
0 comments · 1 min read · EA link

[Question] Book recommendations for the history of ML?

Eleni_A · Dec 28, 2022, 11:45 PM
10 points
4 comments · 1 min read · EA link

Why I think that teaching philosophy is high impact

Eleni_A · Dec 19, 2022, 11:00 PM
17 points
2 comments · 2 min read · EA link

Cognitive science and failed AI forecasts

Eleni_A · Nov 18, 2022, 2:25 PM
13 points
0 comments · 2 min read · EA link

My summary of “Pragmatic AI Safety”

Eleni_A · Nov 5, 2022, 2:47 PM
14 points
0 comments · 5 min read · EA link

Against the weirdness heuristic

Eleni_A · Oct 5, 2022, 2:13 PM
5 points
0 comments · 2 min read · EA link

There is no royal road to alignment

Eleni_A · Sep 17, 2022, 1:24 PM
18 points
2 comments · 3 min read · EA link

It’s (not) how you use it

Eleni_A · Sep 7, 2022, 1:28 PM
6 points
3 comments · 2 min read · EA link

A New York Times article on AI risk

Eleni_A · Sep 6, 2022, 12:46 AM
20 points
0 comments · 1 min read · EA link
(www.nytimes.com)

Three scenarios of pseudo-alignment

Eleni_A · Sep 5, 2022, 8:26 PM
7 points
0 comments · 3 min read · EA link

An Epistemological Account of Intuitions in Science

Eleni_A · Sep 3, 2022, 11:21 PM
5 points
0 comments · 17 min read · EA link

But what are *your* core values?

Eleni_A · Sep 3, 2022, 1:51 PM
15 points
0 comments · 2 min read · EA link

Alignment is hard. Communicating that, might be harder

Eleni_A · Sep 1, 2022, 11:45 AM
17 points
1 comment · 3 min read · EA link