
Eleni_A

Karma: 389

It’s (not) how you use it

Eleni_A · Sep 7, 2022, 1:28 PM
6 points · 3 comments · 2 min read · EA link

A New York Times article on AI risk

Eleni_A · Sep 6, 2022, 12:46 AM
20 points · 0 comments · 1 min read · EA link (www.nytimes.com)

Three scenarios of pseudo-alignment

Eleni_A · Sep 5, 2022, 8:26 PM
7 points · 0 comments · 3 min read · EA link

An Epistemological Account of Intuitions in Science

Eleni_A · Sep 3, 2022, 11:21 PM
5 points · 0 comments · 17 min read · EA link

But what are *your* core values?

Eleni_A · Sep 3, 2022, 1:51 PM
15 points · 0 comments · 2 min read · EA link

Alignment is hard. Communicating that might be harder

Eleni_A · Sep 1, 2022, 11:45 AM
17 points · 1 comment · 3 min read · EA link

Who ordered alignment’s apple?

Eleni_A · Aug 28, 2022, 2:24 PM
5 points · 0 comments · 3 min read · EA link

Alignment’s phlogiston

Eleni_A · Aug 18, 2022, 1:41 AM
18 points · 1 comment · 2 min read · EA link

Deception as the optimal: mesa-optimizers and inner alignment

Eleni_A · Aug 16, 2022, 3:45 AM
19 points · 0 comments · 5 min read · EA link

“Normal accidents” and AI systems

Eleni_A · Aug 8, 2022, 6:43 PM
5 points · 1 comment · 1 min read · EA link (www.achan.ca)

[Question] AI risks: the most convincing argument

Eleni_A · Aug 6, 2022, 8:26 PM
7 points · 2 comments · 1 min read · EA link

[Question] Slowing down AI progress?

Eleni_A · Jul 26, 2022, 8:46 AM
16 points · 9 comments · 1 min read · EA link

Why did I misunderstand utilitarianism so badly?

Eleni_A · Jul 16, 2022, 4:36 PM
18 points · 1 comment · 4 min read · EA link

Talking EA to my philosophy friends

Eleni_A · Jul 14, 2022, 8:48 AM
11 points · 0 comments · 5 min read · EA link