Eleni_A

Karma: 389

Talking EA to my philosophy friends

Eleni_A · 14 Jul 2022 8:48 UTC
11 points
0 comments · 5 min read · EA link

Why did I misunderstand utilitarianism so badly?

Eleni_A · 16 Jul 2022 16:36 UTC
18 points
1 comment · 4 min read · EA link

[Question] Slowing down AI progress?

Eleni_A · 26 Jul 2022 8:46 UTC
16 points
9 comments · 1 min read · EA link

[Question] AI risks: the most convincing argument

Eleni_A · 6 Aug 2022 20:26 UTC
7 points
2 comments · 1 min read · EA link

“Normal accidents” and AI systems

Eleni_A · 8 Aug 2022 18:43 UTC
5 points
1 comment · 1 min read · EA link
(www.achan.ca)

Deception as the optimal: mesa-optimizers and inner alignment

Eleni_A · 16 Aug 2022 3:45 UTC
19 points
0 comments · 5 min read · EA link

Alignment’s phlogiston

Eleni_A · 18 Aug 2022 1:41 UTC
18 points
1 comment · 2 min read · EA link

Who ordered alignment’s apple?

Eleni_A · 28 Aug 2022 14:24 UTC
5 points
0 comments · 3 min read · EA link

Alignment is hard. Communicating that, might be harder

Eleni_A · 1 Sep 2022 11:45 UTC
17 points
1 comment · 3 min read · EA link

But what are *your* core values?

Eleni_A · 3 Sep 2022 13:51 UTC
15 points
0 comments · 2 min read · EA link

An Epistemological Account of Intuitions in Science

Eleni_A · 3 Sep 2022 23:21 UTC
5 points
0 comments · 17 min read · EA link

Three scenarios of pseudo-alignment

Eleni_A · 5 Sep 2022 20:26 UTC
7 points
0 comments · 3 min read · EA link

A New York Times article on AI risk

Eleni_A · 6 Sep 2022 0:46 UTC
20 points
0 comments · 1 min read · EA link
(www.nytimes.com)

It’s (not) how you use it

Eleni_A · 7 Sep 2022 13:28 UTC
6 points
3 comments · 2 min read · EA link

There is no royal road to alignment

Eleni_A · 17 Sep 2022 13:24 UTC
18 points
2 comments · 3 min read · EA link

Against the weirdness heuristic

Eleni_A · 5 Oct 2022 14:13 UTC
5 points
0 comments · 2 min read · EA link