
So8res

Karma: 3,561

Distinguishing test from training

So8res · Nov 29, 2022, 9:41 PM
27 points
0 comments · 1 min read · EA link

How could we know that an AGI system will have good consequences?

So8res · Nov 7, 2022, 10:42 PM
25 points
0 comments · 1 min read · EA link

Superintelligent AI is necessary for an amazing future, but far from sufficient

So8res · Oct 31, 2022, 9:16 PM
35 points
5 comments · 1 min read · EA link

Notes on “Can you control the past”

So8res · Oct 20, 2022, 3:41 AM
15 points
0 comments · 1 min read · EA link

Decision theory does not imply that we get to have nice things

So8res · Oct 18, 2022, 3:04 AM
36 points
0 comments · 1 min read · EA link

Contra shard theory, in the context of the diamond maximizer problem

So8res · Oct 13, 2022, 11:51 PM
27 points
0 comments · 1 min read · EA link

Niceness is unnatural

So8res · Oct 13, 2022, 1:30 AM
20 points
1 comment · 1 min read · EA link

Don’t leave your fingerprints on the future

So8res · Oct 8, 2022, 12:35 AM
93 points
4 comments · 1 min read · EA link

What does it mean for an AGI to be ‘safe’?

So8res · Oct 7, 2022, 4:43 AM
53 points
21 comments · 1 min read · EA link

Warning Shots Probably Wouldn’t Change The Picture Much

So8res · Oct 6, 2022, 5:15 AM
93 points
20 comments · 2 min read · EA link

Humans aren’t fitness maximizers

So8res · Oct 4, 2022, 1:32 AM
30 points
2 comments · 5 min read · EA link

Where I currently disagree with Ryan Greenblatt’s version of the ELK approach

So8res · Sep 29, 2022, 9:19 PM
21 points
0 comments · 5 min read · EA link

AGI ruin scenarios are likely (and disjunctive)

So8res · Jul 27, 2022, 3:24 AM
53 points
5 comments · 6 min read · EA link

Brainstorm of things that could force an AI team to burn their lead

So8res · Jul 25, 2022, 12:00 AM
26 points
1 comment · 13 min read · EA link

A note about differential technological development

So8res · Jul 24, 2022, 11:41 PM
58 points
8 comments · 6 min read · EA link

On how various plans miss the hard bits of the alignment challenge

So8res · Jul 12, 2022, 5:35 AM
126 points
13 comments · 29 min read · EA link

A central AI alignment problem: capabilities generalization, and the sharp left turn

So8res · Jun 15, 2022, 2:19 PM
53 points
2 comments · 10 min read · EA link

Visible Thoughts Project and Bounty Announcement

So8res · Nov 30, 2021, 12:35 AM
35 points
2 comments · 13 min read · EA link

Soares, Tallinn, and Yudkowsky discuss AGI cognition

EliezerYudkowsky · Nov 29, 2021, 5:28 PM
15 points
0 comments · 40 min read · EA link

Altruistic Motivations

So8res · Jan 4, 2019, 8:38 PM
54 points
7 comments · 3 min read · EA link
(mindingourway.com)