So8res

Karma: 3,537

Quick takes on “AI is easy to control”

So8res · 2 Dec 2023 22:33 UTC
−12 points
4 comments · 1 min read · EA link

Apocalypse insurance, and the hardline libertarian take on AI risk

So8res · 28 Nov 2023 2:09 UTC
21 points
0 comments · 1 min read · EA link

Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense

So8res · 24 Nov 2023 17:37 UTC
38 points
1 comment · 1 min read · EA link

Thoughts on the AI Safety Summit company policy requests and responses

So8res · 31 Oct 2023 23:54 UTC
42 points
3 comments · 1 min read · EA link

AI as a science, and three obstacles to alignment strategies

So8res · 25 Oct 2023 21:02 UTC
41 points
1 comment · 1 min read · EA link

But why would the AI kill us?

So8res · 17 Apr 2023 19:38 UTC
45 points
3 comments · 1 min read · EA link

Misgeneralization as a misnomer

So8res · 6 Apr 2023 20:43 UTC
48 points
0 comments · 1 min read · EA link

If interpretability research goes well, it may get dangerous

So8res · 3 Apr 2023 21:48 UTC
33 points
0 comments · 1 min read · EA link

Hooray for stepping out of the limelight

So8res · 1 Apr 2023 2:45 UTC
103 points
0 comments · 1 min read · EA link

A rough and incomplete review of some of John Wentworth’s research

So8res · 28 Mar 2023 18:52 UTC
27 points
0 comments · 1 min read · EA link

A stylized dialogue on John Wentworth’s claims about markets and optimization

So8res · 25 Mar 2023 22:32 UTC
18 points
0 comments · 1 min read · EA link

Truth and Advantage: Response to a draft of “AI safety seems hard to measure”

So8res · 22 Mar 2023 3:36 UTC
11 points
0 comments · 1 min read · EA link

Deep Deceptiveness

So8res · 21 Mar 2023 2:51 UTC
40 points
1 comment · 1 min read · EA link

Comments on OpenAI’s “Planning for AGI and beyond”

So8res · 3 Mar 2023 23:01 UTC
115 points
7 comments · 1 min read · EA link

Enemies vs Malefactors

So8res · 28 Feb 2023 23:38 UTC
85 points
5 comments · 5 min read · EA link

AI alignment researchers don’t (seem to) stack

So8res · 21 Feb 2023 0:48 UTC
47 points
3 comments · 1 min read · EA link

A personal reflection on SBF

So8res · 7 Feb 2023 17:56 UTC
321 points
23 comments · 19 min read · EA link

Focus on the places where you feel shocked everyone’s dropping the ball

So8res · 2 Feb 2023 0:27 UTC
92 points
6 comments · 1 min read · EA link

Alignment is mostly about making cognition aimable at all

So8res · 30 Jan 2023 15:22 UTC
57 points
3 comments · 1 min read · EA link

Thoughts on AGI organizations and capabilities work

RobBensinger · 7 Dec 2022 19:46 UTC
77 points
7 comments · 5 min read · EA link