Steven Byrnes

Karma: 1,430

Hi, I’m Steve Byrnes, an AGI safety / AI alignment researcher in Boston, MA, USA, with a particular focus on brain algorithms. See https://sjbyrnes.com/agi.html for a summary of my research and a sorted list of my writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, Twitter, Mastodon, Threads, Bluesky, GitHub, Wikipedia, Physics-StackExchange, LinkedIn.

“Artificial General Intelligence”: an extremely brief FAQ

Steven Byrnes · 11 Mar 2024 17:49 UTC
12 points
0 comments · 1 min read · EA link

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes · 11 Mar 2024 17:47 UTC
44 points
2 comments · 1 min read · EA link

“X distracts from Y” as a thinly-disguised fight over group status / politics

Steven Byrnes · 25 Sep 2023 15:29 UTC
89 points
9 comments · 8 min read · EA link

Munk AI debate: confusions and possible cruxes

Steven Byrnes · 27 Jun 2023 15:01 UTC
142 points
10 comments · 1 min read · EA link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · 30 Apr 2023 14:38 UTC
10 points
0 comments · 1 min read · EA link

What does it take to defend the world against out-of-control AGIs?

Steven Byrnes · 25 Oct 2022 14:47 UTC
43 points
0 comments · 1 min read · EA link

Changing the world through slack & hobbies

Steven Byrnes · 21 Jul 2022 18:01 UTC
130 points
10 comments · 10 min read · EA link

“Intro to brain-like-AGI safety” series—just finished!

Steven Byrnes · 17 May 2022 15:35 UTC
15 points
0 comments · 1 min read · EA link

“Intro to brain-like-AGI safety” series—halfway point!

Steven Byrnes · 9 Mar 2022 15:21 UTC
8 points
0 comments · 2 min read · EA link

A case for AGI safety research far in advance

Steven Byrnes · 26 Mar 2021 12:59 UTC
7 points
0 comments · 1 min read · EA link
(www.alignmentforum.org)

[U.S. specific] PPP: free money for self-employed & orgs (time-sensitive)

Steven Byrnes · 9 Jan 2021 19:39 UTC
14 points
0 comments · 2 min read · EA link