Steven Byrnes

Karma: 1,503

Hi, I’m Steve Byrnes, an AGI safety / AI alignment researcher in Boston, MA, USA, with a particular focus on brain algorithms. See https://sjbyrnes.com/agi.html for a summary of my research and a sorted list of my writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, Twitter, Mastodon, Threads, Bluesky, GitHub, Wikipedia, Physics-StackExchange, LinkedIn

“Artificial General Intelligence”: an extremely brief FAQ

Steven Byrnes · Mar 11, 2024, 5:49 PM
12 points
0 comments · EA link

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes · Mar 11, 2024, 5:47 PM
44 points
2 comments · EA link

“X distracts from Y” as a thinly-disguised fight over group status / politics

Steven Byrnes · Sep 25, 2023, 3:29 PM
91 points
9 comments · 8 min read · EA link

Munk AI debate: confusions and possible cruxes

Steven Byrnes · Jun 27, 2023, 3:01 PM
142 points
10 comments · EA link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · Apr 30, 2023, 2:38 PM
10 points
0 comments · EA link

What does it take to defend the world against out-of-control AGIs?

Steven Byrnes · Oct 25, 2022, 2:47 PM
43 points
0 comments · EA link

Changing the world through slack & hobbies

Steven Byrnes · Jul 21, 2022, 6:01 PM
130 points
10 comments · 10 min read · EA link

“Intro to brain-like-AGI safety” series—just finished!

Steven Byrnes · May 17, 2022, 3:35 PM
15 points
0 comments · 1 min read · EA link

“Intro to brain-like-AGI safety” series—halfway point!

Steven Byrnes · Mar 9, 2022, 3:21 PM
8 points
0 comments · 2 min read · EA link

A case for AGI safety research far in advance

Steven Byrnes · Mar 26, 2021, 12:59 PM
7 points
0 comments · 1 min read · EA link
(www.alignmentforum.org)

[U.S. specific] PPP: free money for self-employed & orgs (time-sensitive)

Steven Byrnes · Jan 9, 2021, 7:39 PM
14 points
0 comments · 2 min read · EA link