Anthony DiGiovanni

Karma: 1,133

I’m Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.

[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction

Anthony DiGiovanni · 16 Sep 2022 14:35 UTC
31 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni · 1 Jul 2021 21:01 UTC
125 points
10 comments · 32 min read · EA link

antimonyanthony’s Quick takes

Anthony DiGiovanni · 19 Sep 2020 16:05 UTC
3 points
16 comments · 1 min read · EA link