Anthony DiGiovanni

Karma: 1,133

I’m Anthony DiGiovanni, a suffering-focused AI safety researcher at the Center on Long-Term Risk. I (occasionally) write about altruism-relevant topics on my Substack. All opinions my own.

[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction

Anthony DiGiovanni · 16 Sep 2022 14:35 UTC
31 points
0 comments · 1 min read · EA link
(www.lesswrong.com)