Max_He-Ho

Karma: 30

Doing a PhD in Philosophy of AI. Working on conceptual AI Safety things.

Against racing to AGI: Cooperation, deterrence, and catastrophic risks

Max_He-Ho · 29 Jul 2025 22:22 UTC
6 points
1 comment · 1 min read · EA link
(philpapers.org)

Misalignment or misuse? The AGI alignment tradeoff

Max_He-Ho · 20 Jun 2025 10:41 UTC
6 points
0 comments · 1 min read · EA link
(www.arxiv.org)

ErFiN Discussion on AI X-risk

Max_He-Ho · 2 Jun 2024 8:03 UTC
3 points
0 comments · 1 min read · EA link

ErFiN Project work

Max_He-Ho · 17 Mar 2024 20:35 UTC
2 points
0 comments · 1 min read · EA link

ErFiN Project work

Max_He-Ho · 17 Mar 2024 20:31 UTC
2 points
0 comments · 1 min read · EA link

ErFiN Project work

Max_He-Ho · 5 Mar 2024 9:39 UTC
2 points
0 comments · 1 min read · EA link

Pessimism about AI Safety

Max_He-Ho · 2 Apr 2023 7:57 UTC
5 points
0 comments · 25 min read · EA link
(www.lesswrong.com)