Max_He-Ho
Karma: 30
Doing a PhD in Philosophy of AI; working on conceptual AI Safety.
Against racing to AGI: Cooperation, deterrence, and catastrophic risks
Max_He-Ho · 29 Jul 2025 22:22 UTC · 6 points · 1 comment · 1 min read · EA · link (philpapers.org)

Misalignment or misuse? The AGI alignment tradeoff
Max_He-Ho · 20 Jun 2025 10:41 UTC · 6 points · 0 comments · 1 min read · EA · link (www.arxiv.org)

ErFiN Discussion on AI X-risk
Max_He-Ho · 2 Jun 2024 8:03 UTC · 3 points · 0 comments · 1 min read · EA · link

ErFiN Project work
Max_He-Ho · 17 Mar 2024 20:35 UTC · 2 points · 0 comments · 1 min read · EA · link

ErFiN Project work
Max_He-Ho · 17 Mar 2024 20:31 UTC · 2 points · 0 comments · 1 min read · EA · link

ErFiN Project work
Max_He-Ho · 5 Mar 2024 9:39 UTC · 2 points · 0 comments · 1 min read · EA · link

Pessimism about AI Safety
Max_He-Ho · 2 Apr 2023 7:57 UTC · 5 points · 0 comments · 25 min read · EA · link (www.lesswrong.com)