Holden Karnofsky
Karma: 8,099
Good job opportunities for helping with the most important century
Holden Karnofsky · 18 Jan 2024 19:21 UTC · 49 points · 1 comment · 4 min read · EA · link (www.cold-takes.com)
We’re Not Ready: thoughts on “pausing” and responsible scaling policies
Holden Karnofsky · 27 Oct 2023 15:19 UTC · 143 points · 23 comments · 1 min read · EA · link
3 levels of threat obfuscation
Holden Karnofsky · 2 Aug 2023 17:09 UTC · 31 points · 0 comments · 6 min read · EA · link (www.alignmentforum.org)
A Playbook for AI Risk Reduction (focused on misaligned AI)
Holden Karnofsky · 6 Jun 2023 18:05 UTC · 81 points · 17 comments · 1 min read · EA · link
Seeking (Paid) Case Studies on Standards
Holden Karnofsky · 26 May 2023 17:58 UTC · 92 points · 14 comments · 1 min read · EA · link
Success without dignity: a nearcasting story of avoiding catastrophe by luck
Holden Karnofsky · 15 Mar 2023 20:17 UTC · 99 points · 3 comments · 1 min read · EA · link
What does Bing Chat tell us about AI risk?
Holden Karnofsky · 28 Feb 2023 18:47 UTC · 100 points · 8 comments · 2 min read · EA · link (www.cold-takes.com)
How major governments can help with the most important century
Holden Karnofsky · 24 Feb 2023 19:37 UTC · 55 points · 4 comments · 4 min read · EA · link (www.cold-takes.com)
Taking a leave of absence from Open Philanthropy to work on AI safety
Holden Karnofsky · 23 Feb 2023 19:05 UTC · 424 points · 31 comments · 2 min read · EA · link
What AI companies can do today to help with the most important century
Holden Karnofsky · 20 Feb 2023 17:40 UTC · 104 points · 8 comments · 11 min read · EA · link (www.cold-takes.com)
Jobs that can help with the most important century
Holden Karnofsky · 12 Feb 2023 18:19 UTC · 52 points · 2 comments · 32 min read · EA · link (www.cold-takes.com)
We’re no longer “pausing most new longtermist funding commitments”
Holden Karnofsky · 30 Jan 2023 19:29 UTC · 199 points · 40 comments · 6 min read · EA · link
Spreading messages to help with the most important century
Holden Karnofsky · 25 Jan 2023 20:35 UTC · 123 points · 21 comments · 18 min read · EA · link (www.cold-takes.com)
How we could stumble into AI catastrophe
Holden Karnofsky · 16 Jan 2023 14:52 UTC · 78 points · 0 comments · 31 min read · EA · link (www.cold-takes.com)
Transformative AI issues (not just misalignment): an overview
Holden Karnofsky · 6 Jan 2023 2:19 UTC · 31 points · 0 comments · 22 min read · EA · link (www.cold-takes.com)
Racing through a minefield: the AI deployment problem
Holden Karnofsky · 31 Dec 2022 21:44 UTC · 79 points · 1 comment · 13 min read · EA · link (www.cold-takes.com)
High-level hopes for AI alignment
Holden Karnofsky · 20 Dec 2022 2:11 UTC · 118 points · 14 comments · 19 min read · EA · link (www.cold-takes.com)
AI Safety Seems Hard to Measure
Holden Karnofsky · 11 Dec 2022 1:31 UTC · 90 points · 3 comments · 14 min read · EA · link (www.cold-takes.com)
Why Would AI “Aim” To Defeat Humanity?
Holden Karnofsky · 29 Nov 2022 18:59 UTC · 24 points · 0 comments · 32 min read · EA · link (www.cold-takes.com)
My takes on the FTX situation will (mostly) be cold, not hot
Holden Karnofsky · 18 Nov 2022 23:57 UTC · 398 points · 33 comments · 5 min read · EA · link