Holden Karnofsky (Karma: 8,640)
- Case studies on social-welfare-based standards in various industries · 20 Jun 2024 13:33 UTC · 73 points · 2 comments · 1 min read · EA
- Joining the Carnegie Endowment for International Peace · 29 Apr 2024 15:45 UTC · 228 points · 14 comments · 2 min read · EA
- Good job opportunities for helping with the most important century · 18 Jan 2024 19:21 UTC · 46 points · 1 comment · 4 min read · EA · (www.cold-takes.com)
- We’re Not Ready: thoughts on “pausing” and responsible scaling policies · 27 Oct 2023 15:19 UTC · 150 points · 23 comments · 1 min read · EA
- 3 levels of threat obfuscation · 2 Aug 2023 17:09 UTC · 31 points · 0 comments · 6 min read · EA · (www.alignmentforum.org)
- A Playbook for AI Risk Reduction (focused on misaligned AI) · 6 Jun 2023 18:05 UTC · 81 points · 17 comments · 1 min read · EA
- Seeking (Paid) Case Studies on Standards · 26 May 2023 17:58 UTC · 99 points · 14 comments · 1 min read · EA
- Success without dignity: a nearcasting story of avoiding catastrophe by luck · 15 Mar 2023 20:17 UTC · 113 points · 3 comments · 1 min read · EA
- What does Bing Chat tell us about AI risk? · 28 Feb 2023 18:47 UTC · 99 points · 8 comments · 2 min read · EA · (www.cold-takes.com)
- How major governments can help with the most important century · 24 Feb 2023 19:37 UTC · 56 points · 4 comments · 4 min read · EA · (www.cold-takes.com)
- Taking a leave of absence from Open Philanthropy to work on AI safety · 23 Feb 2023 19:05 UTC · 420 points · 31 comments · 2 min read · EA
- What AI companies can do today to help with the most important century · 20 Feb 2023 17:40 UTC · 104 points · 8 comments · 11 min read · EA · (www.cold-takes.com)
- Jobs that can help with the most important century · 12 Feb 2023 18:19 UTC · 57 points · 2 comments · 32 min read · EA · (www.cold-takes.com)
- We’re no longer “pausing most new longtermist funding commitments” · 30 Jan 2023 19:29 UTC · 201 points · 39 comments · 6 min read · EA
- Spreading messages to help with the most important century · 25 Jan 2023 20:35 UTC · 128 points · 21 comments · 18 min read · EA · (www.cold-takes.com)
- How we could stumble into AI catastrophe · 16 Jan 2023 14:52 UTC · 83 points · 0 comments · 31 min read · EA · (www.cold-takes.com)
- Transformative AI issues (not just misalignment): an overview · 6 Jan 2023 2:19 UTC · 36 points · 0 comments · 22 min read · EA · (www.cold-takes.com)
- Racing through a minefield: the AI deployment problem · 31 Dec 2022 21:44 UTC · 79 points · 1 comment · 13 min read · EA · (www.cold-takes.com)
- High-level hopes for AI alignment · 20 Dec 2022 2:11 UTC · 123 points · 14 comments · 19 min read · EA · (www.cold-takes.com)
- AI Safety Seems Hard to Measure · 11 Dec 2022 1:31 UTC · 90 points · 4 comments · 14 min read · EA · (www.cold-takes.com)