Prometheus (Karma: 248)
Back to the Past to the Future
Prometheus · 18 Oct 2023 16:51 UTC · 4 points · 0 comments · 1 min read · EA · link
Why Is No One Trying To Align Profit Incentives With Alignment Research?
Prometheus · 23 Aug 2023 13:19 UTC · 17 points · 2 comments · 4 min read · EA · link (www.lesswrong.com)
Slaying the Hydra: toward a new game board for AI
Prometheus · 23 Jun 2023 17:04 UTC · 3 points · 2 comments · 1 min read · EA · link
Lightning Post: Things people in AI Safety should stop talking about
Prometheus · 20 Jun 2023 15:00 UTC · 5 points · 3 comments · 1 min read · EA · link
Aligned Objectives Prize Competition
Prometheus · 15 Jun 2023 12:42 UTC · 8 points · 0 comments · 1 min read · EA · link
AI Safety Strategy—A new organization for better timelines
Prometheus · 14 Jun 2023 20:41 UTC · 8 points · 0 comments · 2 min read · EA · link
Prometheus’s Quick takes
Prometheus · 13 Jun 2023 23:26 UTC · 3 points · 2 comments · 1 min read · EA · link
Using Consensus Mechanisms as an approach to Alignment
Prometheus · 11 Jun 2023 13:24 UTC · 14 points · 0 comments · 1 min read · EA · link
Humans are not prepared to operate outside their moral training distribution
Prometheus · 10 Apr 2023 21:44 UTC · 12 points · 0 comments · 1 min read · EA · link
Widening Overton Window—Open Thread
Prometheus · 31 Mar 2023 10:06 UTC · 12 points · 5 comments · 1 min read · EA · link (www.lesswrong.com)
4 Key Assumptions in AI Safety
Prometheus · 7 Nov 2022 10:50 UTC · 5 points · 0 comments · 1 min read · EA · link
Five Areas I Wish EAs Gave More Focus
Prometheus · 27 Oct 2022 6:13 UTC · 8 points · 14 comments · 4 min read · EA · link