Jim Buhler
Karma: 1,173
www.jimbuhler.site
Also on LessWrong (with different essays).
Jim Buhler’s Quick takes · 18 Nov 2025 15:42 UTC · 6 points · 7 comments · EA · link
A list of resources on Cluelessness · 17 Nov 2025 13:59 UTC · 32 points · 3 comments · 3 min read · EA · link
Discussions of Longtermism should focus on the problem of Unawareness · 20 Oct 2025 13:17 UTC · 40 points · 1 comment · 34 min read · EA · link
Sadism and s-risks from first principles · 22 Sep 2025 14:08 UTC · 11 points · 1 comment · 4 min read · EA · link
Brief notes on key limitations in Mogensen’s Maximal Cluelessness · 8 Sep 2025 13:28 UTC · 24 points · 0 comments · 1 min read · EA · link
Modeling the (dis)value of human survival and expansion · 1 Sep 2025 13:11 UTC · 26 points · 0 comments · 2 min read · EA · link
Understanding Sadism · 18 Aug 2025 13:25 UTC · 20 points · 2 comments · 8 min read · EA · link
A summary of Tomasik’s Charity Cost-Effectiveness in an Uncertain World · 11 Aug 2025 13:13 UTC · 24 points · 0 comments · 2 min read · EA · link
Quick takeaways from Griffes’ Doing good while clueless · 7 Aug 2025 9:59 UTC · 10 points · 4 comments · 2 min read · EA · link
Chicken reforms and veg advocacy may contribute to the Small Animal Replacement Problem—it’s not just environmental strategies · 3 Jun 2025 14:33 UTC · 56 points · 5 comments · 2 min read · EA · link
Sentientia, Reversomelas, and the Small Animal Replacement Problem · 15 May 2025 12:29 UTC · 46 points · 10 comments · 4 min read · EA · link
Optimistic Longtermism and Suspicious Judgment Calls · 24 Mar 2025 15:55 UTC · 24 points · 30 comments · 4 min read · EA · link
An Evolutionary Argument undermining Longtermist thinking? · 3 Mar 2025 14:47 UTC · 31 points · 13 comments · 8 min read · EA · link
The ‘Dog vs Cat’ cluelessness dilemma (and whether it makes sense) · 28 Nov 2024 11:34 UTC · 24 points · 28 comments · 2 min read · EA · link
[Question] How bad would AI progress need to be for us to think general technological progress is also bad? · 6 Jul 2024 18:44 UTC · 10 points · 0 comments · 1 min read · EA · link
Future technological progress does NOT correlate with methods that involve less suffering · 1 Aug 2023 9:30 UTC · 64 points · 12 comments · 4 min read · EA · link
What values will control the Future? Overview, conclusion, and directions for future work · 18 Jul 2023 16:11 UTC · 28 points · 0 comments · 1 min read · EA · link
Why we may expect our successors not to care about suffering · 10 Jul 2023 13:54 UTC · 70 points · 31 comments · 8 min read · EA · link
Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot · 23 May 2023 13:07 UTC · 64 points · 5 comments · 16 min read · EA · link
The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have? · 6 May 2023 19:28 UTC · 52 points · 12 comments · 4 min read · EA · link