Thomas Kwa🔹
Karma: 3,919
AI safety researcher
A sliding scale for donation percentage · 22 Jan 2026 23:00 UTC · 98 points · 11 comments · 2 min read · EA
Introducing The Spending What We Must Pledge · 1 Apr 2025 7:11 UTC · 246 points · 8 comments · 2 min read · EA
Misleading phrase in a GiveWell Youtube ad · 5 Jan 2023 10:28 UTC · 86 points · 13 comments · 1 min read · EA
EA forum content might be declining in quality. Here are some possible mechanisms. · 24 Sep 2022 22:24 UTC · 130 points · 46 comments · 2 min read · EA
Most problems fall within a 100x tractability range (under certain assumptions) · 4 May 2022 0:06 UTC · 116 points · 25 comments · 4 min read · EA
How dath ilan coordinates around solving AI alignment · 14 Apr 2022 1:53 UTC · 13 points · 1 comment · 5 min read · EA
The case for infant outreach · 1 Apr 2022 4:25 UTC · 249 points · 8 comments · 4 min read · EA
Can we simulate human evolution to create a somewhat aligned AGI? · 29 Mar 2022 1:23 UTC · 19 points · 0 comments · 7 min read · EA
Effectiveness is a Conjunction of Multipliers · 25 Mar 2022 18:44 UTC · 254 points · 34 comments · 4 min read · EA
Penn EA Residency Takeaways · 12 Nov 2021 9:34 UTC · 99 points · 10 comments · 9 min read · EA
“Hinge of History” Refuted · 1 Apr 2021 7:00 UTC · 174 points · 6 comments · 6 min read · EA
Thomas Kwa’s Quick takes · 23 Sep 2020 19:25 UTC · 2 points · 84 comments · EA