
Jan_Kulveit

Karma: 4,853

Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.

Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality, Human-aligned AI Summer School, Epistea Lab.

Do Not Tile the Lightcone with Your Confused Ontology

Jan_Kulveit · Jun 13, 2025, 12:45 PM
24 points
1 comment · 5 min read · EA link
(boundedlyrational.substack.com)

Apply now to Human-aligned AI Summer School 2025

Pivocajs · Jun 6, 2025, 7:34 PM
5 points
0 comments · 1 min read · EA link

Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development

Jan_Kulveit · Jan 30, 2025, 5:07 PM
38 points
4 comments · EA link
(gradual-disempowerment.ai)

“Charity” as a conflationary alliance term

Jan_Kulveit · Dec 13, 2024, 7:53 AM
76 points
4 comments · 5 min read · EA link
(www.lesswrong.com)

Jan_Kulveit’s Quick takes

Jan_Kulveit · Dec 12, 2024, 10:22 PM
7 points
9 comments · EA link

Distancing EA from rationality is foolish

Jan_Kulveit · Jun 25, 2024, 9:02 PM
135 points
33 comments · 2 min read · EA link

Announcing Human-aligned AI Summer School

Jan_Kulveit · May 22, 2024, 8:55 AM
33 points
0 comments · EA link
(humanaligned.ai)

Box inversion revisited

Jan_Kulveit · Nov 7, 2023, 11:09 AM
13 points
1 comment · EA link