titotal

Karma: 9,207

I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Six experiments with a simple optimizer’s curse model

titotal · 5 Mar 2026 14:20 UTC
32 points
1 comment · 13 min read · EA link
(open.substack.com)

The best cause will disappoint you: An intro to the optimiser’s curse

titotal · 11 Feb 2026 15:49 UTC
164 points
19 comments · 14 min read · EA link
(open.substack.com)

Author, assistant, and persona: the metaphors I use for LLM chatbots

titotal · 4 Feb 2026 14:10 UTC
10 points
0 comments · 13 min read · EA link
(titotal.substack.com)

Explaining a subtle but important error in the LEAP survey of AI experts

titotal · 4 Dec 2025 15:45 UTC
40 points
4 comments · 10 min read · EA link

How AI may become deceitful, sycophantic… and lazy

titotal · 7 Oct 2025 14:15 UTC
30 points
4 comments · 22 min read · EA link
(titotal.substack.com)

A deep critique of AI 2027’s bad timeline models

titotal · 19 Jun 2025 13:35 UTC
286 points
32 comments · 40 min read · EA link
(titotal.substack.com)

A widely shared AI productivity paper was retracted and is possibly fraudulent

titotal · 19 May 2025 10:18 UTC
34 points
4 comments · 3 min read · EA link

Debate: should EA avoid using AI art outside of research?

titotal · 30 Apr 2025 11:10 UTC
34 points
29 comments · 3 min read · EA link

Slopworld 2035: The dangers of mediocre AI

titotal · 14 Apr 2025 13:14 UTC
90 points
1 comment · 29 min read · EA link
(titotal.substack.com)

AI is not taking over material science (for now): an analysis and conference report

titotal · 11 Mar 2025 12:01 UTC
59 points
16 comments · 25 min read · EA link
(open.substack.com)

Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP’s animal welfare cost effectiveness calculations

titotal · 14 Oct 2024 11:34 UTC
158 points
28 comments · 12 min read · EA link

Is “superhuman” AI forecasting BS? Some experiments on the “539” bot from the Centre for AI Safety

titotal · 18 Sep 2024 13:07 UTC
69 points
4 comments · 14 min read · EA link
(open.substack.com)

Most smart and skilled people are outside of the EA/rationalist community: an analysis

titotal · 12 Jul 2024 12:13 UTC
219 points
26 comments · 14 min read · EA link
(open.substack.com)

In defense of standards: A fecal thought experiment

titotal · 24 Jun 2024 12:18 UTC
11 points
10 comments · 8 min read · EA link

Motivation gaps: Why so much EA criticism is hostile and lazy

titotal · 22 Apr 2024 11:49 UTC
214 points
44 comments · 19 min read · EA link
(titotal.substack.com)

[Draft] The humble cosmologist’s P(doom) paradox

titotal · 16 Mar 2024 11:13 UTC
39 points
6 comments · 10 min read · EA link

The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”

titotal · 14 Jan 2024 15:03 UTC
56 points
2 comments · 21 min read · EA link
(titotal.substack.com)

titotal’s Quick takes

titotal · 9 Dec 2023 0:25 UTC
8 points
53 comments · EA link

Why Yudkowsky is wrong about “covalently bonded equivalents of biology”

titotal · 6 Dec 2023 14:09 UTC
29 points
20 comments · 16 min read · EA link
(open.substack.com)

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · 29 Sep 2023 14:01 UTC
102 points
33 comments · 20 min read · EA link
(titotal.substack.com)