titotal

Karma: 7,595

I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP’s animal welfare cost effectiveness calculations

titotal · 14 Oct 2024 11:34 UTC
149 points · 28 comments · 12 min read · EA link

Is “superhuman” AI forecasting BS? Some experiments on the “539” bot from the Centre for AI Safety

titotal · 18 Sep 2024 13:07 UTC
67 points · 4 comments · 14 min read · EA link
(open.substack.com)

Most smart and skilled people are outside of the EA/rationalist community: an analysis

titotal · 12 Jul 2024 12:13 UTC
213 points · 25 comments · 14 min read · EA link
(open.substack.com)

In defense of standards: A fecal thought experiment

titotal · 24 Jun 2024 12:18 UTC
6 points · 10 comments · 8 min read · EA link

Motivation gaps: Why so much EA criticism is hostile and lazy

titotal · 22 Apr 2024 11:49 UTC
211 points · 44 comments · 19 min read · EA link
(titotal.substack.com)

[Draft] The humble cosmologist’s P(doom) paradox

titotal · 16 Mar 2024 11:13 UTC
38 points · 6 comments · 10 min read · EA link

The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”

titotal · 14 Jan 2024 15:03 UTC
54 points · 2 comments · 21 min read · EA link
(titotal.substack.com)

titotal’s Quick takes

titotal · 9 Dec 2023 0:25 UTC
8 points · 46 comments · 1 min read · EA link

Why Yudkowsky is wrong about “covalently bonded equivalents of biology”

titotal · 6 Dec 2023 14:09 UTC
29 points · 20 comments · 16 min read · EA link
(open.substack.com)

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · 29 Sep 2023 14:01 UTC
102 points · 33 comments · 20 min read · EA link
(titotal.substack.com)

The bullseye framework: My case against AI doom

titotal · 30 May 2023 11:52 UTC
70 points · 15 comments · 17 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · 26 May 2023 15:57 UTC
59 points · 0 comments · 18 min read · EA link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal · 17 May 2023 11:58 UTC
43 points · 3 comments · 15 min read · EA link

How “AGI” could end up being many different specialized AIs stitched together

titotal · 8 May 2023 12:32 UTC
31 points · 2 comments · 9 min read · EA link

Nuclear brinksmanship is not a good AI x-risk strategy

titotal · 30 Mar 2023 22:07 UTC
19 points · 8 comments · 5 min read · EA link

How my community successfully reduced sexual misconduct

titotal · 11 Mar 2023 13:50 UTC
209 points · 32 comments · 5 min read · EA link

Does EA understand how to apologize for things?

titotal · 15 Jan 2023 19:14 UTC
159 points · 48 comments · 3 min read · EA link

Cryptocurrency is not all bad. We should stay away from it anyway.

titotal · 11 Dec 2022 13:59 UTC
96 points · 41 comments · 10 min read · EA link

AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death

titotal · 22 Sep 2022 15:00 UTC
49 points · 11 comments · 15 min read · EA link

Chaining the evil genie: why “outer” AI safety is probably easy

titotal · 30 Aug 2022 13:55 UTC
40 points · 12 comments · 10 min read · EA link