
MichaelDickens

Karma: 7,906

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).

I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff there.

I used to work as a software developer at Affirm.

Can AI make advancements in moral philosophy by writing proofs?

MichaelDickens · 14 Apr 2026 0:09 UTC
32 points
1 comment · 4 min read · EA link

Pausing AI Is the Best Answer to Post-Alignment Problems

MichaelDickens · 11 Apr 2026 15:46 UTC
15 points
0 comments · 2 min read · EA link

By Strong Default, ASI Will End Liberal Democracy

MichaelDickens · 6 Apr 2026 23:43 UTC
42 points
1 comment · 3 min read · EA link

If the future goes well for animals, will it go well for AGI?

MichaelDickens · 1 Apr 2026 12:34 UTC
14 points
1 comment · 1 min read · EA link

The Future Will Be Weirder Than That

MichaelDickens · 29 Mar 2026 16:55 UTC
56 points
9 comments · 7 min read · EA link

Which is better for sentient beings: an “ethical” AI or a corrigible AI?

MichaelDickens · 28 Mar 2026 19:39 UTC
19 points
0 comments · 3 min read · EA link

The resource-constraints argument for why aligned ASI wouldn’t be bad for animals

MichaelDickens · 27 Mar 2026 13:37 UTC
12 points
0 comments · 2 min read · EA link

List of ideas for improving animal welfare in light of transformative AI

MichaelDickens · 26 Mar 2026 17:51 UTC
24 points
0 comments · 8 min read · EA link

I used to think aligned ASI would be good for all sentient beings; now I don’t know what to think

MichaelDickens · 25 Mar 2026 22:11 UTC
55 points
6 comments · 4 min read · EA link

Cost-effectiveness model for AI alignment-to-animals vs. alignment-in-general

MichaelDickens · 24 Mar 2026 19:27 UTC
16 points
8 comments · 7 min read · EA link

Which types of AI alignment research are most likely to be good for all sentient beings?

MichaelDickens · 23 Mar 2026 13:38 UTC
32 points
1 comment · 6 min read · EA link

Worlds where we solve AI alignment on purpose don’t look like the world we live in

MichaelDickens · 20 Mar 2026 14:46 UTC
74 points
9 comments · 5 min read · EA link

Return Stacking Funds: A New Way to Get Leverage

MichaelDickens · 18 Jan 2026 18:21 UTC
45 points
9 comments · 7 min read · EA link

If AI alignment is only as hard as building the steam engine, then we likely still die

MichaelDickens · 10 Jan 2026 23:10 UTC
35 points
8 comments · 4 min read · EA link

Wartime ethics is weird

MichaelDickens · 20 Dec 2025 16:21 UTC
15 points
11 comments · 2 min read · EA link

Alignment Bootstrapping Is Dangerous

MichaelDickens · 27 Nov 2025 18:18 UTC
14 points
0 comments · 2 min read · EA link

Where I Am Donating in 2025

MichaelDickens · 22 Nov 2025 23:21 UTC
90 points
9 comments · 14 min read · EA link

We won’t solve post-alignment problems by doing research

MichaelDickens · 21 Nov 2025 18:03 UTC
72 points
5 comments · 4 min read · EA link

Epistemic Spot Check: Expected Value of Donating to Alex Bores’s Congressional Campaign

MichaelDickens · 13 Nov 2025 19:09 UTC
67 points
3 comments · 6 min read · EA link

Writing Your Representatives: A Cost-Effective and Neglected Intervention

MichaelDickens · 9 Nov 2025 1:33 UTC
14 points
1 comment · 9 min read · EA link