I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/ Much of the content there gets cross-posted to the EA Forum, but I also write about some non-EA stuff on the site.
I used to work as a software developer at Affirm.
Right now I would give very little marginal philanthropic money to compute-based experiments. AI companies already run a lot of those, and I don't expect them to work anyway: ML experiments don't address the fundamental barriers to solving AI misalignment. A core problem is that experiments can't deal with the sharp left turn.
(I would make an exception for CaML-style alignment-to-animals work, but that's not AI safety as it's normally construed.)