Amplifying the AI Alignment research projects of MATS scholars at LISA.
Formerly a Director of Product Management in Capital Markets Data Analytics at Coalition Greenwich (S&P Global) and McLagan (Aon).
Interested in prediction markets and semiconductors. AMF monthly donor for 9 years.
I’m completely sold on the arguments in general EV terms (the vast suffering, tractability, importance, and neglectedness, even within EA), up to the limits of how confident I can be about anything this complex. The remaining doubt comes from fringe possibilities: weird second- and third-order impacts from the messiness of life that mean I couldn’t be more than 98% confident in something like this.
The deontological point was that maybe there is a good reason I should only care about humans, or vastly overweight them relative to animals, through some moral obligation. I don’t currently believe that, but I’m hedging for it because I could be convinced.
I realise now I’m basically saying I’m 90% sure that rolling a D20 and needing 3+ is a good bet, when it would be fair to also interpret that as 100% agreement that it’s a good idea ex ante.
(Also, my first comment was terrible; sorry, I just wanted to get on the board on priors before reading the debate.)