I think the more important question, which Richard brought up, is whether having X times more cash after a suboptimal or dangerous AI takeoff begins is better than simply donating the money now to try to avert bad outcomes.
Agree this is important. As I’ve thought about it more, it looks quite complicated. It also seems important to hold a view grounded in more than rough intuition, since it bears on the donation behavior of a large share of the EA community.
I’d probably benefit from having a formal model here, so I might make one.
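For concreteness, here is one way such a model could start: a minimal sketch comparing the expected impact of donating now against investing and donating after a takeoff begins. Every function name, parameter, and number below is a hypothetical placeholder I'm introducing for illustration, not a claim about actual cost-effectiveness figures.

```python
# Minimal sketch of the donate-now vs. invest-and-donate-later tradeoff.
# All parameter names and values are hypothetical placeholders.

def donate_now_value(dollars: float, pre_takeoff_effectiveness: float) -> float:
    """Expected impact of donating now to reduce the chance of a bad takeoff."""
    return dollars * pre_takeoff_effectiveness

def donate_later_value(dollars: float,
                       growth_multiple: float,
                       p_money_still_useful: float,
                       post_takeoff_effectiveness: float) -> float:
    """Expected impact of investing, then donating after a takeoff begins.

    growth_multiple: the "X times more cash" factor from the comment above.
    p_money_still_useful: chance money can still buy influence once a
        suboptimal/dangerous takeoff is underway.
    """
    return (dollars * growth_multiple * p_money_still_useful
            * post_takeoff_effectiveness)

if __name__ == "__main__":
    # Illustrative numbers only: $10k, 5x growth, a 50% chance money still
    # matters post-takeoff, and later donations 2x as targeted per dollar.
    now = donate_now_value(10_000, pre_takeoff_effectiveness=1.0)
    later = donate_later_value(10_000, growth_multiple=5.0,
                               p_money_still_useful=0.5,
                               post_takeoff_effectiveness=2.0)
    print(f"donate now:   {now:,.0f} impact units")
    print(f"donate later: {later:,.0f} impact units")
```

Even this toy version makes the crux visible: the invest-and-wait strategy only wins if the growth multiple times the probability that money remains useful exceeds the relative effectiveness of pre-takeoff spending.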