I argue that Bernard must be implicitly claiming that cause B is unusually tractable, that there is a strong departure from logarithmic returns, or that there is no feasible plan of attack for cause A.
Related: Rob Bensinger says MIRI’s current take on AI risk is “we don’t see a promising angle of attack on the core problem”.