I’m curious about your figure of NTI being 1-10% as effective as the LTFF. Is that because you think AI safety is roughly 10-100x more pressing (important, neglected, tractable, etc.) than nuclear security, because of marginal considerations around NTI vs. LTFF giving opportunities, or a fairly even mix of both?