> One thing that’s unclear to me is whether attempts to use AI systems to augment human capabilities in these domains are in scope, or whether the round is focused on direct enhancement of these capabilities.
> I’m also curious about your opinion on whether biological-enhancement-based approaches are likely to bear fruit in time to matter. Do you think it’s plausible that timelines might be long on our current path, or are you more hoping that there’s a pause that provides humanity with more time?
> (Alternatively, is it more that you think that we need enhanced capabilities to succeed at alignment even if current timeline projections make this appear challenging?)
> One thing that’s unclear to me is whether attempts to use AI systems to augment human capabilities in these domains are in scope, or whether the round is focused on direct enhancement of these capabilities.
The round is SFF’s, so I can’t speak to the round in general.
Personally, I’m open in principle to this, but it would have a high burden of proof.
> Do you think it’s plausible that timelines might be long on our current path or are you more hoping that there’s a pause that provides humanity with more time?
Both. A pause is important. With or without a pause, I don’t think that confident short timelines make sense. See https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce and https://www.lesswrong.com/posts/5tqFT3bcTekvico4d/do-confident-short-timelines-make-sense
Something faster than reprogenetics would be nice; I just don’t see a way that seems likely to work.
I think alignment is probably extremely difficult, and we would have a relatively better chance with more brainpower, though maybe not a high chance. For why I think it helps X-risk, see https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html (though see also https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts).