I think this question needs some clarification.
Suppose we all agreed that we needed further capabilities for alignment.
Well, it wouldn’t be immediately obvious what the optimal timeline for developing those capabilities is without further discussion.
So I think you could make the argument you’d like addressed a little clearer by explaining why you think the optimal timeline is what it is.
Fair enough. A more precise question could be: would it be beneficial to slow progress relative to the current trend?
Or another question could be: would it be desirable or undesirable to give more compute and talent to top AI labs?
Hmm… your worry still isn’t completely clear from that.
Is your worry that any attempts to slow AI will reduce the lead time of top labs, giving them less time to align a human-level AI when they develop it?