Hmm, yeah, I think we are both using subpar phrasing here. I think this is true for both policy and AI Alignment, but it's less true for, say, biorisk, where my sense is that many more people agree that certain interventions would definitely help (with some disagreement about the magnitude of the help, but much less disagreement than in AI Alignment and policy).
I agree about biosecurity, sure. Although I actually think we're much less conceptually confused about biosecurity policy than we are about AI policy. For example, pushing for a sensible subset of the Apollo report's recommendations seems reasonable to me.
Yeah, I think being less conceptually confused is definitely part of it.