Other people on the LTFF, and in the rest of the funding ecosystem, seem to be more optimistic here. One thing I’ve found from discussing my cruxes for many hundreds of hours with others, though, is that people’s models of the policy space differ drastically, and most people are pessimistic about most types of policy work (though often optimistic about some specific type of policy work).
Surely something akin to this critique can also be leveled at e.g. alignment research.
Oh, sorry, I didn’t intend this at all as a critique. I intended this as a way to communicate that I don’t think I am that alone in thinking that most policy projects are pretty unlikely to be helpful.
Sorry, “critique” was a poor choice of words on my part. I just meant that “most LT plans will fail, and most LT plans that at least some people you respect like will, on an inside view, certainly fail” is just the default for trying to reason well on the frontier of LT stuff. But I’m worried that framing will sound like you meant it narrowly for policy. I’m also worried that your implied bar for funding policy is higher than what LTFF people (including yourself) actually use.
Hmm, yeah, I think we are both using subpar phrasing here. I think this is true for both policy and AI Alignment, but, for example, less true for biorisk, where my sense is that a lot more people agree that certain interventions would definitely help (with some disagreement on the magnitude of the help, but much less than for AI Alignment and policy).
I agree about biosecurity, sure, although I actually think we’re much less conceptually confused about biosecurity policy than we are about AI policy. For example, pushing for a reasonable subset of the Apollo report seems sensible to me.
(I work for the LTFF)
Yeah, I think being less conceptually confused is definitely part of it.