Newbie fund manager here, but:
I strongly agree that governance work along these lines is very important; in fact, I’m currently working on governance full time instead of technical alignment research.
Needless to say, I would be interested in funding work that aims to buy time for alignment research. For example, I did indeed fund this kind of AI governance work in the Lightspeed Grants S-process. But since LTFF doesn't currently do much, if any, active solicitation of grants, we're ultimately bottlenecked by the applications we receive.
Fwiw I'm pretty unsure of the sign on governance interventions like the above, both at the implementation level and the strategic level. I'd guess that I am more concerned about overhangs than most LTFF members, whilst thinking that the slow-down plans that don't create compute overhangs are pretty intractable.
I don’t think my views are common on the LTFF, though I’ve only discussed substantially with one other member (Thomas Larsen).
One way of dealing with overhangs is a taboo that accompanies the moratorium and regulation (we aren't constantly having to shut down underground human cloning labs). This assumes that any sensible moratorium will last as long as necessary, i.e. until there is a global consensus on the safety of running more powerful models (FLI's 6-month suggestion was really just a "foot in the door").
Thank you. This is encouraging. Hopefully there will be more applications soon.