In light of this (worries about contributing to AI capabilities and safetywashing) and/or general considerations around short timelines, have you considered funding work directly aimed at slowing down AI, as opposed to the traditional focus on AI Alignment work? E.g. advocacy work focused on getting a global moratorium on AGI development in place (examples). I think this is by far the highest-impact thing we could be funding as a community (as there just isn’t enough time for Alignment research to bear fruit otherwise), and would be very grateful if a fund or funding circle could be set up that is dedicated to this (this is what I’m personally focusing my donations on; I’d like to be joined by others).
Newbie fund manager here, but:
I strongly agree that governance work along these lines is very important; in fact, I’m currently working on governance full time instead of technical alignment research.
Needless to say, I would be interested in funding work that aims to buy time for alignment research. For example, I did indeed fund this kind of AI governance work in the Lightspeed Grants S-process. But since LTFF doesn’t currently do much, if any, active solicitation of grants, we’re ultimately bottlenecked by the applications we receive.
Fwiw, I’m pretty unsure of the sign of governance interventions like the above, both at the implementation level and the strategic level. I’d guess that I’m more concerned about overhangs than most LTFF members, whilst thinking that the slowdown plans that don’t create compute overhangs are pretty intractable.
I don’t think my views are common on the LTFF, though I’ve only discussed this substantially with one other member (Thomas Larsen).
One way of dealing with overhangs is a taboo accompanying the moratorium and regulation (we aren’t constantly having to shut down underground human cloning labs). This assumes that any sensible moratorium will last as long as is necessary, i.e. until there is a global consensus on the safety of running more powerful models (FLI’s 6-month suggestion was really just a “foot in the door”).
Thank you. This is encouraging. Hopefully there will be more applications soon.