While this is a good argument against it indicating governance-by-default (if people are saying that), securing longtermist funding to work with the free software community on this (thus overcoming two of the three hurdles) still seems like a potentially very cost-effective way to reduce AI risk that is worth looking into, particularly when combined with differential technological development of AI defensive vs. offensive capabilities.
That’s maybe a more productive way of looking at it! Makes me glad I estimated more than I claimed.
I think governments are probably the best candidates for funding this, or AI companies in cooperation with governments. And it's an intervention with limited downside that is easy to scale up or down, with the most important software being evaluated first.