Funding AI policy proposals to slow down high-risk AI capability research.
AI alignment, AI policy
We want AI alignment research to catch up with and surpass AI capability research. Among other things, AI capability research requires a friendly political environment. We would be interested in funding AI policy proposals that increase the chance of effective regulations slowing down highly risky AI capability R&D. For example, regulations could require large language models to pass a thorough safety audit before deployment, or before being scaled above determined parameter-count safety thresholds. Another example would be funding AI policy projects that increase the chance of banning research aimed at building generally capable AI before the AI alignment problem is solved. Such regulations would probably need to be implemented at both national and international scales to be effective.
One worry is that red tape increases the chance that someone who doesn’t care about regulation can front-run the first team to AGI.
Yes. To reduce that risk we could aim for an international agreement banning high-risk AI capability research, though that might not be fully satisfying. I have the impression that very few people (if any) are working on that flavor of regulation, and it could be useful to explore it more. Ideally, if we could simply coordinate not to work directly on producing generally capable AI until we figure out safety, that would be an important win.