FAI Research Constraints and AGI Side Effects

Justin Shovelain and I have written a paper on modeling FAI research constraints. We have not yet tried applying any numbers or uncertainties to these models, but we have established a few simple equations to which numbers could easily be applied.

The paper is available on LessWrong and on Penflip; the Penflip version is better formatted but does not have comments.

Summary

Friendly artificial intelligence (FAI) researchers face at least two significant challenges. First, they must produce a significant amount of FAI research in a short amount of time. Second, they must do so without producing enough artificial general intelligence (AGI) research to result in the creation of an unfriendly artificial intelligence (UFAI). We estimate the requirements of both of these challenges using two simple models.

Our first model describes a friendliness ratio and a leakage ratio for FAI research projects. These provide limits on the allowable amount of AGI knowledge produced per unit of FAI knowledge in order for a project to be net beneficial.
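
To make the leakage-ratio idea concrete, here is a minimal sketch in Python. This is an illustration of the concept only, not the equations from the paper; the function names, the simple division, and the threshold comparison are all assumptions for exposition.

```python
# Illustrative sketch of the leakage-ratio concept (not the paper's equations).

def leakage_ratio(agi_knowledge: float, fai_knowledge: float) -> float:
    """AGI knowledge produced per unit of FAI knowledge produced."""
    return agi_knowledge / fai_knowledge


def is_net_beneficial(agi_knowledge: float,
                      fai_knowledge: float,
                      max_allowable_ratio: float) -> bool:
    """A project is (roughly) net beneficial if it leaks less AGI knowledge
    per unit of FAI knowledge than some allowable limit."""
    return leakage_ratio(agi_knowledge, fai_knowledge) < max_allowable_ratio


# Example: 10 units of FAI knowledge produced, 1 unit of AGI knowledge leaked,
# against a hypothetical allowable limit of 0.2.
print(is_net_beneficial(agi_knowledge=1.0, fai_knowledge=10.0,
                        max_allowable_ratio=0.2))  # True
```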

Our second model studies a hypothetical FAI venture, which is responsible for ensuring FAI creation. We estimate the necessary total FAI research per year from the venture and the leakage ratio of that research. This model demonstrates a trade-off between the speed of FAI research and the proportion of AGI research that can be revealed as part of it. If FAI research takes too long, then the acceptable leakage ratio may become so low that it would become nearly impossible to safely produce any new research.
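
The trade-off can be sketched numerically. The toy model below is not the paper's model; the functional form (leaked AGI research linearly hastening AGI arrival) and all parameter values are assumptions chosen only to show the qualitative effect: slower FAI research leaves less slack, so less leakage is tolerable.

```python
# Toy sketch of the speed-vs-leakage trade-off (illustrative assumptions only).

def max_acceptable_leakage(required_fai_research: float,
                           fai_research_per_year: float,
                           years_until_agi: float,
                           agi_speedup_per_unit_leaked: float) -> float:
    """Rough upper bound on the leakage ratio: AGI research leaked over the
    whole project must not shorten the time until AGI below the time the
    FAI project needs to finish."""
    years_needed = required_fai_research / fai_research_per_year
    slack_years = years_until_agi - years_needed
    if slack_years <= 0:
        return 0.0  # no leakage is acceptable if FAI research is too slow
    # Total leaked AGI units allowed, divided by total FAI units produced.
    allowed_leaked_units = slack_years / agi_speedup_per_unit_leaked
    return allowed_leaked_units / required_fai_research


# Faster FAI research leaves more slack before AGI, so a higher leakage
# ratio is tolerable; slower research drives the acceptable ratio toward 0.
for rate in (5.0, 10.0, 20.0):
    print(rate, max_acceptable_leakage(required_fai_research=100.0,
                                       fai_research_per_year=rate,
                                       years_until_agi=25.0,
                                       agi_speedup_per_unit_leaked=0.5))
```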

Additional Information (Not in Paper)

This paper had many equations that required LaTeX. Doing all of this in the LessWrong editor proved very time-intensive and frustrating. Fortunately, we found an assistant on TimeEtc who was able to accomplish it in around 2 hours. This cost around $45 in labor, which was a bit much, but it was very easy to schedule (results the next day).
