Habryka, I appreciate you sharing your outputs. Do you have a few minutes yet to follow up with a little explanation of your models? It’s ok if it’s a rough/incomplete explanation. But it would help to know a bit more about what you’ve seen with government-funded research, etc., that makes you think this would be net-negative for the world.
Alas, sorry, I do think it would take me a while to write things up in any comprehensive way, and sadly I’ve been sick the last few days and ended up falling behind on a number of other commitments.
Here is a very, very rough outline:
There really is already a lot of money in the space. Indeed, enough money that even sizable contributions from the NSF are unlikely to increase total funding by any substantial proportion.
I’ve talked to multiple people at various EA and AI Alignment organizations who accepted NSF and other government funding over the years, and I think they regretted it in every instance and found the experience to strongly distort the quality and alignment of their research.
I think there are indeed multiple fields that ended up derailed by actors like the NSF entering them and then strongly distorting the field’s incentives. Nanotechnology, for example, I think was derailed in this kind of way, and a number of other subfields I’ve studied seemed similar.
I also expect that the NSF getting involved will attract a number of adversarial actors who would be quite distracting and potentially disruptive.
I also have some more complicated feelings about high-prestige research on the potential negative consequences of AGI being net-negative itself, by increasing the probability of arms races towards AGI. E.g. it’s pretty plausible to me that publishing Superintelligence was quite bad for the world. I don’t have super settled thoughts here, and am still quite confused, but I think it’s an important dimension to think about, and it adds some downside risk to this NSF situation, with relatively limited upside.