Thanks a lot for your comment. What you describe is a different route to impact than what I had in mind, but I suppose I could see myself doing this, even though it sounds less exciting than making a difference by contributing directly to making AI safer.
Note that I think the mechanisms I describe aren’t specific to economics, but apply to academic research generally—and they will also account for most of how most AI safety researchers (even those not in academia) will have impact.
There are potentially major crux moments around AI, so there’s also the potential to do an excellent job engineering real transformative systems to be safe at some point (though most AI safety researchers won’t be doing that directly). I guess the indirect routes to impact for AI safety might feel more exciting because they’re more closely connected to those crucial moments—e.g. you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems are using, or hope to support a culture of responsibility among AI researchers, making it less likely that people at the key time ignore something they shouldn’t have.