My spouse isn’t currently planning to divest the full amount of her equity. Some factors here: (a) It’s her decision, not mine. (b) The equity carries important voting rights, such that divesting or donating it in full could have governance implications. (c) It doesn’t seem like this would have a significant marginal effect on my real or perceived conflict of interest: I still couldn’t claim impartiality while married to the President of a company, equity or no. With these points in mind, full divestment or donation could happen in the future, but there’s no immediate plan for it.
The bottom line is that I have a significant conflict of interest that isn’t going away, and I am trying to help reduce AI risk despite that. My new role will not give me authority over grants or other significant resources beyond my own time and my ability to do analysis and make arguments. People encountering my analysis and arguments will have to decide for themselves how to weigh my conflict of interest, while considering those arguments on the merits.
For whatever it’s worth, I have publicly said that the world would pause AI development if it were up to me, and I make persistent efforts to ensure that people I’m interacting with know this. I also believe the things I advocate for would almost universally have a negative expected effect (if any effect) on the value of the equity I’m exposed to. But I don’t expect everyone to agree with this or to be reassured by it.
> Besides RSPs, can you give any additional examples of approaches that you’re excited about from the perspective of building a bigger tent & appealing beyond AI risk communities? This balancing act of “find ideas that resonate with broader audiences” and “find ideas that actually reduce risk and don’t merely serve as applause lights or safety washing” seems quite important. I’d be interested in hearing if you have any concrete ideas that you think strike a good balance of this, as well as any high-level advice for how to navigate this.
I’m pretty focused on red lines, and I don’t think I have big insights on other ways to build a bigger tent, but one thing I’ve been enthused about for a while is putting more effort into investigating potentially concerning AI incidents in the wild. Based on case studies, I believe that exposing concerning incidents, and helping the public understand them, could easily be among the most effective ways to galvanize interest in safety standards, including regulation. I’m not sure how many concerning incidents there are to be found in the wild today, but I suspect there are some, and I expect there to be more over time as AI capabilities advance.
> Additionally, how are you feeling about voluntary commitments from labs (RSPs included) relative to alternatives like mandatory regulation by governments (you can’t do X or you can’t do X unless Y), preparedness from governments (you can keep doing X but if we see Y then we’re going to do Z), or other governance mechanisms?
The work as I describe it above is not specifically focused on companies. My focus is on hammering out (a) which AI capabilities might increase the risk of a global catastrophe; (b) how we can try to catch early warning signs of these capabilities (and what challenges this involves); and (c) what protective measures (for example, strong information security and alignment guarantees) are important for safely handling such capabilities. I hope that by doing analysis on these topics, I can create useful resources for companies, governments, and other parties.
I suspect that companies are likely to move faster and more iteratively on things like this than governments at this stage, so I often pay special attention to them. But I’ve made clear that I don’t think voluntary commitments alone are sufficient, and that I think regulation will be necessary to contain AI risks. (Quote from an earlier piece: “And to be explicit: I think regulation will be necessary to contain AI risks (RSPs alone are not enough), and should almost certainly end up stricter than what companies impose on themselves.”)