Overall, I don’t have settled views on whether it’d be good for me to prioritize advocating for any particular policy.[5] At the same time, if it turns out that there is (or will be) much more agreement with my current views than there currently seems to be, I wouldn’t want to be even a small obstacle to big things happening, and there’s a risk that my lack of active advocacy could be mistaken for opposition to outcomes I actually support.
You have a huge amount of clout in determining where $100Ms of OpenPhil money is directed toward AI x-safety. I think you should be much more vocal on this, at least indirectly through OpenPhil grantmaking. In fact, I’ve been surprised at how quiet you (and OpenPhil) have been since GPT-4 was released!