Hi Lizka, thanks for the article!
I’m curious whether you have an opinion on the updated OMB memos on AI adoption by the US government (M-25-21 and M-25-22). Do you think they’re a step in the right direction?
My initial thoughts are:
The “Chief AI Officers” that each agency is now required to designate could be quite impactful: they may have a lot of say over whether suggestions like yours get implemented. It seems important to have people with good judgement in these positions, particularly at powerful agencies.
There seems to be an emphasis on reducing bureaucratic barriers to faster AI adoption (though I’m inclined to think this is mostly a political talking point).
But I’ve thought about this far less than you have, so I’m interested in what you think.
Thanks James, interesting post!
A minor question: where you say the following,
do you think human researchers’ access to compute and other productivity enhancements would significantly affect their research capacity? It’s not obvious to me how bottlenecked human researchers are by these factors, whereas they seem much more critical for “AI researchers”.
More generally, are there things you would like to see the EA community do differently if it placed more weight on longer AI timelines? It seems to me that even if we think short timelines are only somewhat likely, we should probably still put substantial resources towards things that can have an impact in the near term.