Disclosure: I have past, present, and potential future affiliations with MIT FutureTech. These views are my own.
Thank you for this post. I think it would be helpful for readers if you explained the context a little more clearly; as it stands, the post is a little misleading.
These were not “AI Safety” grants; they were for “modeling the trends and impacts of AI and computing”, which is what Neil/the lab does. Obviously that is important for AI Safety/x-risk reduction, but it is not solely about AI Safety/x-risk reduction and sits somewhat upstream of it.
Importantly, the awarded grants were to be disbursed over several years to an academic institution, so much of the funded work may not have started or been published yet. Critiquing old or unrelated papers doesn’t accurately reflect the grants’ impact.
You claim to have read ‘most of their research’ but only cite two papers, neither funded by Open Philanthropy. This doesn’t accurately represent the lab’s work.
Your criticisms of the papers lack depth; e.g., you note that ‘This paper has many limitations (as acknowledged by the author)’ without explaining why this is problematic. Why are so many people citing that 2020 paper if it is not useful? Do you do research in this area, or are you assuming that you know what counts as useful/good research here? (Genuine question; I honestly don’t know.)
By asking readers to evaluate ‘$16.7M for this work’, you imply that the work you’ve presented was what was funded, which is not the case.
Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab’s work?
Now, to answer your question, I personally think the work being done by the lab deserves significant funding. Some reasons:
I think modeling the trends and impacts of AI and computing is very important, and it is valuable for OP to be able to fund very rigorous work that reduces their uncertainties in this area.
It is very valuable to have respected researchers and institutions producing rigorous and credible work; I think the impact of research scales superlinearly with the credibility and rigor of the researchers.
The lab is growing very rapidly and attracting a lot of funding and interest from many sources.
The work is widely cited in policy documents, including, for instance, the 2024 Economic Report of the President.
The work is widely covered in the media.
Neil seems to be well respected by those who know him. I joined the lab after I spoke to a range of people I respect about their experiences working with him. Everyone I spoke with was very positive about Neil and the importance of his work. My experiences at the lab have reinforced my perspective.
Many of the new and ongoing projects (which I cannot discuss) seem quite neglected and important (e.g., they respond to requests from funders and I don’t know of other research on them). I expect they will be very valuable once they are released.
The lab is interdisciplinary and has a very broad, balanced, and integrative approach to AI trends and impacts. Neil has a broad background and knowledge across many domains. This is reflected in how the lab functions: we hire and engage with people across many areas of the AI landscape, from people working on hardware and algorithms to those working directly on AI risk reduction and evaluation. For instance, see the wide range of attendees at our AI Scaling Workshop (and the agenda). This seems rare and valuable (especially in a place like MIT CSAIL).
Thanks a lot for giving more context. I really appreciate it.
These were not “AI Safety” grants
These grants come from Open Philanthropy’s focus area “Potential Risks from Advanced AI”. I think it’s fair to say they are “AI Safety” grants.
Importantly, the awarded grants were to be disbursed over several years to an academic institution, so much of the funded work may not have started or been published yet. Critiquing old or unrelated papers doesn’t accurately reflect the grants’ impact.
Fair point. I agree that old papers might not accurately reflect the grants’ impact, but I’d expect them to be correlated with it.
Your criticisms of the papers lack depth … Do you do research in this area, …
I totally agree. That’s why I shared this post as a question. I’m not an expert in the area and I wanted an expert to give me context.
Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab’s work?
I added an update linking to your answer.
Overall, I’m concerned about Open Philanthropy’s grantmaking. I have nothing against Thompson or his lab’s work.
Peter—This is a valuable comment; thanks for adding a lot more detail about this lab.