[Question] Thoughts on this $16.7M “AI safety” grant?

UPDATE: read Peter Slattery’s answer.

Open Philanthropy has recommended a total of $16.7M to MIT to support research led by Neil Thompson on modeling the trends and impacts of AI and computing.

2020 - MIT — AI Trends and Impacts Research - $550,688
2022 - MIT — AI Trends and Impacts Research - $13,277,348
2023 - MIT — AI Trends and Impacts Research - $2,911,324

I’ve read most of this group’s research, and I don’t understand why Open Philanthropy thinks it is a good use of their money.

Thompson’s Google Scholar profile is here.

Thompson’s most cited paper is “The Computational Limits of Deep Learning” (2020).

@gwern pointed out some flaws in it on Reddit.

Thompson’s latest paper is “A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning” (2024).

This paper has many limitations (as acknowledged by the author), and from an x-risk point of view it seems irrelevant.

What do you think about Open Philanthropy recommending a total of $16.7M for this work?
