I think it’s worth noting that the two papers linked (which I agree are flawed and not that useful from an x-risk viewpoint) don’t acknowledge OpenPhil funding, so the OpenPhil money may be going towards other projects within the lab.
I think that Neil Thompson has some work which is pretty awesome from an x-risk perspective (often in collaboration with people from Epoch):
Algorithmic progress in language models
Economic impacts of AI-augmented R&D
The growing influence of industry in AI research
From skimming his Google Scholar, a bunch of other stuff seems broadly useful as well.
In general, research forecasting AI progress and economic impacts seems great, and even better if it’s from someone academically legible like Neil Thompson.
This paper is from Epoch. Thompson is a “Guest author”.
I think this paper and this article are interesting but I’d like to know why you think they are “pretty awesome from an x-risk perspective”.
Epoch AI has received much less funding from Open Philanthropy ($9.1M), yet they are producing world-class work that is widely read, used, and shared.
This seems misleading. Some of the authors are from Epoch, but there are authors from two other universities on the paper.
Also, where does it say that he is a guest author? Neil is a research advisor for Epoch and my understanding is that he provides valuable input on a lot of their work.
Sorry, I should have attached this in my previous message.
Here.
Thanks. My impression is that they are using ‘Guest author’ on their blog post to differentiate who works for Epoch or is external. As far as I can tell, that usage implies nothing about the contribution of the authors to the paper.
Yeah, this seems like the most straightforward interpretation.
Setting aside whether Neil’s work is useful, presumably almost all of the grant is for his lab. I failed to find info on his lab.
I would guess that the grants made to Neil’s lab refer to the MIT FutureTech group, which he directs. FutureTech says on its website that it has received grants from OpenPhil, yet the OpenPhil website doesn’t seem to list a grant to FutureTech anywhere, so I assume the grant OpenPhil made to Neil’s lab is the FutureTech grant.
It’s MIT FutureTech: https://futuretech.mit.edu/
Thanks.
I notice they have few publications.
I haven’t read the papers, but I am surprised that you don’t think they are useful from an x-risk perspective. The second paper, “A Model for Estimating the Economic Costs of Computer Vision Systems that use Deep Learning”, seems highly relevant to forecasting AI progress, which IMO is one of the most useful AIS interventions.
The OP’s claim
This paper has many limitations (as acknowledged by the author), and from an x-risks point of view, it seems irrelevant.
seems overstated, and I’d guess that many people working on AI safety would disagree with it.
Hi calebp.
If you have time to read the papers, let me know if you think they are actually useful.