My first guess is that there’s significant value in someone maintaining an open, exhaustive database of AIS research.
Yeah, I agree. But there’s also significant value in doing more AIS research, and I suspect that on the current margin for a full-time researcher (such as myself) it’s better to do more AIS research than to write summaries of everything.
Note that I do intend to keep adding all of the links to the database; it’s the summaries that won’t keep up.
It is plausible to me that an org with a safety team (e.g. DeepMind/OpenAI) is already doing this in-house, or planning to do so.
I’m 95% confident that no one is already doing this, and if they were seriously planning to do so, I’d expect them to check in with me first. (I do know multiple people at all of these orgs.)
More broadly, these labs might have good systems in place for maintaining databases of new research in areas with a much higher volume than AIS, so they could potentially share some best practices.
You know, that would make sense as a thing to exist, but I suspect it does not. Regardless, that’s a good idea; I should make sure to check.