Focusing more on data governance:
GovAI now has a full-time researcher working on compute governance. Chinchilla’s Wild Implications suggests that access to data might also be a crucial leverage point for AI development. However, from what I can tell, there are no EAs working full time on how data protection regulations might help slow or direct AI progress. This seems like a pretty big gap in the field.
What’s going on here? I can see two possible answers:
1. Folks have suggested that compute is relatively easy to govern (eg). Someone might have looked into this and decided that data is just too hard to control, and that we're better off putting our time into compute.
2. Someone might already be working on this whom I just haven't heard of.
If anyone has an answer to this I’d love to know!
NB: One reason this might be tractable is that lots of non-EA folks are working on data protection already, and we could leverage their expertise.