These are some interesting thoughts.
I think OSINT is a good method for various types of enforcement, especially because the general public can aid in gathering evidence to send to regulators. This happens a lot in the animal welfare industry AFAIK, though someone with experience here should feel free to correct me. I know Animal Rising recently used OSINT to gather evidence of 280 legal breaches in the livestock industry, which they handed to DEFRA, which is pretty cool. It's especially notable given that these were RSPCA-endorsed farms, so it showed that the stakeholder vetting (pun unintended) was failing. This only happened 3 days ago, so the link may expire, but here is an update.
For AI this is often a bit less effective, but it's still useful. Models in nuclear, policing, natsec, defence, or similar sectors are likely to be protected in ways that make OSINT difficult, though I've used it before for AI Governance impact. The issue is that even if you find something, a DSMA-Notice or similar can be used to stop publication. You said "Information on AI development gathered through OSINT could be misused by actors with their own agenda", which is almost word for word the reason the data is often protected in the first place, haha. So you're 100% right that OSINT in these sectors of AI Governance can be super useful but may fall at later hurdles.
However, commercial AI is much more open to OSINT because there's no real lever to stop you publishing what you find. In my experience the supply chain is usually a fantastic source of OSINT, depending on how dedicated you are. That's been a major AI Governance theme in the instances I've been involved in, on both sides of this.
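To give a flavour of what I mean by supply-chain OSINT (this is purely a toy sketch with invented data and names, not any real tooling or dataset): a lot of it boils down to cross-referencing a lab's public disclosures, e.g. its named suppliers, against other public records like import/export filings, and seeing what the purchases imply.

```python
# Toy supply-chain OSINT sketch: cross-reference a company's publicly
# disclosed suppliers against (hypothetical) public shipment filings.
# All suppliers, items, and quantities here are invented for the example.

disclosed_suppliers = {"Acme GPUs Ltd", "FooCloud Hosting", "BarNet ISP"}

# Hypothetical records as might be scraped from public import filings.
shipment_records = [
    {"supplier": "Acme GPUs Ltd", "item": "H100-class accelerators", "qty": 512},
    {"supplier": "Acme GPUs Ltd", "item": "H100-class accelerators", "qty": 256},
    {"supplier": "Unrelated Co", "item": "office chairs", "qty": 40},
]

def infer_purchases(suppliers, records):
    """Total quantities per item for shipments tied to known suppliers."""
    totals = {}
    for rec in records:
        if rec["supplier"] in suppliers:
            totals[rec["item"]] = totals.get(rec["item"], 0) + rec["qty"]
    return totals

print(infer_purchases(disclosed_suppliers, shipment_records))
# {'H100-class accelerators': 768}
```

The point isn't the code, which is trivial, but that each individual record is public and innocuous; it's the join across sources that tells you something, which is exactly why there's no obvious lever to stop it being published.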