Similarly to Geoffrey, I like the way this question is set up, but I'm not quite sure I have understood it correctly.
However, as an initial response, I would say that the legal approach to AI is so much in its infancy that the response to risk has to be more holistic (see the EU AI Act, which uses 'risk tiers').
When we think about IP laws, they tend not to play quite the same role in reducing risk. Tight IP might have corollary effects on, e.g., how NLP systems can be trained, but I would need to think carefully about whether, if at all, intellectual property laws could have such an effect. Would love to hear your thoughts, however!