ML deepens the rift between online content and its distribution. Publishers have long sought a larger share of online advertising revenue, but platforms such as Google and Facebook have steadfastly refused, arguing that content creators earn through the reach the platforms provide. Generative AI adds another layer to the issue by attaching an explicit commercial value to the content it ingests. Humans do the same thing, but the process is less tangible. AI trained on select data, residing on a specific platform, to solve a specific problem needs rules on how the commercial benefits ought to be shared.
A data regulatory structure should set up guard rails for the orderly development of AI. Government-led AI safety protocols, as recommended by the likes of Microsoft’s Brad Smith, will have to incorporate the treatment of content ownership; it should not be left to the balance of power between content creators and AI developers. Blocked content and unacknowledged copyright infringement do not serve the interests of a technology with tremendous transformative potential. National frameworks must mesh at a global level to avoid content localisation, which can slow down ML. The use of content for AI is a case of market imperfection that lawmakers need to address.