But the response to whether Julius or Augustus was the better Caesar will depend on what the machine has been ‘reading’. Technology companies argue this should not matter, considering the wealth of interpretation already available.
However, it does become important when humans have not had time to form their own opinions, as in the slipping and sliding terrain of current affairs, or when some interpretations are denied to the AI altogether.
These denials can emerge in economic, political and cultural contexts. Intellectual property is a big hurdle for LLMs: most liberal news sites in the US are prickly about having their content scraped, to the advantage of right-wing media outlets happy to 'share' their views for AI amplification.
Access can also be restricted through government intervention, as China does with public use of AI training models. Delayed deployment of generative AI in specific languages also reinforces the majoritarian bias: Persian and Hebrew bots could diverge over the causes of the Red Sea crisis.
From a technology creator’s perspective, though, the bigger worry is the human intervention needed at AI’s current stage of evolution to ensure its output is consistent and does not hallucinate. Bots are not yet smart enough to tell truth from falsehood. They also need to be guided in judging the merits of an argument.

To be functional, AI will have to become far more responsible than it is now. Lawmakers can aid the process by seeking transparency in product and process development. But they are unlikely to be able to ensure global access to training models without harmonised rules. The world is going through an AI race, with differing approaches to how individual teams are pushed along. The winner will have to jump over man-made hurdles.