
Former Google researcher Timnit Gebru calls for stringent AI regulation


Timnit Gebru, a former researcher on ethical artificial intelligence (AI) at Google, has called for stronger regulation of AI, saying that Big Tech companies will not “self-regulate” amid the ongoing gold rush.

Speaking to The Guardian, Gebru — who claims she was sacked by Google for calling out the inherent biases in the tech giant’s AI systems — said, “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and something better than just a profit motive.”

“In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not,” she told The Guardian.

Gebru, who co-led Google’s ethical AI team, wrote an academic paper warning about the kind of AI that is increasingly built into our lives. The paper argued that a clear danger of such systems lay in the huge data sets on which they were built.

It further said that such data sets “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Gebru claims she was asked to either withdraw the paper or take her name off it.

Calls to regulate AI have grown over the past year amid technological advancement at breakneck speed. The launch of ChatGPT, OpenAI’s conversational text-based chatbot, is widely seen as an inflection point for generative AI, triggering a wave that everyone today wants to ride.


During a Senate hearing earlier this month, Sam Altman, the chief executive of OpenAI, said that regulating AI was essential. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman told the hearing on the impact of AI.

“We have tried to be very clear about the magnitude of the risks here. My biggest fear is that we — the technology or industry — cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong and we want to work with the government to prevent that.”

In an interview with news agency AP, OpenAI’s chief technology officer Mira Murati expressed a similar view, calling for governments to be involved in the regulation of AI.

“These (AI) systems should be regulated. At OpenAI, we’re constantly talking with governments, regulators, and other organisations that are developing these systems, to — at least at the company level — agree on some level of standards. But I think a lot more needs to happen. Government regulators should certainly be involved.”
