As AI rises, lawmakers try to catch up

From “intelligent” vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence (AI) has burrowed its way into every arena of modern life.

Its promoters reckon it is revolutionising human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions.

Regulators in Europe and North America are worried.

The AI Act

The European Union is likely to pass legislation next year – the AI Act – aimed at reining in the age of the algorithm.

The United States recently published a blueprint for an AI Bill of Rights and Canada is also mulling legislation.

Looming large in the debates has been China’s use of biometric data, facial recognition and other technology to build a powerful system of control.

Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating “totalitarian infrastructures”.

“I see that as a huge threat, no matter the benefits,” she told AFP.

But before regulators can act, they face the daunting task of defining what AI actually is.

‘Mug’s game’

Suresh Venkatasubramanian of Brown University, who co-authored the AI Bill of Rights, said trying to define AI was “a mug’s game”.

Any technology that affects people’s rights should be within the scope of the bill, he tweeted.

The 27-nation EU is taking the more tortuous route of attempting to define the sprawling field.

Its draft law lists the kinds of approaches defined as AI, and it includes pretty much any computer system that involves automation.

The problem stems from the changing use of the term AI.

Robots thinking like humans

For decades, it described attempts to create machines that simulated human thinking.

But funding largely dried up for this research – known as symbolic AI – in the early 2000s.

The rise of the Silicon Valley titans saw AI reborn as a catch-all label for their number-crunching programs and the algorithms they generated.

This automation allowed them to target users with advertising and content, helping them to make hundreds of billions of dollars.

“AI was a way for them to make more use of this surveillance data and to mystify what was happening,” Meredith Whittaker, a former Google worker who co-founded New York University’s AI Now Institute, told AFP.

So the EU and US have both concluded that any definition of artificial intelligence needs to be as broad as possible.

‘Too challenging’

But from that point, the two Western powerhouses have largely gone their separate ways.

The EU’s draft AI Act runs to more than 100 pages.

Among its most eye-catching proposals is the complete prohibition of certain “high-risk” technologies, such as the biometric surveillance tools used in China.

It also drastically limits the use of AI tools by migration officials, police and judges.

Hasselbalch said some technologies were “simply too challenging to fundamental rights”.

The AI Bill of Rights, on the other hand, is a brief set of principles framed in aspirational language, with exhortations like “you should be protected from unsafe or ineffective systems”.

The bill was issued by the White House and relies on existing law.

Experts reckon no dedicated AI legislation is likely in the United States until 2024 at the earliest because Congress is deadlocked.

‘Flesh wound’

Opinions differ on the merits of each approach.

“We desperately need regulation,” Gary Marcus of New York University told AFP.

He points out that “large language models” – the AI behind chatbots, translation tools, predictive text software and much else – can be used to generate harmful disinformation.

Whittaker questioned the value of laws aimed at tackling AI rather than the “surveillance business models” that underpin it.

“If you’re not addressing that at a fundamental level, I think you’re putting a band-aid over a flesh wound,” she said.

But other experts have broadly welcomed the US approach.

Regulating AI

AI was a better target for regulators than the more abstract concept of privacy, said Sean McGregor, a researcher who chronicles tech failures for the AI Incident Database.

But he said there could be a risk of over-regulation.

“The authorities that exist can regulate artificial intelligence,” he told AFP, pointing to the likes of the US Federal Trade Commission and the housing regulator HUD.

But where experts broadly agree is on the need to remove the hype and mysticism that surrounds AI technology.

“It’s not magical,” McGregor said, likening AI to a highly sophisticated Excel spreadsheet.

Joseph Boyle with Julie Jammot in San Francisco

© Agence France-Presse
