Liberals’ proposed AI law too vague and costly, big tech executives tell MPs


Representatives of big tech companies say a Liberal government bill that would begin regulating some artificial intelligence systems is too vague.

Amazon and Microsoft executives told MPs at a meeting of the House of Commons industry committee on Wednesday that Bill C-27 does not sufficiently differentiate between high-risk and low-risk AI systems.

Businesses said complying with the law as written would be costly.

Nicole Foster, director of global artificial intelligence and Canadian public policy at Amazon, said using the same approach for all applications is “highly impractical and could inadvertently stifle innovation.”

A peace officer’s use of AI is considered high impact in all cases, she said, even when an officer uses autocorrect while filling out a ticket for a traffic violation.

“Laws and regulations must clearly differentiate high-risk applications from those that pose little or no risk. This is a fundamental principle that we must respect,” Foster said.

“We should be very careful about imposing regulatory burdens on low-risk AI applications that can potentially bring much-needed productivity gains to Canadian businesses large and small.”

Microsoft gave its own example of how the law does not appear to distinguish between AI systems based on the level of risk they introduce.

The Minister of Industry, François-Philippe Champagne, provided some information on the amendments planned to the bill. (Adrian Wyld/The Canadian Press)

An AI system used to approve a person’s mortgage and manage sensitive details about their finances would be treated the same as one used to optimize package delivery routes using public data.

Industry Minister François-Philippe Champagne has provided some details on the amendments the government intends to propose to update the bill.

But despite those additional details, the companies said the definitions in the bill remain too ambiguous.

Amanda Craig, senior director of public policy at Microsoft’s Office of Responsible AI, said not differentiating between the two would “spread thin the time, money, talent and resources of Canadian companies — and could mean that limited resources are not sufficiently focused on the highest risks.”

Bill C-27 was introduced in 2022 to target what is described as “high-impact” AI systems.

But generative AI systems like ChatGPT, capable of creating text, images and videos, only became widely available to the public after the bill was first introduced.

The Liberals now say they will change the law to introduce new rules, including one requiring companies behind such systems to take steps to ensure the content they create is identifiable as AI-generated.

Earlier this week, Yoshua Bengio, nicknamed the “godfather” of AI, told the same committee that Ottawa should put legislation in place immediately, even if the legislation isn’t perfect.

Bengio, scientific director of Mila, the Quebec AI Institute, said a “superhuman” intelligence — one as smart as or smarter than a human — could arrive in just a few years.

Advanced systems could be used for cyberattacks, he said, and the law must anticipate that risk.

AI already poses risks. Deepfake videos, generated to make it appear as if a real person is doing or saying something they never did, can be used to spread misinformation, Bengio said.
