When more than 50 tech companies, universities and startups from around the world united to form the AI Alliance last December, much of the globe was still making sense of the rapid advances in artificial intelligence.
With regulators eyeing the technology and questions swirling about whether its use would amplify biases and discrimination, take people’s jobs or even spell the end of humanity, the industry group was meant to parse through the worries and find practical ways to move forward with AI.
About seven months later, the organization, led by IBM and Meta Platforms Inc., numbers roughly 100 members and has formed working groups to address everything from AI skills to safety.
The Canadian Press asked members what measures Canada should prioritize as AI evolves.
More risk, more reward
Abhishek Gupta, founder of the Montreal AI Ethics Institute, considers Canada “the original home of AI.”
Some of the technology’s pioneers, including Yoshua Bengio and Geoffrey Hinton, have done much of their work within the country. Long before AI was buzzy, Canada was a hotbed for research in the sector.
But Gupta is worried about the country’s ability to turn AI into profits.
“Where we started to lose our edge, unfortunately, is in commercialization,” he said.
Some of that stems from Canadian talent seeking higher pay in the U.S. and other countries, where Gupta has heard of engineers making just shy of $1 million a year. U.S. venture capital firms with deeper pockets — and an often bolder approach — can outspend those in Canada, further driving home-grown companies away, he said.
The pattern continues when investors sell part or all of their ownership in a company. Many Canadian founders have opted for an exit that hands their business to a firm outside of Canada because of how much money buyers are willing to pay elsewhere.
As an example of how AI talent has seeped out of the country, Gupta points to Element AI, a Montreal-based firm that created AI solutions for large organizations, which was sold to California-based ServiceNow in 2020.
“It’s not great that it didn’t continue to remain a Canadian company … because the big thing we want to see is, of course, a translation of research into commercial success,” he said.
Jeremy Barnes, Element AI’s former chief technology officer and now vice-president of AI at ServiceNow, similarly laments how Canada has been unable to take advantage of the edge it once had.
To turn things around, he thinks the country has to stop being so conservative and VC firms need to focus less on protecting themselves from losses and more on how to “share in the benefits” of startups.
“You have got to put your chips in the game in order to be able to win the jackpot,” he said.
Canada needs to look outside the “highly visible companies” and pour support into breakout businesses that are garnering less attention but have lots of potential, Barnes said.
The right guardrails
When the Alliance was founded, countries were already shaping their AI regulations.
U.S. President Joe Biden had issued an executive order requiring AI developers to share safety test results and other information with the government and the European Union had implemented tough compliance requirements.
Manav Gupta, vice-president and chief technology officer at IBM Canada, likes how quickly the U.S. government moved, and he favours the EU policy for its layered approach, which recognizes that AI systems tied to weapons, for example, carry very different risks than those handling tasks like processing welfare claims.
He thinks the two policies have “championed the way” for other countries, acting as a benchmark for what AI regulations should look like worldwide.
Canada tabled an AI-centric bill in 2022, but it won’t take effect until at least 2025. In the meantime, the country has relied on a voluntary code of conduct, which IBM and a few dozen other companies have signed.
Any policy the country lands on, Gupta said, should have a “well-defined framework” with a tiered approach to risks.
“The greater the risk of the technology, the higher the grading of the risk and therefore, the greater the regulation and the greater transparency,” he said.
The country should also be careful not to stray too far from the global direction regulations are taking, said ServiceNow’s Barnes.
“What it will do if it’s done wrong is it will create friction, which makes it harder for Canadian companies to compete with others, so to some extent, the role of Canada can’t be to go it alone.”
Focus on open-source AI
As gains in AI become more frequent, Kevin Chan, global policy campaign strategies director at Facebook- and Instagram-owner Meta, is advocating for the tech industry to embrace the open-source model.
With open-source models, the code underpinning the AI system is freely available for anyone to use, modify and build on, expanding access to AI, bolstering development and research, and even bringing transparency to the technology.
“That’s actually how innovation happens,” Chan said of the open-source philosophy.
“We want to make sure that there is space that exists for people to choose to use open models so that we can get faster innovation, so that we can democratize this technology to more people.”
Open-source models have their downsides, though: people can use them to cause harm, and once vulnerabilities become known, hackers can attack multiple systems at once. But Chan sees the approach as an opportunity.
“Open models are great for countries like Canada, who may not have the … resources to build their own frontier models,” he said.
This report by The Canadian Press was first published June 21, 2024.