Elon Musk, Steve Wozniak, and DeepMind scientists also signed the letter.
Yoshua Bengio, the co-founder of Montréal-based Mila, along with hundreds of tech leaders, artificial intelligence (AI) researchers, policymakers and other concerned parties, signed an open letter urging all AI labs to agree to a six-month pause on training systems that are more powerful than GPT-4.
At a press conference featuring Bengio, MIT physics professor Max Tegmark, and Emilia Javorsky, director of multistakeholder engagements at the Future of Life Institute, all three panelists agreed that the goal isn’t to put a stop to all AI technology, development and research. Rather, it’s to give private industry, governments, and the public time to fully grasp AI and some of its applications, and to create appropriate regulations around it.
The open letter was meant to flag how quickly GPT-powered AI is moving, to convey the urgent need to regulate it, and to suggest a feasible timeline that most stakeholders could agree to. The idea is to give everyone involved enough breathing room to tackle the implications of developing this particular area of AI further. The panelists acknowledged, however, that six months is not necessarily enough time to create the required transparency, discourse, and governance around AI.
“The speed at which it’s moving is outpacing our ability to make sense of it.”
– Emilia Javorsky, Future of Life Institute
“The speed at which it’s moving is outpacing our ability to make sense of it, know what risks it poses, and our ability to mitigate those risks,” Javorsky said at the conference. “Six months gives us the time to create governance around it and to understand it better. It buys us time for those conversations, risk analyses and risk mitigation efforts.”
The open letter and comments from Canadian AI experts come as the federal government has tabled legislation that includes potential regulation for AI. The federal government tabled Bill C-27 in June, wide-ranging privacy legislation that includes what would be Canada’s first law regulating the development and deployment of high-impact AI systems.
The United States and the European Union also currently have legislation on the table that could have implications for the creation and deployment of AI.
Speaking to his concerns about the rapid progression of AI systems and the need for regulation, Bengio said, “We can’t let the industry regulate itself … Governments have to provide guidelines and put it under scrutiny.”
While some have expressed concerns about Bill C-27, other experts, like Bengio, have recognized the need for such a framework. Speaking at the conference, Bengio added that what he particularly likes about the way Bill C-27 was drafted is that its principles will be able to adapt quickly if needed.
Bengio cited conversations he’s had with Canadian government officials, whom he did not name.
“It’s a real preoccupation and it’s on their mind,” Bengio said. “There’s a lot of consensus over the need for a framework.”
This isn’t the first time Bengio, in particular, has called for better regulation around AI. Just last week, Mila and UNESCO released a joint book on AI governance.
Among the risks that concern Bengio and those who signed the letter is the spread of disinformation and propaganda through our information channels, namely social media. Bengio noted that it’s already possible to manipulate information to look very authentic, citing deep-fake content, and he – like the letter – suggested that a stamp or watermark be required to help audiences distinguish between what’s generated by AI and what isn’t. Tegmark agreed, saying that democracy depends on it.
“In a democracy, humans are able to make decisions about difficult public issues,” he explained. “That works only if people are living in the same reality, if people have a shared understanding of what’s going on … Because of AI’s ability to create fake and persuasive content, there’s a risk we’ll be outnumbered by AI algorithms.”
This request for a pause, the panelists said, comes to them from multiple stakeholders, including tech industry leaders who specialize in AI. According to Tegmark, AI companies have expressed concerns about competitive pressure driving them to produce market-ready technology too soon so that rivals don’t get there first.
“People who work in AI see those risks more clearly,” Tegmark argued. “No company has the power to stop this alone. That’s why, as a company, you want [all companies] to pause at the same time.”
This might explain why AI-industry players like Elon Musk, Julien Billot (Scale AI), Emad Mostaque (Stability AI), and DeepMind scientists, as well as tech notables like Apple co-founder Steve Wozniak, all signed the open letter. To date, thousands of people have signed it.