The AI Action Summit in Paris brought together top AI experts, global leaders, and major tech executives. But it also sparked a debate between China’s former UK ambassador, Fu Ying, and Professor Yoshua Bengio, often called the ‘AI Godfather.’ Their dispute? AI safety and the role of transparency in mitigating risks.
A Heated Exchange Over AI Safety
During a panel discussion ahead of the two-day summit, Fu Ying couldn’t resist teasing Prof. Bengio about the international AI safety report he co-authored. She thanked him for the “very, very long” document, noting that the Chinese translation ran to 400 pages, which made it a challenge to read in full.
But the real tension arose when she took a jab at the title of the AI Safety Institute, of which Bengio is a key member. China, she noted, had taken a different approach. Rather than calling its organization an ‘Institute,’ it opted for ‘The AI Development and Safety Network,’ a name that emphasized collaboration over restriction. The subtle dig also highlighted the philosophical divide between China and Western nations over AI governance.
An Industry at a Crossroads
The AI Action Summit featured key players from 80 countries. Notable figures included OpenAI CEO Sam Altman, Microsoft President Brad Smith, and Google CEO Sundar Pichai. Absent from the guest list? Elon Musk. Whether he would make a surprise appearance remained unknown.
The summit came at a critical time. Just weeks earlier, China’s DeepSeek unveiled a powerful, low-cost AI model, shaking up the AI space and challenging US dominance. The debate between Fu Ying and Prof. Bengio reflected these broader geopolitical tensions over AI.
Is Transparency Good or Bad?
Fu Ying argued that open-source AI fosters safety. When AI models are accessible, more eyes can scrutinize their mechanics, making it easier to detect and fix problems. She criticized US tech giants for keeping their AI models closed, warning that secrecy breeds uncertainty and fear.
Prof. Bengio pushed back. While he acknowledged the benefits of open-source models, he warned that unrestricted access also opens the door for bad actors. Criminals could exploit AI for malicious purposes, making regulation essential.
AI Regulations
On Tuesday, global leaders including French President Emmanuel Macron, Indian Prime Minister Narendra Modi, and US Vice President JD Vance joined discussions on AI’s impact on jobs, public services, and risk management. A $400 million partnership was also announced to fund AI ventures aimed at public welfare, particularly in healthcare.
In an interview with the BBC, UK Technology Secretary Peter Kyle stressed that Britain couldn’t afford to lag in AI adoption. Dr. Laura Gilbert, an AI advisor to the UK government, echoed this view, emphasizing AI’s potential to improve healthcare. “How are you going to fund the NHS without leveraging AI?” she asked.
Matt Clifford, architect of the UK’s AI Action Plan, predicted AI’s impact would be even more disruptive than the transition from typewriters to word processors. Marc Warner, CEO of AI firm Faculty, took it a step further: “The industrial revolution automated physical labor; AI is automating cognitive labor.” He speculated that by the time his two-year-old grows up, traditional jobs might look completely different.