World leaders, tech CEOs and academics gathered in Paris this week for the AI Action Summit.
The focus of the event was a commitment to a common approach to artificial intelligence based on the sharing of science, solutions and standards.
However, if the aim of the summit was to present a global united front on the future of AI, the opposite was achieved.
It served to highlight the divisions that exist between Europe and the US when it comes to the regulation of the technology.
The Summit
French President Emmanuel Macron posted a montage of AI-generated deepfake videos of himself to his social media accounts to publicise the start of the AI Action Summit.
It was a humorous way to kick off the gathering but the tone quickly turned more serious when US Vice President JD Vance took to the stage and criticised European tech regulation.
“We believe that excessive regulation of the AI sector could kill a transformative industry,” Mr Vance told the summit.

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” he added.
Mr Macron and European Commission President Ursula von der Leyen committed to investing more in AI and to cutting EU red tape.
But the French President stressed that regulation was needed to ensure trust in AI, or people would end up rejecting it.
The US and the UK did not sign up to the final statement of the summit, which committed to making AI inclusive, open, ethical and safe.
Taoiseach Micheál Martin attended the summit and said Europe needed to balance innovation with regulation, adding that the EU risked being left behind if it were the only region regulating AI.
“Every time there’s a major breakthrough in technology, fears arise,” Mr Martin said.
“On the other hand, we also need to be very strongly aware of the enormous benefits that can accrue from breakthroughs in technology,” he added.
US v EU regulation
The recent launch of the low-cost Chinese AI model DeepSeek was a “wakeup call” to US tech companies, according to President Donald Trump.
It meant that the AI race between the US and China was well and truly underway.
But even before the arrival of DeepSeek, Mr Trump had begun to unwind AI safety regulations, revoking policies he said “act as barriers to American AI innovation.”

Contrast this with Europe, where the EU AI Act came into force in August 2024, banning dangerous artificial intelligence systems and imposing strict rules on high-risk AI models.
John Clancy is the founder and CEO of Irish artificial intelligence company Galvia AI.
“The AI Summit is to be welcomed because Europe needs to wake up when it comes to our position in the world of AI,” Mr Clancy said.
“But it did feel a little bit like watching a team of engineers trying to redesign a jet engine mid-flight; there is a sense that Europe is only catching up now.
“The US solution is to throw billions at it and win the so-called race with China, and that’s their big geopolitical battle.”
“The reality for Europe is that we’re kind of sitting in the middle and we have probably over-regulated,” he added.
“We have regulated for applications that have yet to be deployed. Across Europe, regulation is a good thing, but in moderation.”

Ireland’s place in the AI race
Mr Clancy believes that Ireland is uniquely positioned to help navigate the tensions that exist between the US and Europe when it comes to AI regulation.
“We can play a central role in bridging the gap between Europe’s regulatory ambitions and the US drive for innovation.”
He believes Ireland can take advantage of being a ‘halfway house’ between the US and Europe.
“I think Ireland should lead when it comes to AI,” he said.
“There should be grants for research and development, public-private partnerships, and a dedicated AI campus.
“Think of what CERN in Switzerland is for physics and nuclear research. We should have a CERN for AI here in Ireland.”
“We already have tonnes of data centres so we are in a strategically unique position,” he added.
AI risks
Amid a pushback against regulation, it is worth remembering that there are serious risks associated with AI if its development is allowed to proceed unchecked.
At the most severe level, the technology could be used to harm human life if it finds its way into the wrong hands.
“North Korea, or Iran, or even Russia could adopt and misuse the technology to create biological weapons,” former Google CEO Eric Schmidt told the BBC.
He warned that AI could also be used by terrorists.
“I’m always worried about the ‘Osama bin Laden’ scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people,” Mr Schmidt said.

Professor Geoffrey Hinton, the computer scientist dubbed the ‘Godfather of AI’, left his job at Google in 2023, warning of the dangers of the technology.
Last year, he told RTÉ News that “nasty things” will have to happen before the use of AI weapons is properly regulated.
“One of the threats is ‘battle robots’ which will make it much easier for rich countries to wage war on smaller, poorer countries and they are going to be very nasty and I think they are inevitably coming,” Professor Hinton said.
Aside from threats to human life, AI poses other risks.

There are concerns about its impact on the spread of disinformation, job displacement, discrimination and racial profiling.
Cybersecurity experts are worried about how the technology is being exploited by hackers.
“With the rise of open-source AI models like DeepSeek, the barriers to entry for cybercriminals have never been lower,” said Raluca Saceanu, CEO of Smarttech247.
“Just as the public can now easily generate AI-driven content, threat actors can just as easily create sophisticated phishing attacks, automate malware generation, and launch large-scale cyber assaults with minimal effort,” Ms Saceanu added.
“We strongly urge the government to match its commitment to AI innovation with a robust cybersecurity strategy and investment.
“There’s no point in modernising public services with AI if we’re simultaneously exposing critical infrastructure and sensitive data to AI-powered cyber threats.”
Was the AI Action Summit a success?
This is a question I put to ChatGPT to see what the world’s most popular AI chatbot made of it all.
“While it showcased significant commitments to advancing artificial intelligence, it also highlighted global divisions regarding AI governance,” it replied.
“The lack of consensus on regulatory frameworks and the evident geopolitical tensions suggest that achieving unified global AI governance remains a challenge,” according to ChatGPT.
We do not know what the next ChatGPT or DeepSeek will look like and what powers it will hold.
One thing that is certain is that the next advancements will happen quickly.
Some countries will choose to embrace and control those developments.
Others, which choose to try to regulate this fast-moving technology, may be left struggling to keep up.