PARIS, France – U.S. technology giants this week have talked up the benefits of artificial intelligence for humanity, turning on the charm at one of Europe's largest industry events as regulators globally work to curb the harms associated with the tech.
At the Viva Tech conference in Paris on Wednesday, Amazon Chief Technology Officer Werner Vogels and Google Senior Vice President for Technology and Society James Manyika spoke about the great potential AI is unlocking for economies and communities.
Their comments come as the world's first major law governing AI, the EU's AI Act, received its final green light. Regulators are looking to rein in harms and abuses of the technology, such as misinformation and copyright infringement.
Meanwhile, European Commissioner Thierry Breton, a major architect of rules around Big Tech, is set to speak later in the week.
Vogels, who is tasked with driving technology innovation within Amazon, said that AI can be used to "solve some of the world's hardest problems."
He said that, while AI has the potential to make businesses of all stripes successful, "at the same time we need to use some of this technology responsibly to solve some of the world's hardest problems."
Vogels said that it was important to talk about "AI for now" – in other words, the ways that the technology can benefit populations around the world currently.
He mentioned examples of how AI is being used in Jakarta, Indonesia, to link small rice farm owners to financial services. AI could also be used to build up a more efficient supply chain for rice, which he termed "the most important staple of food," with 50% of the planet dependent on rice as their main food source.
Manyika, who oversees efforts across Google and Alphabet on responsible innovation, said that AI can lead to huge benefits from a health and biotechnology standpoint.
He said a version of Google's Gemini AI model recently released by the firm is tailored for medical applications and able to understand context relating to the medical domain.
Google DeepMind, the key unit behind the firm's AI efforts, also released AlphaFold 3, a new version of its AI model that can understand "all of life's molecules, not just proteins," and has made the technology available to researchers.
Manyika also called out innovations the company announced at its recent Google I/O event in Mountain View, California, including new "watermarking" technology for identifying AI-generated text, extending tools it previously launched for images and audio.
Manyika said Google open-sourced its watermarking tech so that any developer can "build on it, improve on it."
"I think it's going to take all of us. These are some of the things, especially in a year like this, when a billion people around the world have voted, so concerns around misinformation are important," Manyika said. "These are some of the things we should be focused on."
Manyika also stressed that much of the innovation Google has been bringing to the table has come from engineers at its French hub, adding that the company is committed to sourcing much of its innovation from within the European Union.
He said that Google's recently introduced Gemma AI, a lightweight, open-source model, was developed heavily at the U.S. internet giant's French tech hub.
EU regulators set global rules
Manyika's comments arrived just a day after the EU approved the AI Act, a groundbreaking piece of legislation that sets comprehensive rules governing artificial intelligence.
The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the tech are treated differently depending on the perceived threats they pose.
"I worry sometimes when all our narratives are just focused on the risks," Manyika said. "Those are very important, but we should also be thinking about, why are we building this technology?"
"All of the developers in the room are thinking about, how do we improve society, how do we build businesses, how do we do imaginative, innovative things that solve some of the world's problems."
He said that Google is committed to balancing innovation with "being responsible," and "being thoughtful, about will this harm people in any way, will this benefit people in any way, and how we keep on researching these things."
Major U.S. tech firms have been trying to win favor with regulators as they face criticism that their massive businesses have an adverse effect on smaller companies in areas ranging from advertising to retail to media production.
Specifically, with the advent of AI, opponents of Big Tech are concerned about the growing threat of advanced generative AI systems undermining jobs, exploiting copyrighted material for training data, and producing misinformation and harmful content.
Friends in high places
Big Tech has been looking to curry favor with French officials.
Last week, at the "Choose France" foreign investment summit, Microsoft and Amazon signed commitments to invest a combined 5.2 billion euros ($5.6 billion) of funding for cloud and AI infrastructure and jobs in France.
This week, French President Emmanuel Macron met with Eric Schmidt, former CEO of Google, Yann LeCun, chief AI scientist of Meta, and Google's Manyika, among other tech leaders, at the Elysee Palace to discuss ways of making Paris a global AI hub.
In a statement issued by the Elysee, and translated into English via Google Translate, Macron welcomed leaders from various tech firms to France and thanked them for their "commitment to France to be there at Viva Tech."
Macron said that the “pride is mine to have you here as talents” in the global AI sphere.
Matt Calkins, CEO of U.S. enterprise software firm Appian, told CNBC that large tech firms "have a disproportionate influence on the development and deployment of AI technologies."
"I am concerned that there is potential for monopolies to emerge around Big Tech and AI," he said. "They can train their models on privately owned data – as long as they anonymize it. This isn't enough."
"We need more privacy than this if we use individual and business data," Calkins added.