Google and the European Commission will collaborate on AI ground rules


The world’s governments have taken note of generative AI’s potential for massive disruption and are acting accordingly. European Commission (EC) industry chief Thierry Breton said Wednesday that the Commission would work with Alphabet on a voluntary pact to establish artificial intelligence ground rules, according to Reuters. Breton met with Google CEO Sundar Pichai in Brussels to discuss the arrangement, which will include input from companies based in Europe and other regions. The EU has a history of enacting strict technology rules, and the alliance gives Google a chance to shape the guidelines while steering clear of trouble down the road.

The compact aims to set up guidelines ahead of official legislation like the EU’s proposed AI Act, which will take much longer to develop and enact. “Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” Breton said in a statement. He encouraged EU nations and lawmakers to settle on specifics by the end of the year.

In a similar move, EU tech chief Margrethe Vestager said Tuesday that the bloc would work with the United States on establishing minimum standards for AI. She hopes EU governments and lawmakers will “agree to a common text” for regulation by the end of 2023. “That would still leave one if not two years then to come into effect, which means that we need something to bridge that period of time,” she said. Topics of concern for the EU include copyright, disinformation, transparency, and governance.

OpenAI’s ChatGPT, the service most associated with AI fears, exploded in popularity after its November launch, on its way to becoming the fastest-growing application ever (despite not having an official mobile app until this month). Unfortunately, its viral popularity is paired with legitimate fears about its capacity to upend society. In addition, image generators can produce AI-generated “photos” that are increasingly difficult to discern from reality, and speech cloners can mimic the voices of famous artists and public figures. As video generators mature, deepfakes will become even more of a concern.

Despite its undeniable potential for creativity and productivity, generative AI can threaten the livelihoods of countless content creators while posing new security and privacy risks and spreading misinformation and disinformation. Left unregulated, corporations tend to maximize profits no matter the human cost, and generative AI is a tool that, in the hands of bad actors, could wreak immeasurable global havoc. “There is a shared sense of urgency. In order to make the most of this technology, guard rails are needed,” Vestager said. “Can we discuss what we can expect companies to do as a minimum before legislation kicks in?”
