AI tools like ChatGPT are putting pressure on rule makers in Strasbourg and Brussels. The regulations could be stricter than originally planned. Credit: Loek Essers

The European Data Protection Board (EDPB) wants to set up a task force to take a closer look at AI tools like ChatGPT, a move widely interpreted as a sign that European data protection officers could set stricter rules for the use of AI.

The Italian data protection authority in particular got a head start a few weeks ago. Because ChatGPT operator OpenAI could not demonstrate a working age verification system, and because the models behind the AI tool were trained on data from Italian citizens without their knowledge, the authority banned ChatGPT and set the operator a deadline of late April to present plans for improvement.

ChatGPT is threatened with bans across Europe

Other countries in Europe could follow suit with comparable measures. In Germany, for example, Federal Commissioner for Data Protection and Freedom of Information Ulrich Kelber announced that his agency was closely monitoring developments in Italy, and said that an AI task force of data protection officers has taken on the matter.

A ban could also loom in Spain if data protection officials there conclude that ChatGPT violates the EU General Data Protection Regulation (GDPR). They have therefore announced a preliminary investigation to shed more light on OpenAI’s practices.

The EDPB task force aims to promote cooperation and the exchange of information between the various data protection authorities. Member states also hope to align their political positions, according to an insider at a national supervisory authority quoted by Reuters, who asked not to be named.
All of this will take time. The point is not to punish OpenAI, ChatGPT’s operator, or to issue rules immediately, but rather to create general and responsible guidelines that make the use of AI more transparent.

Meanwhile, the EU is working on a new legal framework intended not only to meet the challenges and opportunities of AI effectively, but also to strengthen trust in these rapidly evolving technologies. The framework is also meant to regulate potential effects on individuals, society, and the economy in the best possible way, and to create an economic environment in which research, innovation, and entrepreneurship can flourish. The European Commission aims to increase private and public investment in AI technologies to €20 billion annually.

Commitment by providers is not enough

Although ChatGPT and AI have only recently been developing rapidly and stealing headlines around the world, work on such a set of AI rules has been going on for years. As a result, the rules planned so far could be tightened again before anything comes into force. Despite the dynamics of an ever-changing landscape, the European Parliament intends to enact the world’s toughest regulations for AI use.

“Companies’ duty of care alone is not enough,” said Dragoș Tudorache, member of the European Parliament and co-negotiator, in a recent Financial Times article.

To fulfil this objective, the European Parliament plans to oblige AI developers to disclose which data they use to train their algorithms and models. Facial recognition using AI in public spaces would be banned entirely, which is likely to lead to heated debate with police authorities. In addition, AI manufacturers, not users, would be held liable for the misuse of their solutions. However, an agreement between the EU bodies in Strasbourg and Brussels won’t happen overnight.
Once the EU Parliament has a draft, it will be further coordinated with the EU Commission, the individual member states, and MEPs, and a final draft law should result from these negotiations. The aim is to pass this law in the current legislative period, which lasts until 2024.

Meanwhile, representatives of the IT industry are warning against strict rules and bans. “We have to drive forward the technological development of AI in Germany and develop a practical set of rules for its application in Europe and worldwide,” said Bitkom president Achim Berg. “The current ban discussion, as initiated by the Federal Data Protection Commissioner, is going in the completely wrong direction.”