The two blocs agreed to collaborate on interoperable and international standards for safer AI.

The European Union and the US have agreed to increase co-operation in the development of technologies based on artificial intelligence (AI), placing particular emphasis on safety and governance. The announcement came at the end of a meeting of the EU-US Trade and Technology Council in Leuven, Belgium, on Friday, and followed this week's broadly similar pact between the US and UK on AI safety.

The EU and US want to foster scientific information exchange between AI experts on either side of the Atlantic in areas such as developing benchmarks and assessing potential risks. The emphasis is on developing "safe, secure, and trustworthy" AI technologies.

Developing compatible regulatory environments

The two parties agreed to minimise divergence in their respective emerging AI governance and regulatory systems. In a statement, the EU and US sketched out areas of existing collaboration on AI applications: "Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction."

Working well together requires agreement on the meaning of terms, and to that end the two parties released an updated edition of their EU-US Terminology and Taxonomy for Artificial Intelligence, now available for download.

The European Union is seeking to regulate the development of artificial intelligence in the region with the recently approved AI Act. Despite calls for AI regulation in the US from industry heavyweights such as Google, Microsoft, and OpenAI, partisan splits in Congress make it unlikely that agreement will be reached before fresh Congressional elections in November.
The US government has, however, taken steps to put its own house in order by developing a strategy on the use of AI by federal agencies.

AI guardrails

Experts quizzed by CIO.com broadly welcomed the agreement between the EU and US as a positive development for the fast-moving field of artificial intelligence technologies. Gaurav Pal, CEO and founder of stackArmor, an IT security consulting company, and a member of the US AI Safety Institute Consortium, told CIO.com: "This is an important step in helping develop a common set of AI guardrails and frameworks between the EU and the US." Pal continued: "This will hopefully avoid creating silos and friction in conducting business between the US and EU for US AI companies."

Business leaders should keep abreast of the rapidly emerging regulatory framework around AI, because it is likely to affect business operations across multiple sectors, much as GDPR has affected US firms conducting business in the EU. The desire to steer away from clashing AI regulatory regimes on either side of the Atlantic is therefore welcome, according to Pal. "The co-operation agreement is very important as it seeks to develop a common set of regulatory standards and frameworks, thereby reducing the cost and complexity of compliance," Pal explained.

Researchers gave the development of US-EU coordination on AI a cautious welcome, while looking for more detail on the specifics. "AI regulation necessitates joint efforts from the international community and governments to agree a set of regulatory processes and agencies," Angelo Cangelosi, professor of machine learning and robotics at the University of Manchester in England, told CIO.com.
"The latest UK-US agreement is a good step in this direction, though details on the practical steps are not fully clear at this stage, but we hope that this will continue at a wider international level, for example with integration with the EU AI agencies, as well as in the wider UN framework," he added.

Risks of AI misuse

Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, argued that focusing on the regulation of commercial AI offerings loses sight of the real and growing threat: the misuse of artificial intelligence by criminals to develop deepfakes and more convincing phishing scams.

"Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats, as they mostly focus on commercial AI offerings that criminals do not use," Carlsson said. "As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety."

"At this stage in the development of AI, investment in testing and safety is far more effective than regulation," Carlsson argued. Research on how to effectively test AI models, mitigate their risks, and ensure their safety, carried out through new AI Safety Institutes, represents an "excellent public investment" in ensuring safety whilst fostering the competitiveness of AI developers, Carlsson said.

Legal challenges

Many mainstream companies are using AI to analyze, transform, and even produce data – developments that are already throwing up legal challenges on myriad fronts. Ben Travers, a partner at legal firm Knights who specializes in AI, IP, and IT issues, explained: "Businesses should have an AI policy, which dovetails with other relevant policies, such as those relating to data protection, IP and IT procurement.
The policy should set out the rules on whether employees can (or cannot) engage with AI."

Recent instances have raised awareness of the risks to employers when employees upload otherwise protected or confidential information to AI tools, while the technology also poses issues in areas such as copyright infringement.

"Businesses need to decide how they are going to address these risks, reflect these in relevant policies and communicate these policies to their teams," Travers concluded.