

Britain urged not to diverge from EU on AI rules

27 Jun 2023

The Law Society of England and Wales has urged the British Government to ensure that its approach to regulating artificial intelligence (AI) systems does not diverge from the EU's nascent regulatory regime.

The call came in the society's response to a consultation on the British Government's white paper on AI regulation.

Divergence from EU and US principles-based regimes “adds complexity for law firms when determining which ethical guidelines apply and in which jurisdictions,” the submission states.

Liability

The solicitors’ body also highlights the “urgent need” for explicit regulations on liability across the lifecycle of an AI-based system.

The Law Society Gazette of England and Wales said that concerns about the regulation of AI technology had rocketed up the political agenda since the emergence of so-called large language-model systems, such as ChatGPT.

Last month, British Prime Minister Rishi Sunak said that he wanted Britain to become a global centre for AI under “safe and secure” rules.

Meanwhile, the EU is in the process of drawing up an Artificial Intelligence Act designed to “ensure that AI developed and used in Europe is fully in line with EU rights and values”.

Call for AI officers

In its 48-page response to a white paper published in March by the Department for Science, Innovation and Technology, the Law Society calls for a “nuanced, balanced approach” to regulation, with a blend of adaptable regulation and firm legislation.

The organisation says that the issue of liability requires strong regulation.

Current routes for contestability and redress for AI-related “harms” are inadequate, mainly because the existing legal framework lacks clear definitions for terms such as “meaningful human intervention”, the society states.

It recommends that the Law Commission or the British Government review crimes and civil offences involving an element of subjective mental state or intention, to establish whether liability for such harm-creating activities should also extend to AI.

Entities above a certain size, or working in high-risk areas, should be required to appoint an AI officer, the submission states.

Other recommendations cover the need for organisations to be transparent in their use of AI, and for decisions made by such systems to be “interpretable”.

Gazette Desk
