Navigating the EU AI Act: ensuring AI literacy in legal practices
The Law Society Technology Committee outlines key steps to help meet the AI Act’s literacy obligation from February 2025.
As the governance of AI evolves, the EU's AI Act (the Act) introduces critical requirements for organisations that use AI systems. While much of the Act focuses on requirements under a risk categorisation system (minimal, limited, high and unacceptable risk), there is a standalone obligation that applies regardless of the risk category of the AI system being used.
Effective from 2 February 2025, Article 4 of the Act requires organisations, including legal practices, to ensure that all staff (and other persons dealing with the operation and use of AI systems on their behalf) have a sufficient understanding of those systems, tailored to their specific roles and the context in which the systems are deployed.
Why AI literacy?
Recital 20 of the Act emphasises the important role AI literacy plays in allowing us to take advantage of AI technologies whilst fostering a culture of responsible AI use. The Act requires all organisations using AI systems to take measures to ensure that the knowledge and skills of their personnel are sufficient. This requirement is not limited to high-risk AI systems; it extends to any legal practice that deploys AI of any kind.
Steps towards compliance
To comply effectively with the AI literacy provisions, and in line with how legal practices approach the profession's CPD obligations, a structured approach is necessary. AI literacy does, however, involve an additional step not found in other training needs analyses: an audit of the practice's existing use of the technology must first be undertaken.
1. Audit AI systems
The first step is to conduct an audit of all AI systems currently in use, together with an assessment of AI systems the practice may adopt in future. This enables the practice to build a complete picture of its AI landscape.
For existing systems, analysis through a people (those using the system and those affected by it), process (how and where it is used) and technology (the type of AI involved) framework may be useful, as a technology-only analysis will not provide sufficient information to design a comprehensive training programme.
2. Assess existing knowledge
A training needs analysis should be carried out to assess the current level of AI literacy among staff. While staff may be aware of AI, they may lack the knowledge to use AI systems properly and effectively; the needs analysis should highlight these gaps.
3. Design tailored training programmes
Training should be role-specific, addressing the responsibilities of staff members in relation to AI systems. Key components of the training may include the following:
- Use considerations: covering compliance, ethical implications, and the risks associated with AI.
- Technical aspects: providing information on best practices for AI deployment and how to use AI systems properly and effectively, as well as guidance on interpreting AI outputs.
- Risk management: focusing on identifying and mitigating risks linked to prohibited AI practices.
4. Implement monitoring mechanisms
Establish processes to monitor compliance. This includes documenting training sessions, attendance, and assessments to demonstrate adherence to the Act and to any future codes of conduct or guidelines issued under it.
5. Maintain comprehensive training records
Legal practices must keep detailed records of all training activities across the organisation. This documentation is crucial for demonstrating compliance and may align with existing CPD requirements, though it must be expanded to include all staff.
6. Adapt training regularly
Given the rapid evolution of AI technology and regulatory frameworks, training programmes should be updated regularly. A cyclical approach ensures that staff remain informed about new developments in the firm's AI usage and its compliance obligations.
The consequences of non-compliance
Failure to comply with the AI literacy provisions can have significant implications for legal practices. Potential consequences include (but are not limited to) fines, litigation, reputational damage, and a loss of client trust. In addition, non-compliance may hinder a firm's ability to use AI effectively, impacting its competitiveness and capacity for innovation in an increasingly digital legal landscape.
The AI literacy obligations under the Act present both challenges and opportunities for legal practices. By proactively addressing these obligations through tailored training and robust governance, legal practices can not only ensure compliance but also foster a culture of responsible AI use. As the legal profession continues to adapt to technological advancements, AI literacy may prove essential for navigating the future of law.