

Watching the clock

22 May 2024 / employment


The French supervisory authority has fined Amazon France Logistique €32 million for its ‘excessively intrusive’ employee-monitoring system. Rosemarie Blake, Colin Rooney, Sonam Gaitonde, and Cían Beecher pack the boxes

In the brave new world of remote and hybrid work, practices surrounding workplace productivity continue to pose data-protection challenges.

The recent decision of the French supervisory authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), to fine Amazon France Logistique €32 million for “an excessively intrusive system for monitoring the activity and performance of employees” provides a timely reminder that monitoring employees, whether they work remotely or in the office, requires careful analysis.

In the case of Amazon’s monitoring practices, scanners were put in place to document how long it took warehouse workers to carry out certain tasks and whether they quality-checked articles within a set minimum time frame.

This information was stored and used to calculate indicators providing information on the quality, productivity, and periods of inactivity of each employee, and was further utilised as part of employee coaching and performance reviews.

Paragraph 168 of the CNIL’s decision states: “Infringements of the principles of minimisation and of the obligation to have a legal basis are therefore reflected in almost continuous and massive processing of indicators relating to all direct tasks and to the performance of employees, which result in disproportionate computer surveillance of their activity.

It recalls that this processing makes it possible to evaluate the employee working on direct tasks by means of the detailed consultation of the data in the tools, in order to maintain a certain pace and quality of activity.

It points out that awareness-raising letters can be sent following only one or two quality errors, observed over a week, or a drop in productivity in some cases of less than 10%, and notes that, in some positions, ‘underperformance’ observed over a single day can lead to the implementation of coaching.

Accordingly, it considers that such processing of personal data induces disproportionate pressure on workers, disproportionately affecting their rights and freedoms in the light of the company’s economic and commercial objectives.”

Compliance failures

The CNIL found that Amazon’s practices failed to comply with the data-minimisation principle under article 5(1)(c) of the GDPR and failed to ensure lawful processing under article 6 of the GDPR.

Regarding the types of personal data processed, three of the company’s indicators were found to be non-compliant (a short illustrative sketch of the threshold logic follows the list):

  • The ‘stow machine-gun’ indicator, which provided an error message when an employee scanned an item “too quickly” (that is, less than 1.25 seconds after scanning a previous item),
  • The ‘idle-time’ indicator, which signalled periods of scanner downtime of ten minutes or more, and
  • The ‘latency-under-ten-minutes’ indicator, which signalled periods of scanner interruption between one and ten minutes.
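To make these thresholds concrete, the following is a minimal sketch (in Python) of how gaps between scanner events could be classified against the three indicators. The function name, data structures, and event list are illustrative assumptions made for this article only; they are not drawn from Amazon’s actual systems or the text of the CNIL’s decision.

```python
from datetime import datetime, timedelta

# Thresholds as described in the CNIL's decision.
STOW_MACHINE_GUN = timedelta(seconds=1.25)  # items scanned "too quickly"
SHORT_LATENCY = timedelta(minutes=1)        # interruptions of 1-10 minutes
IDLE_TIME = timedelta(minutes=10)           # downtime of 10 minutes or more

def classify_scan_gaps(scan_times: list[datetime]) -> dict[str, int]:
    """Count how many gaps between consecutive scans fall under each indicator.

    `scan_times` is a hypothetical, time-ordered list of one employee's
    scanner events; the indicator names mirror those in the decision.
    """
    counts = {"stow_machine_gun": 0,
              "latency_under_ten_minutes": 0,
              "idle_time": 0}
    for earlier, later in zip(scan_times, scan_times[1:]):
        gap = later - earlier
        if gap < STOW_MACHINE_GUN:
            counts["stow_machine_gun"] += 1
        elif SHORT_LATENCY <= gap < IDLE_TIME:
            counts["latency_under_ten_minutes"] += 1
        elif gap >= IDLE_TIME:
            counts["idle_time"] += 1
    return counts
```

Applied continuously to every scan of every employee, it is precisely this kind of gap-by-gap classification that the CNIL found disproportionate.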

The CNIL found that the processing of all three indicators could not be based on legitimate interest, as it led to excessive monitoring of the employee when balanced against the commercial objectives pursued by Amazon.

The CNIL noted that Amazon already had access to numerous indicators in real time, both individual and aggregated, to achieve its objective of quality and safety in its warehouses, and that, as implemented, the processing required employees to justify every break or interruption to their work. Accordingly, the processing was found to be excessively intrusive.

The decision also found that the company had failed to inform employees, before their data was collected, that their personal data would be processed by the scanners, in breach of its information and transparency obligations under articles 12 and 13 of the GDPR. It further found a failure to comply with the obligation to ensure the security of the personal data captured, pursuant to article 32 of the GDPR.

DPC guidance

The Data Protection Commission (DPC) has noted in previous guidance on data protection in the workplace that employers have a legitimate interest in protecting their business, reputation, resources, and equipment.

The DPC cautions within this guidance that any limitation of the employee’s right to privacy in the workplace, particularly with regard to monitoring software, should be proportionate to the likely impact on the employer’s legitimate interests.

The commission further notes that, in the ordinary course of business, employers should consider implementing other less-intrusive means of monitoring employees.

Lawful basis

Employers must have a lawful basis to process personal data under article 6 of the GDPR (such as consent, contractual necessity, legal obligation, vital interests, or legitimate interests).

In addition to identifying an appropriate ‘article 6 ground’, and to the extent that an employer is processing health data (for example, information regarding reasonable-adjustment requests, ergonomic-assessment information, or details of medical leave), the employer will also need to ensure that it complies with one of the exceptions in article 9 of the GDPR.

As noted by the CNIL, employers also need to inform their employees, by way of an appropriate privacy notice, of the legal basis relied upon to collect personal data and the purposes for which it is collected.

Disproportionate impact

The CNIL’s decision demonstrates the readiness of data-protection authorities to impose fines for the unlawful monitoring of employees in the workplace, where monitoring has a disproportionate impact on worker privacy.

It reinforces the need for employers to demonstrate compliance with their existing data-protection obligations when processing employee data and, critically, for employers to undertake appropriate risk assessments in advance of commencing any employee-monitoring measures.

Employers may find it challenging to justify any measures that have a high impact, where less intrusive measures are available. The CNIL’s decision is currently under appeal, so employers should watch this space for further guidance.

Rosemarie Blake, Colin Rooney, Sonam Gaitonde and Cían Beecher are members of the Technology and Innovation Group at Arthur Cox LLP.

PRACTICAL GUIDANCE FOR EMPLOYERS

Consider the use of AI carefully: conduct a data-protection-impact assessment and a legitimate-interests assessment.

Regarding the legal basis for monitoring, the DPC notes that, while ‘legitimate interests’ is the most flexible legal basis to rely on, employers should exercise caution before doing so.

In relying on legitimate interests, employers should undertake a full legitimate-interests test, establishing: (a) that a legitimate interest justifying the processing exists, (b) that the processing of the personal data is necessary for the realisation of that interest, and (c) that the interest prevails over the rights and interests of the data subject.

Examples of legitimate interests cited by the DPC include fraud prevention, commercial interests, or broader benefits to wider society.

If a controller is unsure of the outcome of the balancing test, it may be safer to consider another lawful basis for processing, especially where processing is unexpected or poses a high level of risk.

If processing activities involved in the monitoring involve high-risk processing – for example, monitoring of turnstile data, the use of a large-scale CCTV programme, or tracking of employee vehicles (see the ‘Look it up’ panel) – a data-protection-impact assessment will also be required.

In addition, if employers are using AI to undertake monitoring activities, consideration will need to be given to compliance with the obligations of the forthcoming AI Act.

The AI Act was approved by the European Parliament on 13 March 2024 and is expected to be finally adopted in the coming months, pending a lawyer-linguist check through the corrigendum procedure.

In particular, where the employer’s activities involve a ‘high-risk’ AI system, employers will, at a minimum, need to consider how transparency is provided to employees, how human review is embedded in the process, and how risk management is incorporated into the AI system’s lifecycle.

LOOK IT UP

CASES:

LEGISLATION:

LITERATURE:
