BY PAUL DE HERT AND GUILLERMO LAZCOZ
Article 22 of the General Data Protection Regulation (Regulation (EU) 2016/679) declares the right not to be subject to automated individual decision-making, in the following terms:
1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent.
3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
Despite the theoretical interest this provision has aroused among legal scholars, it is now in a delicate position. Its ambiguity, lack of clarity and complexity make it difficult to apply. Citing this unhelpfulness, the UK government is now considering minimising the provision’s content and the safeguards it contains for the rights of data subjects. In this post, we argue in the opposite direction. Our proposal is to clarify and simplify Article 22 and to strengthen its three pillars: transparency, contestability, accountability.
Brexit, the good life and ‘yes’ to more machine decisions
In what we consider a radical proposal, the UK government announced on 10 September 2021 that, in the name of promoting a data-driven economy and society, it is planning to do away with Article 22 GDPR and the right to human intervention for artificial intelligence (AI) systems. Although the complete removal of Article 22 – as initially put forward – now seems off the table, the UK’s new proposal aims to broaden considerably the use of automated decision-making (ADM) on the basis of legitimate interests or public interests, including in relation to sensitive personal data. Given that the use of ADM is likely to increase greatly in many industries, the proposal argues that the need for human review is neither practicable nor proportionate. Article 22 GDPR is said to lack certainty on how and when the current safeguards are intended to apply in practice.
Despite its symbolic force in the GDPR ecosystem, the fate of Article 22 so far seems to be similar to that of its predecessor, Article 15 of the 1995 Data Protection Directive: a second-class data protection right “rarely enforced, poorly understood and easily circumvented”. It is undeniable that the UK government’s diagnosis of Article 22 GDPR is more accurate than we would like to concede.
Nonetheless, weakening the safeguards on the use of ADM systems seems to contradict the EU’s plans to regulate AI, in which controls on ADM systems using high-risk AI appear to be even tighter. Therefore, we propose to rewrite Article 22 GDPR once again and to go against the UK’s proposal. In the AI era, and regardless of the new regulations to come, we argue that Article 22 GDPR should remain in the data protection regulation. But how?
It is impossible to comply with what is not understood
The lack of clarity of this provision makes it difficult for controllers to comply with it. Article 22 GDPR is a long provision that contains two prohibitions – respectively, in paragraphs (1) and (4) – and a series of exceptions to both prohibitions in paragraphs (2) and (4). Let us look in more detail at this complex architecture, starting with the first prohibition on solely automated decision-making spelled out in Article 22(1) GDPR. First, is this a general prohibition on ADM or a right to object to ADM? Its wording reads like a right to object that needs to be actively exercised, but most of the legal literature sees it as a prohibition, including the European Data Protection Board (EDPB). This ambiguity in the wording needs to be remedied.
So, what does Article 22(1) want to prohibit? The text specifies: decisions based solely on automated processing, including profiling. If the decision may not be based solely on automated processing, we understand that it must also be based on human reasoning to escape the prohibition. Thus, understanding what is ‘based solely on automated processing’ necessarily involves understanding what kind of human intervention makes a decision not solely automated. The wording is again ambiguous here: why say ‘based solely on automated processing’ when you mean ‘based on automated processing without human intervention’?
Also, certain effects must be produced for the Article 22(1) prohibition to apply: legal effects or effects that similarly significantly affect the data subject. Is this just a tongue-twister to annoy European lawyers? We have sufficient evidence of the impact of ADM on data subjects. We must therefore be able to establish ex ante what kinds of ADM systems produce significant effects and, consequently, make clear to what kinds of decisions Article 22 GDPR applies. In our view, the EDPB could contribute new guidelines clarifying this issue. Ideas from the Parliament or the Commission to qualify high-risk AI according to the sector, use and purpose of the system may also prove useful for this task.
Pragmatic exceptions, prohibitive exceptions & pragmatic exceptions to a prohibition
Never say ‘no’ in data protection law: for pragmatic reasons the GDPR disregards the general prohibition in Article 22 and permits certain decisions based solely on automated processing. Article 22(2) GDPR indeed formulates three exceptions to the prohibition on solely automated processing activities. These are possible on the basis of contractual necessity (Article 22(2)(a) GDPR), explicit consent (Article 22(2)(c) GDPR), or European or domestic laws (Article 22(2)(b) GDPR).
Before we panic about these three large derogations from a sound prohibition, we should note that there are also exceptions to these exceptions (never say ‘yes’ in data protection law either): automated decisions based on these three exceptions are not welcome when they are based on the sensitive categories of personal data listed in Article 9(1) GDPR (e.g. data on health, sexual orientation or ethnic origin). This is the second prohibition in Article 22 GDPR, and one can find it towards the end, in the fourth paragraph of the provision. So, European pragmatism for the sake of contracts, consent and governments (see Article 22(2)) is tempered by the distinction between sensitive personal data and other, more banal personal data. Humans, not machines, should decide on our health, sexual and other data that is considered sensitive. The real privacy stuff is kept out of the reach of the gourmand machines by this second prohibition. But, as stated before, never say ‘no’ in data protection law. Again, there are pragmatic exceptions to the prohibitive exceptions on the pragmatic exceptions to the general prohibition: the Article 22(4) prohibition is lifted when automated decisions based on sensitive data are justified by explicit consent (Article 9(2)(a)) or ‘reasons of substantial public interest’ based on European or domestic laws (Article 9(2)(g)).
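To help trace this nested architecture of prohibitions, exceptions and counter-exceptions, the following minimal Python sketch may be useful. It is our own illustrative shorthand, not a legal test: the function, its parameter names and the string labels are assumptions of ours, and it adopts the EDPB’s reading of Article 22(1) as a prohibition rather than a right to object.

```python
from __future__ import annotations

def adm_permitted(solely_automated: bool,
                  significant_effects: bool,
                  exception: str | None,        # "contract" (22(2)(a)), "law" (22(2)(b)), "consent" (22(2)(c))
                  special_category_data: bool,  # does the decision rest on Article 9(1) data?
                  art9_basis: str | None,       # "explicit_consent" (9(2)(a)) or "public_interest" (9(2)(g))
                  suitable_safeguards: bool) -> bool:
    """Illustrative shorthand for the decision structure of Article 22 GDPR,
    reading paragraph 1 as a prohibition rather than a right to object."""
    # Article 22(1): the prohibition only covers decisions based solely on
    # automated processing that produce legal or similarly significant effects.
    if not (solely_automated and significant_effects):
        return True  # outside the scope of Article 22 altogether

    # Article 22(2): three pragmatic exceptions lift the general prohibition.
    if exception not in ("contract", "law", "consent"):
        return False

    # Article 22(4): a second prohibition when sensitive data are involved,
    # itself lifted by explicit consent or substantial public interest.
    if special_category_data and art9_basis not in ("explicit_consent", "public_interest"):
        return False

    # Safeguards are demanded in every remaining branch: by Article 22(3)
    # for exceptions (a) and (c), by the authorising law for (b), and by
    # Article 22(4) where sensitive data are used.
    return suitable_safeguards
```

Even in this compressed rendering, a controller must walk through four nested layers before knowing whether a fully automated decision is permitted.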
Why so many derogations in Article 22 GDPR?
Like many other GDPR provisions, Article 22(2)(b) is blatantly state-friendly: simple laws will do away with the two prohibitions, as long as they provide some justification (‘reasons of substantial public interest’). Even the specific prohibition on automated decision-making based on sensitive data can be neutralised!
The other two (more business-friendly) exceptions in Article 22(2)(a) and (c) GDPR are no less suspicious. Where does the exception based on the necessity of processing for the performance of a contract begin, and where does it end? For instance, is extensive automated profiling necessary to obtain a loan from a bank? Perhaps we should let consumers negotiate – and give their explicit consent to – whether they want contractual ADM or not. In any case, the exception based on consent (explicit or not) also needs to be assessed from a human rights standpoint. Why let individuals consent to procedures without human intervention? The role of consent in the context of AI needs to be reviewed, especially when it involves the processing of sensitive data.
There is a lot of window dressing in Article 22 GDPR, with its grand statements that any of the exceptions requires the data controller to implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests. For the exceptions in Article 22(2)(a) and (c), the safeguards are contained in Article 22(3): required is at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the automated decision.
For the Article 22(2)(b) exception, the GDPR is completely silent on what kind of safeguards can be considered ‘suitable’ for this exception to be invoked. This is even more confusing, and it has led Member States to adopt very different and even incompatible approaches when implementing Article 22(2)(b) in their national legislation. The minimum safeguards in Article 22(3) should be applicable to every exception in Article 22(2) and (4); Member States should therefore only be allowed to strengthen those minimum safeguards.
Strengthen its three pillars: transparency, contestability, accountability
From the very beginning – in the days of the 1995 Directive – the European legislator, faced with ADM, has tried to address two fears. Firstly, fears about data controllers abdicating their own responsibilities over the automated decisions they apply. To avoid this loss of control, it was held necessary to hold data controllers accountable for ADM and to keep them in control of the machines. Secondly, fears about citizens losing control over automated decisions that significantly affect them. In this regard, contestability on the part of data subjects was seen as the safeguard they need to influence ADM. Accountability and contestability are key to understanding the ratio legis of Article 22 GDPR, but so is transparency: to see whether data controllers remain in control of the decisions they make and whether data subjects can influence the decisions significantly affecting them, transparency is also required and might be regarded as a third pillar upholding Article 22 GDPR.
Regarding the latter, the GDPR contains specific information and access rights in Articles 13(2)(f), 14(2)(g) and 15(1)(h). The provisions are long and abundant, but one is struck by an omission: information and access concerning the ‘existence of automated decision-making, including profiling’ and ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’ are only granted for machine decisions without human intervention. Hence, for profiling not prohibited by Article 22(1), no rights to information on its existence, rationale and consequences are recognised.
Profiling activities are highly privacy-invasive, and we cannot systematically rely upon human agents to overcome or mitigate the concerns associated with ADM systems. Thus, those information and access rights should be granted for every kind of profiling, whether or not it is used for ADM purposes, and whether or not humans are involved in the ADM system. We need to shed some light on all those dark edges that evaluate our personality.
Strengthen its three safeguards: possibility to contest, to be explained and to see humans
The safeguards to protect us against machine decisions in Article 22 GDPR – a right to contest, a right to have decisions explained and a right to human intervention – build further on the information duties contained in the provisions mentioned in the previous section and are, as said, based on principles such as transparency, contestability and accountability.
Primarily, Article 22 needs to safeguard the right to contest machine decisions. This safeguard is the backbone of the possible suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests in Article 22(3) GDPR. Both the right to obtain a human review, from outside the decision-loop, and the right to express one’s point of view aim to produce a different decision from the original, fully automated one. However, this decision should not in any event be altered at the discretion of the person concerned, but only on grounds such as the inaccuracy, unlawfulness or unfairness set out in Article 5(1) GDPR. The set of safeguards therefore constitutes a due process right, in which the right to contest is the main tool with which the data subject can influence decisions that affect him or her.
The missing safeguard in Article 22(3) – i.e., the right to an explanation detailed in Recital 71 of the Preamble of the GDPR – is over-discussed in the legal literature. In our view, this right should be made explicit in the wording of Article 22 GDPR, not as an explanation, but as a justification (see the reworded Article 22(3) below). While the goal of an explanation is to make humans understand, the goal of a justification is to convince humans that the decision is appropriate. Otherwise, explanations will provide little beyond the already granted rights of information and access.
However, the data controller can avoid these Article 22(3) safeguards by introducing human intervention – a human in the decision-loop – who might be as biased as the ADM system. A dangerous game might open up, with controllers adding a ‘light’ human intervention to avoid the whole set-up of Article 22 GDPR. The provision needs to be strengthened at this point. If Article 22 GDPR does not allow data subjects to influence decisions that are not fully automated, it should at least be clear under what conditions the controller can take a decision outside its scope. We therefore consider that the human intervention introduced by the controller must be meaningful in order to avoid the Article 22(3) safeguards (see the rewording of Article 22(3) below). The requirement of meaningfulness thus needs to be added to the text of the GDPR. Moreover, the controller must be held accountable for it and must demonstrate that such human intervention is meaningful. This is not new: the EDPB guidelines state that human intervention should be meaningful to avoid the prohibition in Article 22(1) and that, as part of the data protection impact assessment (DPIA), the controller should identify and record the degree of any human involvement in the decision-making process and at what stage it takes place. We believe that DPIAs are a proper accountability tool to demonstrate that the controller is not using human decision-makers to abdicate its own responsibilities.
Rewrite Article 22 GDPR or let it die
The UK’s radical proposal places Art. 22 GDPR on the verge of extinction, but it is only a little push in the slow agony the provision is already suffering. Our radical proposal seeks to avoid the Darwinian fate that awaits this provision. May this creative rewriting exercise serve to revive our dear friend:
1. The data subject shall not be subject to a decision based on automated processing without meaningful human intervention, including profiling, which produces legal effects concerning him or her or a significant effect on him or her.
2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent.
3. The data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain meaningful human intervention on the part of the controller, to express his or her point of view, to obtain a justification of the decision reached after such assessment and to contest the decision.
4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
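Read against the sketch earlier in this post, the rewrite changes the shorthand in two places: the threshold in paragraph 1 becomes the absence of meaningful human intervention, and the paragraph 3 minimum safeguards (now including a justification of the decision) condition every exception, not only (a) and (c). Again, this is our own illustrative shorthand, not a legal test:

```python
from __future__ import annotations

def adm_permitted_rewritten(meaningful_human_intervention: bool,
                            significant_effects: bool,
                            exception: str | None,
                            special_category_data: bool,
                            art9_basis: str | None,
                            minimum_safeguards: bool) -> bool:
    """Illustrative shorthand for the rewritten Article 22 proposed above."""
    # Rewritten paragraph 1: the vague 'solely automated' test is replaced
    # by the clearer absence of meaningful human intervention.
    if meaningful_human_intervention or not significant_effects:
        return True  # outside the scope of the rewritten prohibition

    # Paragraph 2: the three exceptions are unchanged.
    if exception not in ("contract", "law", "consent"):
        return False

    # Paragraph 4: the second prohibition for sensitive data is unchanged.
    if special_category_data and art9_basis not in ("explicit_consent", "public_interest"):
        return False

    # Rewritten paragraph 3: the minimum safeguards (meaningful human
    # intervention, point of view, justification, contest) now apply to
    # every exception, not only (a) and (c).
    return minimum_safeguards
```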
Source: https://europeanlawblog.eu/