Liability for Artificial Intelligence

A new proposal to address liability for harm caused by AI

The European Commission proposed new rules[1] on 28 September 2022 that would require developers of artificial intelligence-powered software and products to compensate victims harmed by their creations.

In 2020, the European Parliament had urged[2] the Commission to adopt rules ensuring that victims of harmful AI can obtain compensation, notably requesting that creators, providers, and users of high-risk autonomous AI could be held strictly liable for accidental harm. However, the EU executive opted for a “pragmatic” approach that is weaker than such a strict liability regime, arguing that the facts did not “justify” it.

AI Liability Directive

The new AI Liability Directive would make it simpler to claim compensation when a person or organization is injured or suffers damage as a result of AI-powered drones, robots, or software such as automated hiring algorithms.

The proposed legislation is the most recent attempt by European officials to govern AI and establish a global standard for controlling the burgeoning technology. It comes at a time when the EU is in the midst of negotiating the AI Act,[3] the world’s first bill to restrict high-risk uses of AI, such as facial recognition, “social score” systems, and AI-enhanced software for immigration and social benefits.

Under the proposed new AI Liability Directive, the presumption of causality will only apply if claimants satisfy three core conditions: first, that the fault of an AI system provider or user has been demonstrated, or at least presumed by a court; second, that it can be considered reasonably likely, based on the circumstances of the case, that the fault influenced the output produced by the AI system or the failure of the AI system to produce an output; and third, that the claimant has demonstrated that the output produced by the AI system (or the failure of the AI system to produce an output) gave rise to the damage.

Some of the strategies that DeepMind’s AlphaGo used to defeat a human champion were ones that expert Go players had never considered. This exemplifies both the benefits and challenges presented by artificial intelligence: as humans, we may not be able to comprehend the reasoning behind an AI’s decision or how one thing led to another. This is one of the reasons why AI liability laws may need to make a policy judgment based on an assumption. Therefore, subject to certain conditions and in limited circumstances, national courts would be required to presume, for the purposes of applying liability rules to a claim for damages, that the output produced by the AI system (or the failure of the AI system to produce an output) was caused by, for instance, the AI provider’s fault.

For instance, claimants would need to demonstrate that providers or users of high-risk AI systems had not complied with the obligations imposed by the AI Act. For providers, these responsibilities include training and testing of data sets, system oversight, as well as system precision and resiliency. For users, who, under the AI Act, may be an organization or a consumer, the obligations include monitoring or utilizing the AI system in accordance with the accompanying instructions.

To aid claimants in establishing fault, the proposed AI Liability Directive permits courts to order providers or users of high-risk AI systems to preserve and disclose evidence pertaining to those systems. The proposal further encourages disclosure by disapplying the presumption of causality where a provider or user can demonstrate that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link. In the event of a malfunction in a conventional product, it may be readily apparent what went wrong; this may not be the case with AI. A court can therefore compel a provider of a high-risk AI system (as defined by the EU AI Act) to disclose relevant and necessary evidence concerning its product under certain conditions. In limited instances, third parties may also submit such requests.

Product Liability Directive

The Commission proposed the AI Liability Directive alongside a separate, but related, proposal for a new Product Liability Directive.[4]

Under the proposed Product Liability Directive, AI systems and AI-enabled goods would be classified as “products” and consequently subject to the directive’s liability framework. As with any other product, compensation is available when defective AI causes harm, and the affected party is not required to prove the manufacturer’s negligence. In addition, producers may be held liable for changes they make to products they have already placed on the market (e.g. when these changes are triggered by software updates or machine learning). In some instances, a person who modifies a product that has been placed on the market or put into service may be regarded as the product’s manufacturer and consequently liable for any resulting damage. Other parties, such as importers, authorised representatives, fulfilment service providers, and distributors, may be held liable for defective products made outside the EU or when the producer cannot be identified.

In addition, the proposal makes it clear that not only hardware manufacturers but also providers of software and digital services, such as a navigation service in an autonomous vehicle, may be held liable.

In addition, the burden of proof will be eased where the court determines that a claimant would face excessive difficulty in proving a defect, or the causal link between the defect and the damage, owing to technical or scientific complexity (though the defendant may contest this). This provision links the planned Product Liability Directive with the AI Liability Directive by alleviating the burden of proof.

Finally, claim thresholds and compensation ceilings are eliminated. Under the existing rules, for a claim to be admissible, the damage caused by a defective product must amount to at least EUR 500, and Member States may cap the producer’s total liability for damage resulting from death or personal injury (at no less than EUR 70 million).

The need for further improvement

One of the major critiques leveled against the proposed AILD is that, even with the disclosure of relevant information, it can be exceedingly difficult to prove fault in complex systems, especially given that certain AI systems act autonomously and have functions so complex that their outputs cannot easily be explained. This challenge relates to the ‘black box’ algorithm problem that arises when the complexity of an AI system makes it difficult to comprehend how a given input leads to a certain output.

While the proposed AILD cites autonomy as a barrier to understanding AI systems in its Recitals 3, 27, and 28, it offers little to aid injured parties in establishing the presumption of causality. An injured party still bears a heavy burden of proof under the AILD, from providing evidence in support of the plausibility of the claim (Article 3(1)) to identifying non-compliance with AIA requirements (Article 4(1)(a)) and demonstrating a connection between the action of the AI system and the damage sustained (Article 4(1)(c)). It may also be difficult to demonstrate non-compliance with the proposed AIA’s rules; for example, it may be hard to show that the datasets used in the creation of an AI system, or the accuracy levels of a given system, are insufficient. The proposed AILD therefore provides, at best, very limited procedural relief for aggrieved parties. More must be done to improve the effectiveness of the recourse available to victims of AI-caused harm.

Using the updated PLD’s defect-based remedy, which requires no proof of wrongdoing, is one strategy for addressing this issue. However, Article 4(6) of the proposed PLD stipulates that compensation may only be sought for material damage. This means that AI systems used, for example, in credit scoring by a financial institution, which could harm people in a non-material way, cannot be challenged on the basis of being defective; to obtain compensation, the injured party would need to prove fault under the AILD.

The Deputy Director General of the European Consumer Organisation (BEUC), Ursula Pachl, has already voiced this concern when discussing the draft directives.[5] BEUC is an umbrella consumer group that unites 46 independent consumer organisations from 32 countries and has advocated for years for an update to EU liability rules to account for the growing applications of artificial intelligence and to ensure that consumer protection laws are not outpaced. In its view, the EU’s proposed policy package, consisting of modifications to the existing Product Liability Directive (PLD) so that it covers software and AI systems (among other changes) and a new AI Liability Directive (AILD) aimed at addressing a broader range of potential harms stemming from automation, falls short of the more comprehensive reform it advocated for. In contrast to traditional product liability rules, if a consumer is harmed by an AI service provider, they will be required to prove that the provider was at fault. Given the opaqueness and complexity of AI systems, this will in practice make it extremely difficult, if not impossible, for consumers to exercise their right to compensation.

In this context, the Commission’s approach is more advantageous for developers, some of whom opposed stringent liability for non-material damages during the proposal development process.[6] In addition to this barrier, the PLD requires claimants to demonstrate the likelihood that injury was caused by the system’s failure in order to utilize the presumption of defectiveness (Article 9). This raises questions concerning the likelihood criterion that a claimant must achieve, particularly when a given system is difficult to comprehend.

A month ago, the U.K.’s data protection watchdog issued a blanket warning[7] against pseudoscientific AI systems that claim to perform ‘emotional analysis’, urging that such technology not be used for anything other than pure entertainment. This is illustrative of the types of AI-driven harms and risks that may be driving demands for robust liability protections. In the public sector, a Dutch court ruled[8] in 2020 that an algorithmic assessment of social security claimants’ risk of welfare fraud violated human rights law. In recent years, the United Nations has also warned[9] about the hazards that automating public service delivery poses to human rights. In addition, the use of ‘black box’ AI systems by US courts to make sentencing decisions, which opaquely bake in bias and discrimination, has drawn criticism for years.

In conclusion, the Commission’s proposed AI liability directives largely facilitate the gathering of information about AI systems in order to prove fault. In a sense, this codifies the transparency requirements for AI systems that are also included in the proposed AIA. In doing so, however, the proposed liability directives saddle claimants with difficult obstacles to overcome, such as having to establish fault or the presumptions of defectiveness and causality, as well as the connection between the harm and the defect or fault, as set out in the AILD and PLD. The Commission’s proposals are at the beginning of the legislative process and will likely undergo a number of adjustments before their final adoption. Moreover, possible changes to the proposed AIA could also affect the operation of the proposed liability directives, as they depend on the AIA’s definitions and standards. The legislative measures to establish AI liability are a significant step in the right direction towards successfully regulating AI. However, it remains essential to stay attentive to the intricacies of AI systems, especially when the objective is to protect injured parties.

Sources

  1. https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807
  2. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html
  3. https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0106(COD)&l=en
  4. https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-7b6b23220bcd_en
  5. https://www.euractiv.com/section/digital/podcast/the-new-liability-rules-for-ai/
  6. https://www.ccianet.org/wp-content/uploads/2022/08/2022.08.24-Joint-Industry-Letter-on-the-PLD-and-AI-Directive.pdf
  7. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/10/immature-biometric-technologies-could-be-discriminating-against-people-says-ico-in-warning-to-organisations/
  8. https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878
  9. https://www.ohchr.org/en/statements/2018/11/statement-visit-united-kingdom-professor-philip-alston-united-nations-special?LangID=E&NewsID=23881

The E-Evidence proposal and the involvement of private actors in public enforcement

Background Analysis

Since its incorporation into the Tampere Conclusions, the issue of the admissibility of evidence obtained in cross-border criminal proceedings in the EU has been on the table. Article 82(2) of the Treaty on the Functioning of the European Union (TFEU)[1] empowers the European Parliament and the Council to establish minimum rules on the mutual admissibility of evidence. Common minimum standards on how evidence is to be gathered and transferred – and also on a limited set of exclusionary rules – are required to protect fundamental rights and facilitate judicial cooperation at the EU level, especially given that e-evidence introduces a cross-border element into virtually every criminal investigation and procedure. Due to the rapid digitalization of private and public spheres, as well as professional and non-professional activities, the importance of a common set of evidence standards, and in particular e-evidence standards, has increased. Furthermore, the repercussions of COVID-19 have had a multiplier effect on the impact of e-evidence at the EU level, as the pandemic has caused a significant shift towards digitalization and a notable move towards the collection of e-data for security purposes (mainly geolocation data, potentially used to trace the contacts of infected persons).

Recent legislative initiatives (Directive 2014/41/EU on the European Investigation Order (EIO)[2] in criminal proceedings and Council Regulation (EU) 2017/1939[3] implementing enhanced cooperation on the establishment of the European Public Prosecutor’s Office, EPPO) have addressed this issue in part. However, the EIO contains no rules governing the admissibility or exclusion of evidence: the admissibility of evidence gathered in another country will depend on how it was obtained and on compliance with any applicable restrictions. Moreover, Article 37 of the EPPO Regulation[4] essentially creates an inclusionary rule, leaving all possible grounds for the exclusion of evidence unaddressed.

As a result, there is no uniform policy among EU member states. The diversity of solutions in each Member State impedes the creation of what has been termed a “zone of free movement of criminal evidence” and may severely affect the rights of defendants. In the past, Member States may have concluded that supranational standards on the admissibility of evidence were not strictly required and that, as a result, the principles of subsidiarity and proportionality for EU legislation would not be satisfied. However, the situation has changed dramatically over the past few decades as a result of obvious shifts in the modern “digital society.”

The E-Evidence Package

The European Commission proposed the “E-Evidence” legislative package (E-Evidence)[5] on 17 April 2018 to overcome the widely discussed issues associated with the traditional instruments for cross-border gathering of electronic evidence. The main innovation of this proposal consists of allowing law enforcement in one member state to directly compel service providers in another member state to produce or preserve data.[6] Internet service providers (ISPs) already play a significant role as gatekeepers of the data they possess, particularly in the context of voluntary cooperation: due to limited enforcement options, the frequently global context of data collection, and the economic clout of large ISPs, it is ultimately their decision whether or not to hand data over to the authorities. While the final text of the European Production Order (EPO) Regulation is still being negotiated, I argue in this post that the proposal for the E-Evidence Regulation (in all of its available versions) does not solve the problem of such “privatisation” of enforcement in the context of e-evidence collection, and I explain why this is problematic.[7]

The E-Evidence legislation has been in the legislative process for some time. While the Council of the EU agreed its general approach very swiftly, on 7 December 2018,[8] the European Parliament’s (EP) extensive deliberations lasted over two years. On 11 November 2020, the EP delivered its Report on the draft Regulation, which differed significantly in some respects from the Council’s general approach, which was in turn broadly comparable to the Commission’s proposal.[9]

The notification system, which establishes the criteria under which the authority originating the access request must notify the authorities in the executing member states, is one of the most contentious aspects.

For the governments represented in the EU Council, making this process overly burdensome would negate the purpose of the rules, while MEPs and civil society wanted safeguards for protected groups such as journalists, lawyers, and political activists. The member states succeeded in securing the so-called “residence criterion”: if the person whose data is sought resides in the member state issuing the order, there is no need to notify the authorities of the executing country, regardless of where the data is stored. Nor is notification necessary if the requested information can only be used to identify a person.

In exchange, MEPs secured a suspensive effect for the notification. Where a law enforcement authority requests content or traffic data, the notified member state will have ten days, or eight hours in emergency cases, to object. The suspensive effect means that the service provider must preserve the requested data but may not disclose it until the deadline has passed without an objection being raised.

The executing member state may object to the order if it violates fundamental rights or immunities under its legal framework, such as press freedom. The legislators also adopted the notion of dual criminality, which requires that the prosecuted conduct also constitute a crime in the country of execution.

Special safeguards against alleged infringements of fundamental rights have been added for orders issued by member states whose rule of law has been officially called into question through the activation of EU procedures, as is currently the case for Hungary and Poland.

A political problem that is yet to be resolved

Unresolved from a political standpoint is whether the executing member states ‘may’ or ‘shall’ oppose the order if one or more reasons for rejection are discovered. The Parliament favors the latter formulation because legislators want to guarantee that these precautions are applied effectively.

In accordance with the GDPR, the EU’s data protection regulation, the order must be sent to the data controller, the entity that determines why and how the data is processed. The authorities will only turn directly to the data processor, the organization that processes the data on behalf of the controller, in exceptional circumstances. The EU co-legislators only agreed in principle on the establishment of a common European exchange system, an EU-wide platform for issuing orders that would guarantee the confidentiality and authenticity of the orders vis-à-vis service providers.

While the interinstitutional meeting, or trilogue in jargon, resulted in significant progress on a number of key issues, according to two knowledgeable sources, the gaps between the co-legislators may still be too substantial to be resolved at the technical level. The French negotiators were under significant political pressure to find an agreement before the end of their Presidency on Friday, and on Thursday they even requested a new political trilogue. However, the European Parliament could not meet such a short deadline.

Private actors and potential conflicts of interest

In light of the preceding, the E-Evidence package will establish a new relationship between law enforcement agencies and ISPs, regardless of the establishment of the mandatory notification system. ISPs are expected to become extended arms of law enforcement, replacing national authorities in the tasks of receiving, complying with, and reviewing orders.[10] They will inevitably become more like public authorities than private actors, while lacking the characteristics of public authorities, such as accountability, impartiality, and independence.

This shift of public responsibilities to ISPs, as envisaged by the E-Evidence package, is not novel in European law, but rather conforms to a pattern that has intensified over the past few years.[11] Indeed, private players’ participation in crime prevention has increased. This tendency is exemplified by the anti-money laundering (AML) regulatory framework:[12] private actors, particularly banks and financial institutions, are required to put risk prevention measures in place and report to competent authorities in order to prevent money laundering or terrorism financing. In this sense, the E-Evidence proposal codifies a quantum leap in the role of private actors: not only are they involved in crime prevention, but they are also required to play an active (proactive) role in enforcement by directly responding to requests from a law enforcement authority and evaluating the validity and legitimacy of those requests.[13]

This role poses various questions. ISPs, as private actors, are entities that are profit-driven and answerable to their owners or stakeholders. These traits have (at least) a double bearing on their ability to fulfill this public function. First, ISPs will make decisions based on their commercial interests. In fact, unlike public actors, when ISPs must choose between competing ideals, they do so at the risk of punishment for noncompliance or reputational damage, which may have a direct impact on their financial interests. Moreover, even if private actors present themselves as acting for the greater good, they will only engage in this manner if it serves their financial interests. The commercial reasoning also influences the accountability and duty of ISPs. A democratic system of accountability controls value judgments in the public realm; private firms (i.e. ISPs) are answerable to their owners first and foremost.[14]

Lastly, additional practical issues may arise from the exercise of such powers, given the potential for ISPs to abuse their authority. For instance, what about ISP personnel who are offered bribes to influence the decisions they execute? Such corruption no longer fits neatly within the private sphere, as it is not merely a breach of a duty owed to the company; it more closely resembles public corruption, yet it is not covered by the applicable rules.[15]

Conclusion

There appears to be a clear need for cooperation between law enforcement agencies and ISPs, given that the latter possess information that may be crucial to criminal investigations. This new interaction between public bodies and commercial players cannot be governed by the current regulatory framework. Consequently, legislative intervention is required, and the E-Evidence package has the potential to remove one of the most significant barriers facing contemporary criminal investigations. However, a more complete framework is required to ensure that the rights of affected individuals are adequately protected and that their fate is not contingent on the commercial interests of private enterprises.

The issue of the public duties of ISPs is not confined to the collection of electronic evidence. Similar issues and arguments can be raised in relation to online content moderation and the discussion around the Digital Services Act (DSA).[16] The difficulties in negotiating both legislative proposals (E-Evidence and the DSA) highlight how hard it is to regulate a domain in which private players wield so much effective enforcement capacity and de facto adjudicative authority. To guarantee the fairness of the procedures and the proper protection of the fundamental rights of the affected parties, a precise set of boundaries is required. If shared adjudication is to be accepted, a significantly more robust structure must be established to protect the rights of those affected.

  1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A12008E082
  2. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32014L0041
  3. https://eur-lex.europa.eu/eli/reg/2017/1939/oj
  4. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32017R1939
  5. https://ec.europa.eu/info/policies/justice-and-fundamental-rights/criminal-justice/e-evidence-cross-border-access-electronic-evidence_en
  6. https://europeanlawblog.eu/2018/10/12/the-european-commissions-e-evidence-proposal-toward-an-eu-wide-obligation-for-service-providers-to-cooperate-with-law-enforcement/
  7. https://journals.sagepub.com/doi/pdf/10.1177/1023263X18792240
  8. https://data.consilium.europa.eu/doc/document/ST-15020-2018-INIT/en/pdf
  9. https://www.europarl.europa.eu/doceo/document/A-9-2020-0256_EN.html#_section1
  10. https://www.sciencedirect.com/science/article/pii/S026736492100087X
  11. https://www.uu.nl/sites/default/files/rebo-renforce-PRIVATE%20REGULATION%20AND%20ENFORCEMENT%20IN%20THE%20EU-Introduction.pdf
  12. https://finance.ec.europa.eu/financial-crime/eu-context-anti-money-laundering-and-countering-financing-terrorism_en
  13. https://journals.sagepub.com/doi/full/10.1177/2032284420919802
  14. https://academic.oup.com/edited-volume/28191/chapter-abstract/213102034?redirectedFrom=fulltext
  15. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32003F0568&from=EN
  16. https://www.sciencedirect.com/science/article/pii/S026736492100087X

Data Governance after GDPR and the protection of personal data


The Structure of Data Governance in Enterprises

Data governance refers to all the organisational structures and procedures put in place within a company to control the collection and use of data. According to a study conducted by Reach Five and Opinion Way, 78% of French companies collect data to personalise the customer experience. However, simply collecting data is not enough to improve competitiveness: companies need to learn how to use this data in an optimal way. Data collection is also subject to restrictions, such as respect for users’ privacy. In their data governance processes, companies therefore need to take into account the limitations imposed by both national and European legislation.

Personal data refers to any information that makes it possible to identify a natural person. France was a forerunner in regulating the processing of its citizens’ data: as early as 1978, it introduced legislation to protect users, even though at that time the Internet was unknown to the general public. The French law of 6 January 1978 established the principle of freedom to create nominative files and to process data by computer, but this freedom has its limits: the collection of data must respect the principles of fairness and transparency. This means that companies are obliged to inform the persons concerned of the compulsory or optional nature of their replies, of the natural or legal persons to whom those replies are addressed, and of the consequences of those replies. However, if the absence of a response leads to an inability to access the proposed service, can we still consider that the user has a choice in the disclosure of his or her data?

One of the fundamental notions of this law is the right to object to, and to rectify, the information collected. This issue has been the subject of litigation, and the courts have sought to enforce the rule. In a judgment of 14 March 2006, the criminal chamber of the French Court of Cassation held that: « It is a collection of personal data to identify electronic addresses and to use them, even without registering them in a file, to send electronic messages to their holders. It is unfair to collect, without their knowledge, the personal e-mail addresses of natural persons on the public space of the Internet, as this process impedes their right of opposition. » It can be seen that data is not treated as a commodity that can be exchanged, but rather as the property of an individual, who must give his or her consent to, and be informed of, its use.

The 1995 Directive and the GDPR

In reaching this solution, the judges relied on the Directive of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, which led to the amendment of the law of 6 January 1978. However, the aim of this European legislation remained the same as that of the earlier French legislation: to regulate data flows and protect users’ information.

To comply with these rules, companies must implement a clear and precise data collection policy. First, it is important to consider the methodology to be adopted in the data collection process. This is essential to ensure compliance with the regulations and effective use of the data. To this end, a data management plan should be drawn up to define the data collection methods and organisational systems, as well as the legal and ethical framework surrounding this information: how will the data be shared? How will you protect the identity of your users?

In addition, it is necessary to define precisely how the data will be stored, in order to put in place a security system to prevent data leaks. As a company, you need to have systems in place to protect against breaches that could lead to the disclosure of user information, and to define how you will react if such a breach occurs. Anticipating the risks and your response to them is paramount: knowing that you are prepared in case of an incident gives users confidence.

Finally, it is imperative that you, as a company, ensure the quality of your data. How can you ensure that your data is reliable? This control is achieved by implementing monitoring and processing methods. Poor-quality or badly structured data is a security risk, because it becomes more difficult to determine which data is at risk and what the level of risk actually is. How can you monitor and determine which data is at risk? Implementing data governance tools is a necessity in order to manage data and to identify the areas at risk, as the simple sketch below illustrates.
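As a purely illustrative sketch (not taken from any regulation, study, or vendor tool mentioned in this article), the following Python snippet shows how a very basic data-governance check might flag incomplete records and fields that look like personal data; the field names and record structure are hypothetical assumptions.

    # Illustrative sketch only: a minimal data-quality and risk-tagging pass
    # over customer records stored as a list of dicts. Field names are
    # hypothetical assumptions, not a real standard or legal requirement.

    PERSONAL_FIELDS = {"email", "phone", "address", "date_of_birth"}

    def audit_records(records):
        """Flag missing values (quality issue) and personal-data fields (risk area)."""
        report = []
        for index, record in enumerate(records):
            missing = [key for key, value in record.items() if value in (None, "")]
            personal = sorted(PERSONAL_FIELDS & record.keys())
            report.append({
                "record": index,
                "missing_fields": missing,         # poor quality: risk harder to assess
                "personal_data_fields": personal,  # personal data: needs legal basis and protection
            })
        return report

    if __name__ == "__main__":
        sample = [
            {"email": "alice@example.com", "phone": "", "consent": True},
            {"email": None, "address": "1 rue de la Paix", "consent": False},
        ]
        for row in audit_records(sample):
            print(row)

In practice, such checks would be only one small element of a wider data governance programme; the point is simply that identifying incomplete records and fields containing personal data is a prerequisite for assessing where the legal and security risks lie.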

Some states did not regulate data right away and waited for European intervention before putting rules in place, such as Luxembourg, which only adopted legislation in 2005, in order to transpose European Directive 2002/58/EC on privacy and electronic communications. Subsequently, two laws were enacted on 1 August 2018: the Act on the organisation of the National Commission for Data Protection and on the general data protection regime, and the law on the protection of individuals with regard to the processing of personal data in criminal matters and in matters of national security. Ultimately, however, it can be seen that this regulation is essentially derived from European rules: it is mainly these that provide the framework for data protection.


Facebook’s data protection problems

Today, good management and protection of user data is fundamental to a company’s image. The tech giant Facebook is proof of this: in April 2021, the data of 533 million Facebook users was leaked. Facebook stated that the data came from an illegal collection exploiting a security flaw discovered and fixed in 2019. This case did not improve the giant’s image in terms of data protection, and it was not the first time Facebook had faced a disclosure of its users’ information. In 2018, in the Cambridge Analytica affair, the UK and US press revealed a massive misuse of users’ personal data for political purposes. This case illustrates the extent to which individuals’ personal information can play a role in shaping behaviour.

Unfortunately, providing personal information is nowadays practically indispensable when you want to use the Internet, but how can you protect yourself as a user? You have to be vigilant. In the case of Facebook, users were made aware of the data leak, but in many situations individuals do not know that their data has been disclosed. So when you receive an SMS or an email, you should check who the sender is, and if the message asks you to log in to your personal space, never click on the link directly but type the address of the site into your browser yourself.


Introduction of GDPR in Data Governance

The European Union has taken action to ensure that the data of users of internet platforms is protected, via Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, which repealed the 1995 Data Protection Directive. First of all, the GDPR places an emphasis on consent and transparency; these two principles are at the heart of the data protection rules: ‘The principle of fair and transparent processing requires that the data subject be informed of the existence of the processing operation and its purposes’. It is on this basis that companies must inform users about how their data will be processed: no operation can be carried out without the consent of the owner of the data. Consent must be clear and unambiguous, although the question arises as to who must prove that it has been given.

The GDPR also grants new rights: the right to data portability means that it is possible to recover one’s data and transfer it to a third party. The aim here is to give people back control over their data, and to partially compensate for the asymmetry between the data controller and the data subject.

For the first time, the European Union has taken specific measures for minors under the age of 16: the child must be able to understand the information on data processing and the consent of those with parental authority must be obtained.

This regulation offers ever greater guarantees to users, notably a simplification of procedures in the event of harm, including the introduction of class actions. In addition, the GDPR encourages codes of conduct to ensure the proper application of the regulation; one such code requires cloud computing providers in Europe to put in place physical means of safeguarding and processing data on European territory. Microsoft has taken a public position: Europeans’ data will remain within European territory.

The broad scope of the data protection regulation became apparent during the Covid-19 crisis, when the French CNIL had to intervene to remind employers of their obligations regarding data collection. The sensitive nature of data relating to a person’s state of health justifies the special protection afforded to it: but how can respect for privacy be reconciled with personal safety? In principle, the CNIL states that “the employer does not have to organise the collection of health data from all employees“. The employer is only allowed to take individual action against an employee if the employee himself reports that he has been exposed, or has exposed some of his colleagues, to the virus.

The GDPR has sought to address this issue more comprehensively by providing two exceptions that allow the disclosure of an individual’s medical data:

  • Employees self-report their situation
  • The need for a health professional to process this data for the purposes of preventive or occupational medicine, the assessment of the worker’s working capacity, medical diagnosis, etc.

The Luxembourgian position

Like the French authorities, the Luxembourg National Commission for Data Protection (CNPD) has intervened, notably by issuing opinions on draft laws concerning measures to combat the Covid-19 pandemic, including its opinion on proposed law n°7808 on the Covid-19 screening strategy in structures for vulnerable persons and in support and care networks.

In that opinion, the CNPD states that the processing of data carried out in the context of proposed law no. 7808, which provides for the obligation to carry out Covid-19 screening tests for external service providers and visitors to certain structures, must “rely on one of the lawfulness bases listed at Article 6 of the GDPR as well as meeting one of the conditions referred to in Article 9, paragraph (2), of the GDPR insofar as data relating to the health of data subjects may be processed”.

Moreover, the CNPD’s reflection is interesting because it raises issues that are not strictly related to data protection but that will have to be addressed: “The CNPD wonders, in terms of labour law, about the consequences of a refusal by an employee or an external service provider to submit to such obligations. Will the employee have to work at another job? What will be the consequences for an external service provider when the organisation is not its employer?”

The CNPD concludes by stating that it cannot comment further on the data protection issues because “the text under opinion would not meet the requirements of clarity, precision and predictability that a legal text must meet“. This response demonstrates the importance of this institution, and of supervisory institutions in general: it is thanks to the CNPD that the legislator was able to realise that the draft did not meet the criteria of clarity and intelligibility of the law required by European texts.

We can see that the protection of our data, and the legislation governing it, is a very broad area. Regulation will have to continue to adapt as new technologies evolve. Companies need to check that their data-processing policies comply with current legislation, and users need to be vigilant about how they disclose their personal information.