Partner Sylvie Gallage-Alwis discusses, in Law360, whether the EU Product Liability Directive needs to be adapted in light of the digital era and ever-growing technological advancement.
Sylvie’s article was published in Law360, 24 February 2020, and can be found here. It was also published in Droit & Technologies, 17 February 2020, and can be found here, and in Open Access Government, 11 March 2020, here. A version of this article has been published in Lawyer Monthly, 23 March 2020, here.
While the EU is working on the sustainability of household appliances and looking into ways to keep its legislative framework relevant in light of the changes affecting our society, the question is being asked whether the EU’s Product Liability Directive, enacted back in 1985, when self-driving cars and artificial intelligence were the stuff of science fiction, should be revised.
Nowadays, people routinely speak to virtual assistants, and self-driving cars are already on the streets as real-world testing continues apace. Everyday appliances such as fridges, heating systems and television sets are increasingly connected to the internet. Such advances mean that EU product liability law has to keep pace with technology.
On 22 January 2020, the European Parliament Committee on Internal Market and Consumer Protection held a public hearing to discuss such issues with a range of stakeholders. The hearing, entitled “Product Liability Directive: protecting consumers in the Digital Single Market”, heard a variety of views as to the best way forward.
Stakeholders set out their perspectives as to whether Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (the “Product Liability Directive”) is still fit for purpose in the 21st century. Some even questioned whether the laws relating to liability for defective products are still relevant in light of new technologies such as artificial intelligence (AI), smart self-learning products and the advent of the Internet of Things (IoT).
The European Commission’s representatives emphasised that the issue is vital to “reinforce EU’s industrial capacity to allow it to be technologically sovereign” as well as being able to offer a competitive advantage by facilitating innovation, and so enabling Europe to compete with China and the United States.
The stakeholders sharing their views included CLEPA, the European Association of Automotive Suppliers, on the industry side, and BEUC, on the consumer side, along with members of the Commission and of the expert group on liability and new technologies.
Only the Orgalim group – which represents European technology industries – argued that the Product Liability Directive does not need to be modified at all. In its view, amendments are unnecessary since the directive is technology-neutral and already strikes the right balance between the obligations of consumers and producers, thereby creating legal certainty.
However, all other stakeholders favoured the EU adapting the Product Liability Directive. The consensus was that the key amendments should include:
- The definition of “product” should be updated, since there is a growing interaction between physical products and digital services.
- The definition of “producer” should be clarified in order to determine who should be the producer in the case of an update, upgrade or modification. For example, there was a call by CLEPA to have those who make modifications to the product be considered producers.
The consensus was also that the EU should consider the following questions when amending the Product Liability Directive:
- Should the type of damage to be compensated be expanded to include damage to data or digital assets?
- Should strict liability apply, with the suggestion that all manufacturers involved in the product be jointly liable?
- Should the burden of proof be reversed, so that the onus lies with producers rather than consumers?
- Should changes be made to the “development risk defence”, given that new innovations appear in the marketplace very quickly?
- Should products be graded, should there be a sectoral approach, or should regulation remain product-neutral? The Commission expressed the view that a sectoral approach does not seem appropriate.
Members of the Expert Group on Liability and New Technologies had already worked extensively on issues such as how liability can apply to AI. In November 2019, the group published a detailed report, “Liability for Artificial Intelligence and other emerging digital technologies”, assessing the existing liability regimes in the light of emerging digital technologies. Key takeaways from the report were addressed during the public hearing of 22 January 2020, where the discussion revolved around strict liability, the burden of proof and the development risk defence.
The resolution dated 12 February 2020, entitled “Automated decision-making processes: Ensuring consumer protection, and free movement of goods and services”, is another important development. The resolution notes that, while the committee “welcomes the potential of automated decision-making to deliver innovative and improved services to consumers, including new digital services such as virtual assistants and chatbots”, consumers interacting with automated decision-making systems should “be properly informed about how it functions, about how to reach a human with decision-making powers, and about how the system’s decisions can be checked and corrected”.
The resolution goes on to urge the Commission to monitor the implementation of the Better Enforcement Directive’s rules requiring traders to inform consumers when prices “have been personalised on the basis of automated decision-making and profiling of consumer behaviour”.
Another important issue to be addressed is the possibility of AI and automatic decision making being used to “discriminate against consumers based on their nationality, place of residence or temporary location”.
The committee also specifically recognises that “the emergence of products with automated decision-making capabilities presents new challenges, since such products may evolve and act in ways not envisaged when first placed on the market”. It goes on to urge the Commission to bring forward proposals to adapt the EU’s product safety rules across a broad range of areas, including the Machinery Directive, the Toy Safety Directive, the Radio Equipment Directive and the Low Voltage Directive.
The resolution further “stresses the need for a risk-based approach to regulation, in light of the varied nature and complexity of the challenges created by different types and applications of AI and automated decision-making systems” and calls on the Commission to develop an:
“assessment scheme for AI and automated decision-making to ensure a consistent approach to the enforcement of product safety legislation in the internal market; and emphasises that Member States must develop harmonised risk-management strategies for AI in the context of their national market surveillance strategies.”
In addition, in the banking sector, the committee calls for the supervision of automated decision-making systems by professionals where the public interest is at stake. The resolution says that automated decision-making systems should use high-quality, unbiased data sets and “explainable and unbiased algorithms”.
This text already gives a good idea of the general shape of the EU regulatory landscape that is likely to emerge to meet the challenges posed by AI, automated decision making, automation, robotics, self-driving cars and our increasingly online world.
On 19 February 2020, the Commission published a white paper on Artificial Intelligence – A European approach to excellence and trust. This white paper calls for a “broad consultation of Member States civil society, industry and academics, of concrete proposals for a European approach to AI”, again emphasising the need to adapt the existing regulatory framework.
The Commission’s approach was no doubt informed by the expert report it commissioned, entitled “Liability for Artificial Intelligence and other emerging digital technologies”. It stated that “the most important findings of this report on how liability regimes should be designed – and, where necessary, changed” include that:
“a person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.”
This is interesting, in that it means that the operator of a self-driving car, or drone, would be made primarily and strictly liable for any resulting damage. The operator would also be “required to abide by duties to properly select, operate, monitor and maintain the technology in use and – failing that – should be liable for breach of such duties if at fault.”
Although this would appear to shield manufacturers from primary liability, the report goes on to state that:
“Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.”
Therefore, while the operator would be primarily liable, if for example a self-driving car were to injure a third party due to a defect, the manufacturer could ultimately be held liable.
The expert report goes on to state that a future regime should not give “autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.” In short, if something goes wrong, you won’t be able to blame the robot.
No doubt there will be a great deal more debate before the detail of revisions to the EU’s product liability regime finally emerge. Yet, whatever the precise form of the updated product liability regulation, it is already clear that the EU regime will seek to require transparency, along with human supervision of AI systems and, ultimately, human accountability for the actions of AI systems.
Other important proposed changes, such as those to the definitions of key concepts like “product” and “producer”, as well as to the burden of proof, may ultimately have wider impacts on product liability law across the continent.
Even though this was not addressed during the debates, there is no doubt that the position of the European Court of Justice (ECJ), as expressed in its case law, will also have to be taken into account when finalising the new piece of legislation. In recent years, the ECJ has rendered rather plaintiff-friendly case law, implicitly reversing the burden of proof on a number of occasions and weakening the causal link required in product liability matters. This has led to manufacturers having to demonstrate that their products were compliant, rather than plaintiffs having to demonstrate the opposite.
This case law, if unchanged, could lead to a system in which the manufacturer’s liability would be at stake each time it is believed that the technology attached to the product could not really be controlled by the user. This is a dangerous path to take: where this position exists in some EU Member States, it has led to a multiplication of litigation and a fear of innovating on the part of manufacturers, rather than to increased consumer protection or a broader product offering. To be monitored…