AI in Manufacturing: Who is to Blame for Mistakes?

10 January 2025 by Naomi Cramer
In this article, we explore how AI is being used by manufacturers, consider the question of who should be liable for AI-driven mistakes at a production level and explore strategies manufacturers can adopt to mitigate legal risks.

Use of AI at Production Level

Artificial Intelligence has quickly become a transformative force across industries, including manufacturing. MakeNZ’s recent report, Future Factories Powered by AI, found that AI adoption in manufacturing is growing and is expected to increase further. Nina Gryf, Senior Policy Advisor at MakeNZ, states:

“AI and automation are driving dramatic change in speeding up manufacturing processes…Their potential to drive economic growth and reshape industries is becoming increasingly clear, and the manufacturing sector and its factories of the future have a central role to play.”

As AI adoption grows, manufacturers are increasingly tapping into its potential to drive efficiency, improve quality, and optimise safety within their operations. AI is being implemented in several key areas, transforming traditional manufacturing processes into more agile and data-driven systems.

For example, the Toyota Research Institute has developed an AI technique that reduces design alterations by integrating engineering constraints earlier in the creative process. By combining optimisation principles with text-to-image generative AI, constraints like aerodynamic drag and chassis dimensions can now be considered in design, minimising disruption to aesthetics. Designers can input text requests for designs with specific attributes, like “sleek” or “SUV-like,” based on a prototype and performance criteria.

AI is also transforming quality control by replacing manual inspection with AI-powered systems, particularly those using computer vision, ensuring faster and more accurate product checks. Tesla and BMW use AI to inspect vehicle components before they leave the production line.

AI is now also being used by manufacturers for predictive maintenance, where algorithms monitor machinery in real time to predict when maintenance is needed, preventing breakdowns. GE has been using AI and machine learning for predictive maintenance in various industrial sectors, including wind turbines and gas turbines. By analysing data from turbine sensors, AI helps GE predict potential failures before they occur, allowing for more efficient maintenance scheduling.
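As a simplified sketch of the underlying idea (not a description of GE’s actual system), predictive maintenance can start with something as basic as flagging sensor readings that drift outside their recent statistical range. The sensor values, window size, and threshold below are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the rolling mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)  # index of the suspect reading
    return anomalies

# Simulated turbine bearing temperatures: stable cycle, then a spike.
temps = [70.0 + 0.1 * (i % 5) for i in range(40)] + [85.0]
print(flag_anomalies(temps))  # → [40]
```

Production systems are far more sophisticated (multivariate models, learned failure signatures), but the principle is the same: learn what “normal” looks like from sensor data and schedule maintenance when behaviour departs from it.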

Robotic Process Automation, where AI-powered robots perform repetitive or hazardous tasks like assembly, welding, and packaging, is also increasingly being adopted. Fanuc, a Japanese manufacturer of “cobots” (robots designed to work alongside humans), produces AI-integrated robots used worldwide for tasks like component assembly and packaging. Unlike traditional robots, cobots interact safely with human workers, and future AI advancements may make them even smarter, enabling them to learn from human behaviour and improve efficiency.

Beyond production, AI is also being used to help with the broader monitoring of supply chains. For example, Siemens’ MindSphere is an Internet of Things platform that connects devices and machines across the supply chain. It is designed to enable real-time monitoring and advanced data analysis, to help companies track inventory, optimise logistics and predict potential disruptions by leveraging AI and analytics to enhance decision-making and improve supply chain efficiency.

How AI Can Make Mistakes

AI systems, while offering significant benefits in manufacturing, are not infallible and can make mistakes. These errors often stem from issues with the training data, the algorithms, or the integration with hardware.

A common cause of AI mistakes is poor or biased data. AI relies on large datasets to learn patterns, so incomplete or biased data can lead to flawed outcomes. For example, an AI trained on a limited range of product defects may miss new or uncommon issues.

AI mistakes can also arise from “overfitting” and “underfitting.” Overfitting occurs when an AI model is tuned too closely to its training data, capturing irrelevant patterns that don’t generalise to real-world scenarios. Underfitting happens when the model is too simplistic and misses critical signals in the data, preventing accurate predictions such as forecasts of machine failures.
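To make the distinction concrete, here is a minimal, hypothetical illustration: a model that simply memorises its training data (overfitting) looks perfect on that data but fails on new inputs, while a model that ignores the input entirely (underfitting) is mediocre everywhere. The data and models are invented for illustration only:

```python
from statistics import mean

# Toy data: vibration level -> days until failure (with noise).
train = [(1, 90), (2, 81), (3, 69), (4, 62)]
test  = [(2, 79), (3, 71), (5, 48)]

def mae(model, data):
    """Mean absolute error of a prediction function over (x, y) pairs."""
    return mean(abs(model(x) - y) for x, y in data)

# Overfitting: memorise the training set exactly; guess blindly otherwise.
memo = dict(train)
overfit = lambda x: memo.get(x, 0)

# Underfitting: ignore the input and always predict the training mean.
avg = mean(y for _, y in train)
underfit = lambda x: avg

# A reasonable model: a simple linear trend.
linear = lambda x: 100 - 10 * x

print(mae(overfit, train), mae(overfit, test))    # perfect on train, poor on test
print(mae(underfit, train), mae(underfit, test))  # mediocre on both
print(mae(linear, train), mae(linear, test))      # decent on both
```

The overfitted model scores zero error on the data it has seen, which can look reassuring in testing, yet performs worst of the three on unseen parts; this is exactly the failure mode described above.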

Algorithmic assumptions are another source of AI errors. AI systems often rely on assumptions about processes, which, if wrong, can lead to mistakes. For example, an AI-driven inspection system may assume that the surface texture of a metal part remains uniform throughout the production run. If a batch of parts has variations in texture due to a supplier issue or slight changes in production conditions, the system may fail to detect defects, leading to faulty products reaching the market. Similarly, AI systems designed to optimise efficiency may prioritise speed over quality, resulting in defects or product failures.

AI systems often interact with hardware, and faulty communication between the two can lead to errors. In particular, issues can arise where an AI system becomes too advanced for the hardware on which it is installed.

Furthermore, human errors during AI training or deployment can result in mistakes, such as applying an AI model built for one product type to a different type or failing to adjust AI programming to align with changes in quality standards.

Liability for AI-Driven Mistakes

While AI offers tremendous opportunities, as AI systems are given more decision-making roles in manufacturing, questions arise about who is liable when AI makes a mistake. Who is responsible for the losses caused? Is it the manufacturer, given that they implemented the AI system, or the AI software provider? And if an AI-driven robot was involved, might the manufacturer of the robot be liable (if different from the AI software provider)?

The evolving nature of AI law means that many jurisdictions have yet to establish clear laws on who is liable for AI-driven mistakes. Manufacturers need to be aware of the growing legal complexity and anticipate the potential for new legal frameworks that might address AI liability more directly.

A critical factor in determining liability will be the root cause of the error. If the mistake was due to poor implementation by the manufacturer or misuse of the system, liability may rest with the manufacturer. However, if the issue arose from the AI software’s design or algorithm, the software provider may be held accountable. To determine the cause, expert evidence will likely be required, and legal disputes may involve technical assessments of how the AI system operates and the specific errors it made.

In addition to the technical aspects of the failure, the level of control and oversight exercised by the manufacturer is likely to be considered. If a manufacturer heavily customised the AI system or operated it outside of the intended use cases, they might bear responsibility for the outcomes. On the other hand, if the manufacturer used an off-the-shelf solution from a reputable AI provider, the responsibility may shift toward the software provider, especially if the system’s failure was caused by a defect in the AI model or its training data.

The fact that AI systems can be continuously updated and modified over time (whether by human input or through organic learning) complicates matters. Manufacturers must ensure that their AI systems are regularly monitored and updated to minimise risks. However, they may also face challenges in maintaining clear accountability for systems that evolve, especially if the modifications were made by different parties or if the AI’s decision-making capabilities change post-deployment.

The global legal landscape further complicates matters. Many jurisdictions around the world are still in the early stages of developing comprehensive legal frameworks that address AI-driven decision-making. In the absence of clear, established laws, manufacturers are operating in a grey area, and legal uncertainty could expose them to unforeseen risks, especially as new legal precedents emerge. This is particularly problematic in a globalised manufacturing environment where AI systems are integrated across borders: jurisdictions with differing legal standards may interpret AI-related liability in inconsistent ways, creating challenges for manufacturers who operate internationally.

This lack of clear legal direction means that manufacturers must anticipate future changes to the regulatory landscape and prepare accordingly. AI liability is expected to be a key focus of regulatory bodies in the near future, with laws and standards likely to evolve as the technology matures. As such, manufacturers need to stay informed about developments in AI-related legislation and work proactively to ensure that they comply with emerging requirements.

How Manufacturers May Be Able to Mitigate Risks

To mitigate the risks associated with AI implementation, manufacturers must be proactive in addressing potential liabilities before issues arise. Below are some non-exhaustive examples of how manufacturers may look to mitigate the risk of being liable for AI-driven mistakes at production level.

Contractual Protections

Manufacturers should take steps to ensure that contracts with relevant parties, including customer contracts, clearly define the roles and responsibilities of all parties involved in the AI ecosystem and that their liability is limited so far as possible. Indemnity clauses can also help protect manufacturers by shifting potential financial risks to other parties.

However, the above clauses require mutual agreement and, even when agreed upon, may later be subject to challenge or interpreted differently, leading to potential legal disputes.

Working with Reputable AI Suppliers

To reduce the risk of AI-driven errors, manufacturers should work exclusively with reputable and experienced AI suppliers. Selecting a supplier with a proven track record of delivering reliable, well-tested AI solutions is crucial to ensuring that the technology operates as intended. Reputable suppliers typically have rigorous testing processes in place, reducing the likelihood of malfunctions that could lead to costly errors.

Manufacturers should also ensure that AI suppliers are transparent about their products’ limitations and provide clear documentation on how the AI system operates. This includes understanding the data inputs and the logic behind decision-making processes. By selecting trustworthy suppliers and fostering strong communication, manufacturers can better manage the risks associated with AI implementation.

Ensuring Adequate Insurance Coverage

Given the potential risks associated with AI systems in manufacturing, it is essential for manufacturers to invest in comprehensive insurance coverage. Traditional insurance policies may not cover AI-driven errors, so manufacturers should work closely with insurers to ensure that their policies address specific AI-related risks. Some insurers may be unwilling to cover such risks.

Gradual Integration

Rather than immediately deploying AI systems across the entire manufacturing process, manufacturers should consider a gradual, phased approach to integration. By introducing AI technology incrementally, manufacturers can monitor its performance in real-world conditions and make adjustments as needed. This slow integration process allows manufacturers to build confidence in the AI system’s reliability and identify potential issues early on before they escalate into serious problems.

Starting with pilot projects or limited deployments enables manufacturers to assess the performance of AI in controlled environments and refine the system based on feedback, helping to reduce future legal risk.

Stress Testing

Manufacturers should conduct thorough stress tests and scenario simulations to evaluate how the system behaves under different conditions. Stress testing allows manufacturers to identify potential vulnerabilities in the AI system and rectify issues before they lead to costly mistakes.
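A minimal sketch of the idea, using a hypothetical inspection rule and invented noise levels: one form of stress test sweeps increasing sensor noise past an inspection threshold and measures how often genuinely good parts are wrongly rejected, revealing where the system becomes unreliable:

```python
import random

def passes_inspection(measurement, nominal=50.0, tolerance=0.5):
    """Hypothetical inspection rule: accept a part if its measured
    dimension is within `tolerance` mm of the nominal value."""
    return abs(measurement - nominal) <= tolerance

def stress_test(rule, noise_levels, trials=1000, seed=42):
    """For each sensor noise level, estimate how often a genuinely
    good part (exactly nominal) is wrongly rejected."""
    rng = random.Random(seed)
    results = {}
    for noise in noise_levels:
        rejects = sum(
            not rule(50.0 + rng.gauss(0, noise)) for _ in range(trials)
        )
        results[noise] = rejects / trials
    return results

# The false-reject rate climbs sharply as sensor noise nears the tolerance.
for noise, rate in stress_test(passes_inspection, [0.1, 0.3, 0.5]).items():
    print(f"noise={noise}: false rejects {rate:.1%}")
```

The same pattern applies to other stressors: varying lighting for vision systems, out-of-spec input materials, or hardware latency. The goal is to find the conditions under which the system degrades before they occur on a live production line.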

Ongoing Monitoring and Continuous Improvement

Once integrated into manufacturing, AI systems should be continuously monitored to identify and address issues early. Regular audits can ensure the system functions as intended. Since AI can evolve and learn, manufacturers should collaborate with providers to update and refine the AI software, ensuring it remains accurate, safe, and effective. By committing to continuous improvement, manufacturers can ensure that their AI systems adapt to changing production environments and legal requirements, reducing the likelihood of errors and liabilities.

Education and Training on AI Systems

Another potential way to mitigate legal risks is by investing in training and education for employees who interact with AI systems. Workers should understand how AI technologies work, including their capabilities, limitations and the potential risks associated with their use, to ensure AI is used properly and not improperly relied upon.

Conclusion

The integration of AI into manufacturing has the potential to revolutionise the industry, offering improved efficiency, quality control, and innovation. However, manufacturers should be mindful of the legal risks that accompany this technology.

As AI becomes more deeply integrated into manufacturing processes, the risk of liability for AI-driven errors grows. However, manufacturers can take several proactive steps to manage these risks effectively. By reassessing contracts, ensuring adequate insurance coverage, working with reputable suppliers, stress-testing AI systems, and investing in education and training, manufacturers can reduce the likelihood of costly mistakes and minimise the financial and legal consequences of AI errors.

How can we help?

For more information about legal action for manufacturers don’t hesitate to get in touch with Simon Key (Partner) or Dominic Simon (Senior Associate) in our expert Dispute Resolution team. Please contact the team in Derby, Leicester or Nottingham on 0800 024 1976 or via our online form.


The post AI in Manufacturing: Who is to Blame for Mistakes? appeared first on Nelsons.





Naomi is a highly skilled NZ court lawyer with more than 25 years’ experience and a family law expert in child care and custody disputes.
