AI in Business: Legal Risks and Mitigation Strategies

23 January 2025, by Naomi Cramer

Artificial Intelligence (AI) presents a wealth of opportunities, with generative AI chatbots such as ChatGPT and Copilot becoming part of everyday life. An increasing number of organisations are also exploring AI to assist with repetitive tasks, enhance customer experiences, and support decision-making. However, significant legal risks must be considered when using AI in business. This article highlights some of the key legal risks associated with AI and the mitigation strategies companies can implement.

Legal Risks Associated with AI in Business

When investing in AI, it is crucial to understand the risks:

  • Intellectual property: AI raises questions about intellectual property rights, particularly for AI-generated content or inventions, and companies must navigate the complexities of ownership and copyright in this context. AI systems are typically trained on large amounts of data, and the quality of that data is important to ensuring high-quality outputs. Where the training data is sourced from third-party materials, copyright issues may arise in relation to the copying or use of those protected materials. Issues may stem from the machine learning process itself, from users uploading copyright materials into a system that then generates works based on that material, or from outputs resembling copyrighted works. If the appropriate licences are not in place from the owner of the copyrighted works, using the outputs generated by the AI system may result in copyright infringement.
  • Data Privacy and Security: AI systems often require large amounts of data, raising concerns about data privacy and security. Where the data being used by the AI system involves personal data, GDPR requirements will need to be considered. Unauthorised access to or misuse of personal data by the AI system can lead to legal repercussions and damage a business's reputation.
  • Accuracy: AI systems can generate inaccurate outputs, particularly generative AI models. Models trained on inaccurate or incomplete data are more likely to produce inaccuracies (also known as hallucinations), so it is important to consider the quality of the data sources used to train AI models. An AI system that provides poor-quality outputs may prove costly for a business and lead to a potential dispute with the supplier.
  • Liability Issues: Determining liability when AI systems malfunction can be complex. The “black box” nature of AI, where decision-making processes are not always transparent, can make it difficult to pinpoint responsibility when something does go wrong. Without a tightly drafted contract, an AI supplier may seek to avoid liability when something goes wrong, leaving your organisation without a satisfactory remedy.
  • Regulatory Compliance: If your business operates in a regulated sector, it is important to ensure that any AI product or service used complies with applicable regulatory requirements.
  • Bias and Discrimination: AI algorithms can inadvertently perpetuate biases present in training data, leading to discriminatory outcomes. This can result in legal challenges and harm to a business’s brand image.

Mitigation Strategies

To mitigate these risks, it is important that businesses have robust AI policies and procedures in place:

  • Carry out pre-contractual due diligence: Before investing in an AI solution, it is important to make sure that the solution being offered will work for your business and that it has been designed to comply with applicable laws. Carrying out pre-contractual due diligence when selecting an AI provider can help mitigate the risks. For instance, potential suppliers can be asked to complete questionnaires and provide information to establish what the AI tool is capable of and to help identify and assess any operational or legal risks at an early stage. If a number of AI suppliers offer similar solutions, this may help you identify which are appropriate for your organisation.
  • Comprehensive AI contracts: Once an AI supplier has been selected, it is important to ensure that a robust contract is in place. Contracts with AI product or service suppliers should clearly define roles and responsibilities for AI development and deployment within your organisation; deal with intellectual property rights so that appropriate licences are in place and your organisation can lawfully use AI-generated outputs for its business objectives; ensure data protection measures are in place to protect personal data; and appropriately allocate risks and liabilities so that your organisation has adequate protection in case things go wrong.
  • Implement Responsible AI Practices: Undertake regular audits of AI systems to ensure they operate fairly and transparently. Monitor for biases and make necessary adjustments.
  • Ensure Data Privacy and Security: AI systems should be designed to minimise the use of personal data and to comply with data protection requirements. The AI system should implement robust data protection measures to safeguard sensitive information, including encryption and access controls, and should be designed to assist the user in complying with data subject requests, such as requests for information or data deletion. It is often sensible to undertake a data protection impact assessment when considering the use of an AI system that will process personal data, to help evaluate the risks and mitigations. Once the AI system is in place, regular data security assessments can also help identify and mitigate any issues.
  • Develop a Comprehensive AI Policy: A well-defined AI policy outlines ethical principles and guidelines for AI use within the organisation. It should address data privacy, intellectual property, bias mitigation, and transparency.
  • Establish Clear Accountability: Define clear roles and responsibilities for AI development and deployment to identify who is accountable within your organisation in case of AI-related issues.
  • Engage in Continuous Learning and Improvement: Continuously monitor and update AI systems to adapt to new challenges and improve performance. Stay informed about regulatory changes and industry best practices.

Conclusion

AI offers immense potential for companies to innovate and improve their operations. However, it also brings legal risks that must be carefully managed.

At Nelsons, we support companies in mitigating these risks. Our expert legal team can provide comprehensive advice on pre-contractual due diligence, contracting, data protection, intellectual property, and the development of effective AI policies.

 

This article is for information only and does not constitute legal/financial advice. Please contact us for advice tailored to your specific position. Some of the content presented on our website has been generated with the assistance of Artificial Intelligence (AI). We ensure that all AI-generated content meets our high standards for accuracy and relevance.


Naomi is a highly skilled NZ court lawyer with more than 25 years' experience and a family law expert in child care and custody disputes.
