Ask around the office and you’ll find that many of your people have used an AI chatbot to generate a document for work purposes. But is this putting your business and client data at risk? Here are a few things to consider with your employees using these applications, along with what you can do to help mitigate these risks.
Risk of data inaccuracies
AI chatbots seek out information related to a topic, but what comes back is not always accurate. This means you may be passing on false or misleading information to your clients, suppliers or even employees. In some cases, it is believed the output could even be discriminatory, for example when used to generate HR-related documents. AI chatbots may inadvertently reveal sensitive company or client data if they are not properly configured or secured. This could lead to regulatory violations, data breaches and financial losses. AI may also struggle with morally complex queries, especially in situations that require a deep understanding of cultural sensitivities and human emotions.
Risk of client data being exposed
Any sensitive or client data entered into a generative AI product could be collected by the software owner or third parties, putting it at risk of being inappropriately shared.
There is also the potential that sharing client data through one of these applications could breach the Privacy Act 2020. Storing client data in the cloud may also expose it to vulnerabilities, especially if the cloud provider’s security is compromised.
Risk of plagiarism
If you are using AI to generate materials, there is a potential risk of plagiarism, especially if those materials go back into the public domain. For example, you might use an AI chatbot to create an article, which you then publish on your website as your own material. Parts of that content could have been lifted directly from others’ work, which is why lawsuits over plagiarised work involving AI have begun in the US.
Plagiarism includes the unauthorised use of another party’s intellectual property, notably designs, ideas or text. If the plagiarised material is used for commercial gain, it can lead to intellectual property infringement claims.
Risk around data ownership
If your people are using their own AI chatbot accounts to generate work-related documents, they, not you, become the copyright owner of the work. It also means the employee could legally share the documents they generate with other parties without your consent. So what are your options for mitigating the risks of AI use in the workplace?
■ First, understand how and when your employees are using generative AI.
■ Create a company policy around the use of AI chatbots that sets out clear guidelines for how they are, or are not, to be used. Communicate it well so your employees understand why you are putting it in place.
■ Develop an incident response plan that includes instructions in case of a data breach, including containment strategies.
■ Discuss with your CIO or IT support provider about blocking the use of certain sites so that employees don’t accidentally use a non-approved application.
■ If there is a genuine need for these tools, get an Enterprise account. Whilst it will cost more, there is less risk of client data being leaked: information put into the system remains private to the organisation and is not shared with third parties.
■ Conduct regular security assessments such as audits and address potential security weaknesses before they can be exploited.
■ Encrypt sensitive data, both in storage and in transit.
■ Talk to your insurance broker about your use of these tools and what liability cover you may need as a backstop for any breaches.
If you need advice on your liability exposures relating to generative AI, please feel free to drop us a line.