Compliance Challenges Arising from the Use of ChatGPT and Artificial Intelligence

On November 30, 2022, OpenAI launched ChatGPT, and the artificial intelligence chatbot quickly became the talk of the corporate world. With over 100 million users, ChatGPT is one of the fastest-growing applications of all time. Since its launch, businesses have faced issues related to employees using ChatGPT to draft emails and letters, perform research, write code, generate ideas, review resumes, and much more. This blog post covers a number of challenges employers must consider in order to perform a proper risk assessment as the use of ChatGPT becomes more prominent.

Data Security

When employees share an organization’s sensitive or confidential information with ChatGPT, employers are exposed to the risk of security breaches. In the event of a data breach, both consumers’ private information and the business’s own data are at risk. Further, if chat history is not disabled, information entered into the chatbot may be used to train the model and could surface in responses to other users. Employers should consider policies governing the use of such artificial intelligence tools to keep data secure.

Copyright Issues

ChatGPT does not cite sources for the content it generates. Employers may be at risk of copyright violations if employees use copyrighted material provided by the chatbot without attribution. Further, if employees use ChatGPT to generate work product or software code on behalf of their employers, the employer may lose valuable protections under applicable trade secret laws and, depending on the jurisdiction, may not be able to protect the work through copyright.

Bias and Discrimination

ChatGPT is trained on large collections of human-written text, and the biases present in that text carry over into the responses it gives its users. Businesses should be wary of consulting ChatGPT on employment decision-making because doing so could lead to discrimination complaints.

Inaccurate Information

ChatGPT relies on the data it learned during its training phase, so it may generate inaccurate information for its users. The AI tool may draw on online data containing incorrect facts or outdated information. Employees who use ChatGPT for work-related information must review and verify that information before accepting it as accurate.

As ChatGPT and other artificial intelligence tools become increasingly ubiquitous, companies should create policies and train employees on the proper use of such tools at work.
