Creating a generative AI policy for businesses

Guest Contributor
June 28, 2023


By Yogi Schulz

Yogi Schulz has more than 40 years of information technology experience in various industries; his specialties include IT strategy, web strategy and systems project management. His forthcoming book, co-authored by Jocelyn Schulz Lapointe, is “A Project Sponsor’s Guide for Projects: Managing Risk and Improving Performance.”

Last year ChatGPT took the world by storm. This product, along with the growing list of its chatbot competitors, is an example of generative AI technology built on large language models. These AI software services, such as Bard, Bing, ChatGPT, DALL-E, and Midjourney, quickly revolutionized the field of natural language processing and were adopted with astonishing speed.

ChatGPT went viral primarily because of its ease of interaction and its ability to accelerate daily work significantly and improve productivity in many settings. These benefits save vast amounts of time and money. Generative AI technology’s most significant impact has been among paraprofessionals and in the applied sciences, middle management, and educational services.

Given the rapid and often ad hoc adoption of generative AI software services, it is now time for business leaders to establish policies that ensure its responsible use and draw attention to its risks. Whether, when, and how to adopt generative AI should not be left to individual employee judgment.

“A short, well-crafted policy for generative AI can help a company achieve the benefits of using this widely applicable technology,” says Ronnie Scott, Chief Technology Officer at Charter, a Canadian information technology service provider. “Implementing a policy will also minimize many of the risks this new technology introduces.” 

Why is employee use of generative AI a concern?

Generative AI offers many potential benefits to organizations. However, many concerns associated with its use arise because generative AI does not possess intelligence, despite superficially compelling examples to the contrary. The concerns include:

Errors – Generative AI often creates content blending accurate and erroneous statements. These errors are called hallucinations; they occur because the model’s training data also contains this blend. In addition, generative AI output is highly sensitive to small changes in the prompt text provided. This sensitivity can result in wide variations in output, undermining confidence.

Safety – A frequent use of generative AI is to develop software. If that software operates devices such as autonomous vehicles, robots, or manufacturing plants, generative AI can introduce software defects that lead to unsafe or dangerous situations.

Intellectual property – Generative AI can be used to create seemingly new content, infringing on the intellectual property rights of others. The infringement arises because the copyrighted material was included in the model’s training data and can lead to lawsuits and damages.

Misuse – Generative AI can create realistic but fake content. Examples of fake content are fake news, misleading analytical reports, and highly realistic, misleading images known as deepfakes. Malicious individuals can use such fake content to spread misinformation and disinformation to mislead and manipulate public opinion.

Bias – Generative AI models are biased because the data used to train the models is often biased. Examples include age, racial, ethnic, gender, and political biases. The discovery of bias sometimes leads to discrimination lawsuits and damages. Skewed or biased training data can also lead to faulty analysis and dubious recommendations.

Privacy – Generative AI can be used to create realistic fake content about actual individuals. Examples are fake text, images, or videos that violate individuals’ or customers’ privacy or defame them. Privacy violations can lead to fines under the European Union’s General Data Protection Regulation or to lawsuits.

Ethics – Generative AI use raises broader ethical concerns about a company’s culture, reputation, and alignment with its stated mission, values, and goals. Examples of this problem crop up when companies are accused of unethical behaviour based on how they employ generative AI results. Examples also arise in art or literature, where the use of generative AI raises ethical questions about the authenticity and originality of the content.

What are the elements of an effective generative AI policy?

In many ways, a generative AI policy extends and clarifies a business’s existing policies. The policy goal is to:

  • Educate employees about generative AI opportunities and risks.
  • Raise awareness of company policies.
  • Avoid constraining innovation.
  • Ensure the responsible use of generative AI.
  • Reduce the risks associated with this technology.

Business leaders should consider the following elements in developing the generative AI policy for their organization:

Employee responsibility for work products – Employees are always responsible for their work products’ quality, accuracy and completeness. The emergence of generative AI as a new, additional tool that supports employee work does not change this fundamental responsibility.

Generative AI is an additional tool. It does not replace research or analysis as previously conducted.

Generative AI sometimes creates erroneous and irrelevant statements. Employees must be vigilant in removing these from their work products.

Generative AI routinely incorporates copyrighted material owned by others in its results. Employees must remove this material from work products or take steps to license it.

“The dog ate my homework” has never been an acceptable excuse. Using generative AI does not introduce a new excuse for inadequate work.

Peer review – Whether or not generative AI is used, the company should have and should continue with well-established review and governance processes to ensure accurate and acceptable work products.

Using generative AI is not a reason to avoid or circumvent the company’s review processes. In fact, given the limitations of generative AI, its output must be reviewed and defended with these additional considerations: 

  • Appropriateness of using generative AI for the problem space.
  • Explainability and verifiability of results.
  • Relevance and adequacy of training data.
  • Sufficiency of model sophistication.
  • Inadvertent use of copyrighted material.

Intellectual property and trade secrets – Employees are always responsible for safeguarding the company’s IP and trade secrets. Generative AI introduces a new risk of losing or at least revealing these assets because the AI software service logs all prompts and conversations while not accepting any responsibility for the confidentiality of those inputs.

The company’s IP and trade secrets policy should remain consistent, even with the use of generative AI.

Transparency about the use of generative AI –  Employees are expected to be transparent with their peers and supervisors about how their work products are developed. This long-standing policy applies equally to their use of generative AI.

The use of generative AI must maintain compliance with relevant industry regulations and legal obligations and is subject to this long-standing policy.

Acceptable terms of use – Generative AI use is limited to business-related purposes and aligned with the company’s ethical standards.

What should be avoided in a generative AI policy?

Business leaders, when formulating their generative AI policy, should avoid potential issues that discourage use and constrain innovation. These issues include:

Restricting access to generative AI – Companies should encourage, rather than restrict, access to generative AI services, subject to the company’s policy. If some employees want to use highly proprietary corporate data with generative AI, operating a proprietary generative AI environment is advisable. Using the generative AI environment of a cloud service provider may not provide sufficient intellectual property protection.

Onerous review processes – The company should avoid introducing onerous review and audit processes when using generative AI. Such processes slow work and stifle innovation.

A static generative AI policy – Generative AI is a rapidly evolving technology, given the enormous amount of attention and investment it is receiving. As a result, the generative AI policy will be modified and improved as the landscape changes.
