Even if you have a rule against it, your employees probably use ChatGPT for work. 

We know this from observation and a few useful stats.

According to OpenAI CEO Sam Altman in 2023, more than 92% of Fortune 500 companies are using ChatGPT.

College-educated workers and young professionals are ChatGPT’s biggest user groups; 64% of users are between 18 and 35.

Never has a new tool been adopted into the workplace so quickly from the bottom up. AI is rewriting how work is done, and the vast majority of this new hybrid-AI work is happening with the help of ChatGPT.

Pretending your teams are not using AI is not an option. If you have not already given them guidelines or training, now is the time.

These are powerful tools that can help your organization, but it’s important to weigh the good with the bad. It’s not as simple as “do this, don’t do that.” How AI is affecting workers and companies is nuanced and requires high-level thinking.

This list of considerations applies to all AI tools. I center the approach on ChatGPT because it’s what most people are using, which makes the guidance more accessible and easier to understand. Adapt it for your use as needed.

The companies that adapt best to AI’s impact will consider and respond to each of them.

Start Small

AI is a long game that will be with us for the foreseeable future, but quick wins are important here. Quick wins will help mitigate risks right away while setting a tone that’s supportive of innovation.

Best Practice Training

This is a great place to start if you have not already hosted company-wide training on ChatGPT best practices. A simple one-hour workshop with specific use cases that every department can learn from helps everyone understand what’s possible and what to avoid.

Quick Policy Guidance

Your employees should have a general idea of what they can and cannot use ChatGPT for when it comes to workplace use. The sooner you get quick policy guidance out the better. 

It may take your organization months to cement more specific and in-depth AI-use policies and guidance. In the interim, you want to make sure you have something out there for the team to understand risks and opportunities.

Decide Who Is In Charge

People are using AI all across your organization, from interns to the C-suite. 

Start by defining a Center of Excellence team (CoE) that will provide leadership, best practices, research, support, and training on AI for the whole organization.

Your innovation, marketing, sales, and legal teams should be represented, as well as someone from leadership.

This ensures a company-wide plan and perspective on AI and ChatGPT to guide your organization and avoid inconsistent policies across departments and teams.

That team will start with research to make sure they have a baseline of knowledge, understand stakeholder opinions, identify pain points, align on goals, evaluate tools, and start to identify model use cases.




Model Best Use Cases

A recent study of law students who were allowed to use ChatGPT found that they worked faster, produced higher-quality work, and enjoyed their work more.

The best way to help your team members get the most out of ChatGPT is to identify where ChatGPT can best help your teams and create best practices around those opportunities. 

These examples should be simple to follow. Set clear usage recommendations to mitigate risks, ensuring ChatGPT is used as a complement to human skills, not a replacement.

For example, your Customer Service team can use AI tools to quickly gather information and troubleshoot customer issues, but having a process defined for them will help mitigate potential issues while maintaining accuracy and high quality support. 

Define what AI can do well in this process, where it should be avoided, and how it will be quality checked.

So beyond helping you figure out what to cook for dinner with the 4 random items left in your refrigerator, what is ChatGPT really good at? You know, for work stuff. 

Let’s spotlight some of the best use cases where ChatGPT excels:

  • Research: Gather and summarize information on complex topics
  • Ideation: Assist brainstorming with creative suggestions and alternative perspectives
  • Drafting: Produce content outlines and other project scopes
  • Editing: Check for spelling, grammar, consistency, and other errors
  • Analysis: Extract key insights and patterns from large sets of data or text
  • Summarizing: Condense lengthy documents, reports, or meeting notes
  • Training: Provide real-time Q&A support for employee skill development
  • Technical Support: Resolve common issues and software roadblocks

Not only do these use cases improve productivity and creative output, they also have the potential to take a bit of the “blah” out of the average employee’s work day. Any time you can reduce repetitive and draining tasks, your workforce will be happier.


Name The Risks

ChatGPT presents a litany of opportunities and improvements for your team members, but we should not ignore or shrug off the real risks associated with the use of AI tools.

Risks of Using AI Tools

1. Bad Info

You probably shouldn’t listen to ChatGPT above your doctor yet. AI tools are getting better each day and hallucinating less, but they’re not foolproof. We see a steady stream of examples in the media of lawyers, business leaders, and politicians disseminating incorrect or misleading information.

Does your company have a process in place to check for bad info in all internal and external communications?

2. Poor Content

Don’t let ChatGPT do your homework for you. 

Overreliance on ChatGPT for copywriting or communication is simply bad practice, mainly because it’s not usually that good yet.

ChatGPT typically writes content that’s overly verbose and formal, with specific turns of phrase that human writers tend to avoid. For instance, any time I see a corporate blog post whose closing paragraph starts with “In conclusion,” I immediately second-guess how much of that content was written by a human.

This will evolve and improve over time. Which makes it even more important to make sure you have processes in place today to ensure quality outputs from your teams.

Are there expectations for employees to review and rewrite any content that is generated by AI?

3. Sophisticated Cyber Threats

That unique tone of voice your CEO uses in emails? ChatGPT can probably replicate that pretty easily. 

As your company gets better at these tools, so do bad actors. Phishing attacks are easier than ever to craft, even for non-native English speakers. If your executives have public-facing written, spoken, or video content, anyone could use that material to replicate their voice and cause trouble.

With the rise of AI threats, is your organization increasing attention to cyber threats?

4. Data Security Challenges

Most leaders probably don’t want their finance team feeding profit and loss data into ChatGPT to save 5 minutes of their time analyzing some data.

All the information your team members enter into ChatGPT and other AI tools automatically creates data security issues that many companies have not grappled with. 

A few risks to make sure your organization is considering:

  • Data Privacy: ChatGPT is trained on data provided during interactions, raising concerns about inadvertent exposure of proprietary or confidential information
  • Data Security: Potential for data breaches or unauthorized access
  • Compliance Issues: Violations of data protection regulations like GDPR or HIPAA
  • Misuse Risk: Inadvertently training AI models on data shared without proper consent

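One lightweight safeguard against the risks above is screening text before it leaves your systems. Here is a minimal sketch in Python; the patterns and placeholder labels are illustrative only, and real GDPR or HIPAA compliance requires a vetted data-loss-prevention solution, not a few regexes:

```python
import re

# Illustrative patterns only -- a real deployment would cover far more
# categories (names, addresses, account numbers) with vetted tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder
    before the text is pasted into ChatGPT or sent to an AI API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A gate like this can sit in any internal tool that forwards employee prompts to an AI service, so the policy is enforced in software rather than by memory.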
5. AI Bias

AI tools, including ChatGPT, can inadvertently reflect and amplify biases present in their training data, raising ethical concerns. This can lead to ChatGPT reflecting cultural, gender, racial or other biases present in society. It could also include outdated language or concepts that human editors may be better at catching. 

Be aware of these biases and have a plan to mitigate them in your AI-driven processes.

6. Protect Your IP

If your business has a website of sufficient size, AI tools have likely already crawled your content and may be using it in their datasets. 

Consider restricting any data or content that provides unique value to your business. This may include unique processes, research, or community knowledge. 

On the flip side of the coin, it may be helpful for you to make sure AI tools like ChatGPT have access to promotional material, content marketing, and support resources. These can help your organization surface relevant content for queries from your customers or prospects.
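If your site serves a standard robots.txt, one common way to express both sides of this is a crawler directive for OpenAI’s GPTBot. The paths below are placeholders for illustration, not a recommendation for any specific site structure:

```
# Keep OpenAI's GPTBot out of proprietary sections (example paths)
User-agent: GPTBot
Disallow: /research/
Disallow: /internal/

# Leave promotional and support content open to all crawlers
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary convention; well-behaved crawlers honor it, but it is not an access control, so genuinely sensitive content should also be protected by authentication.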

Continuing Education

Imagine the car you’re driving gets a new feature every few miles you make it down the highway. Eventually you’ll need to pull over and make sure you still understand how the turn signal works.

This goes beyond technical know-how. It’s about cultivating an AI-ready mindset. Regular workshops, hands-on training sessions, and continuous learning opportunities help you build a team that’s not just AI-literate but AI-adaptive.

This is a big shift. To avoid getting overwhelmed, take it slow but be intentional. And remember, the goal is to augment, not replace, human intelligence with AI. That’s your opportunity.

The post Training Your Workforce for ChatGPT: A Comprehensive Guide appeared first on Convince & Convert.