We’re in the midst of the Artificial Intelligence “Wild Wild West” right now. Organizations and individuals are eager to jump in and give generative AI a try as they explore how to use AI tools to be more competitive, innovative, and efficient. But there are few guardrails and little consensus on how to manage these powerful new tools.

Even though AI tools are saving time on various tasks and creating efficiencies, AI is not without risk. Some threats, like malicious actors creating deepfakes with AI tools, have been well publicized. But there are also risks in using AI tools for even the simplest day-to-day work tasks. Data privacy and security, ethical issues (including unintentional data bias), copyright infringement, and even unintentionally relying on inaccurate information are all concerns you need to guard against.

Here are some best practices to help you leverage the great power of AI while minimizing the risks.

The Shadow IT in Your Business

Recent studies report that more than 60% of knowledge workers are already using generative AI tools. Individuals are getting more comfortable using these tools and are curious about how they can help with tasks like writing emails, taking notes, or drafting sales letters. 

Several generative AI tools are becoming commonplace in many workplaces and workflows. Professionals from sales teams and creatives to programmers are using tools like ChatGPT, Microsoft Copilot, Google Gemini (formerly Bard), and Perplexity, while the AI features built into Adobe, Canva, and GitHub products have also gained popularity over the last two or three years.

But a type of shadow IT involving AI tools is developing at some companies as employees try out AI tools on their own. In some cases, someone may find what they think is just a fun or easy-to-use tool, not realizing that it’s AI-driven.

If your teams aren’t using AI properly, with the right guardrails and secure platforms in place, sensitive information can be released inadvertently.

Let’s use the popular tool ChatGPT as an example. There is a significant difference between the enterprise version of ChatGPT and the free version. The enterprise version provides organizational controls and enterprise-level security and privacy. This means you can establish an organizational account and provide individual logins for your employees, so that the tool can learn from and analyze your own information within a closed system that’s private to your organization.

With the free version of ChatGPT, however, your employees could inadvertently be feeding your company’s private documents or proprietary information, such as contracts, company data, intellectual property, or customer data, into the public model and out onto the internet. Other AI tools, including HR tools, could likewise put personal information at risk.

Without a policy and guardrails in place, your organization could be introducing new risks to your systems and environments. Plus, your clients and partners will likely want assurances that you’ve established an AI policy or guidelines to make sure their information is protected.

Key Considerations for Your AI Policy 

It’s time for a comprehensive AI policy. Your policy should include usage guidelines, an approval process, and governance to adequately address any risks and concerns. 

Your first and probably most critical starting point is awareness of the AI tools already being used in your business. This means conducting an audit of which tools are being used, who’s using them, how they’re using them, and what data is involved. This audit will include polling your employees and departments as well as checking systems to find out what’s being downloaded and used. 

When you conduct this research, you might be surprised by what’s already in use.

In the meantime, have your IT team, including your data protection staff, start to create the vital components of your AI policy, including:

Approval process: Create and implement an approval process for each new tool or model. Make sure your employees know that any generative AI tool must be on the approved list of tools, and that they have access to this list. If an employee finds a tool they want to try that’s not on the approved list, they can’t just launch it on their own; they need to run it up the ladder.

At Tarkenton, we have an application process. The employee submits an approval application to one of our IT department’s data stewards. Each tool is evaluated by the data steward with a risk assessment and impact analysis, along with ethics and compliance checks. Based on that report, we decide whether to add the tool to our approved list. Your business will likely need to address individual cases of AI tools and usage as they come up, but your approved list should provide a good basis for most cases.

Governance and controls: Establish and implement governance. This might mean installing network controls that allow IT to identify what people are using and spot anything that raises a red flag. Incident response mechanisms should also be in place, just like any other security measure.

From an AI tool perspective, one of the main areas of concern will be your data policy: making sure that company and customer data is safe and secure at all times. It’s imperative that system safeguards are in place whenever these tools are being used.

Part of governance is determining who on your teams has authority over and access to these tools. For instance, will access be restricted to employees only, or can outside team members, such as freelancers or consultants, use these tools as well?

Training and education: Training, educating, and upskilling your employees is imperative. You want your employees to use these tools safely and securely, but also with good results. You also want to be sure employees understand when they are using AI; sometimes these tools are built into programs and software, and people aren’t even aware they’re using AI.

Also make sure they have opportunities to get training on these tools so they understand what the tools can and can’t do. Part of that training may include learning to fact-check AI-generated content. Even with the most detailed prompts, generative AI can still return false or misleading information.

Corporate accounts: You’ll want to create and maintain paid organizational accounts for each AI tool on your approved list, which each employee can access with their own secure login. This way there is no reason for an employee to use their own (potentially vulnerable) version of a tool.

Regardless of how detailed your AI policy is, it’s important to realize that the entire AI landscape is rapidly evolving. New tools and models are being released almost continuously. It makes sense to review your AI policy, including your list of approved AI tools, at least once a year to adjust for new tools, new needs, and new opportunities.

Maintaining a thorough and detailed AI policy will help your business adapt and gain the maximum benefit from these innovations, while protecting you from the dangers of this new Wild Wild West.

***

I’ve always loved being part of something new. When I first entered the NFL in 1961, quarterbacks never scrambled, but that didn’t last long! As my career went on, I was there for the birth of the West Coast Offense among many other innovations that changed the game. I still love seeing the way football continues to change and grow today!

Nothing stands still. I’ve also been there for the many innovations that have reshaped how we do business in my lifetime. We had the computer revolution, and then the rise of the Internet changed everything about how we work. Now we’re seeing that AI could be just as big, or even bigger.

How should your company approach the AI Revolution? Our team has some insights for how you can benefit from new technology while being responsible about the risks.

Fran Tarkenton