Soaring investment from big tech companies in artificial intelligence and chatbots – amid massive layoffs and a growth decline – has left many chief information security officers in a whirlwind.
With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard and Elon Musk’s plan for his own chatbot making headlines, generative AI is seeping into the workplace, and chief information security officers need to approach the technology with caution and put the necessary security measures in place.
The tech behind GPT, short for generative pre-trained transformer, is powered by large language models (LLMs), the algorithms that produce a chatbot’s human-like conversations. But not every company has its own GPT, so companies need to monitor how workers use this technology.
People are going to use generative AI if they find it useful to do their work, says Michael Chui, a partner at the McKinsey Global Institute, comparing it to the way workers use personal computers or phones.
“Even when it’s not sanctioned or blessed by IT, people are finding [chatbots] useful,” Chui said.
“Throughout history, we’ve found technologies which are so compelling that individuals are willing to pay for it,” he said. “People were buying mobile phones long before businesses said, ‘I will supply this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”
As a result, there’s “catch up” for companies in terms of how they are going to approach security measures, Chui added.
Whether it’s standard business practice like monitoring what information is shared on an AI platform or integrating a company-sanctioned GPT in the workplace, experts think there are certain areas where CISOs and companies should start.
Start with the basics of information security
CISOs – already combating burnout and stress – deal with enough problems, like potential cybersecurity attacks and increasing automation needs. As AI and GPT move into the workplace, CISOs can start with the security basics.
Chui said companies can license use of an existing AI platform, so they can monitor what employees say to a chatbot and make sure that the information shared is protected.
“If you’re a corporation, you don’t want your employees prompting a publicly available chatbot with confidential information,” Chui said. “So, you could put technical means in place, where you can license the software and have an enforceable legal agreement about where your data goes or doesn’t go.”
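The “technical means” Chui describes often take the form of a screening layer that sits between employees and a public chatbot. A minimal sketch of that idea, assuming illustrative patterns only (real deployments would rely on a dedicated data-loss-prevention engine, and the pattern names here are hypothetical):

```python
import re

# Hypothetical patterns a company might treat as confidential.
# Illustrative only; a real DLP engine would use far richer rules.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a confidential pattern before the
    prompt is forwarded to an external chatbot API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

For example, `redact_prompt("contact jane@corp.com")` would return the string with the address replaced by a redaction marker, so the confidential detail never leaves the company network.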
Licensing use of software comes with additional checks and balances, Chui said. Protection of confidential information, regulation of where the information gets stored, and guidelines for how employees can use the software – all are standard procedure when companies license software, AI or not.
“If you have an agreement, you can audit the software, so you can see if they’re protecting the data in the ways that you want it to be protected,” Chui said.
Most companies that store information with cloud-based software already do this, Chui said, so getting ahead and offering employees a company-sanctioned AI platform means a business is already in line with existing industry practices.
How to create or integrate a customized GPT
One security option for companies is to develop their own GPT, or hire companies that create this technology to make a custom version, says Sameer Penakalapati, chief executive officer at Ceipal, an AI-driven talent acquisition platform.
In specific functions like HR, there are multiple platforms, from Ceipal’s to Beamery’s TalentGPT, and companies may consider Microsoft’s plan to offer customizable GPTs. Despite the high costs involved, companies may also want to create their own technology.
If a company creates its own GPT, the software will contain exactly the information it wants employees to have access to, and the company can safeguard the information employees feed into it, Penakalapati said. Even hiring an AI company to build the platform will let a business feed and store information safely, he added.
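One way to realize Penakalapati’s point, that a custom GPT sees only the information a company wants employees to access, is to assemble every prompt from an approved document store. A minimal sketch, assuming a hypothetical store and prompt format (the documents and wording here are invented for illustration):

```python
# Hypothetical store of documents approved for employee access.
APPROVED_DOCS = {
    "pto-policy": "Employees accrue 1.5 vacation days per month.",
    "expense-policy": "Receipts are required for expenses over $25.",
}

def build_prompt(question: str) -> str:
    """Assemble a prompt whose context comes only from documents the
    company has explicitly approved, before sending it to a
    self-hosted model."""
    context = "\n".join(APPROVED_DOCS.values())
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

Because the context is built from a curated store rather than the open internet, what the chatbot can repeat back is bounded by what the company chose to put in.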
Whatever path a company chooses, Penakalapati said that CISOs should remember that these machines perform based on how they have been taught. It’s important to be intentional about the data you’re giving the technology.
“I always tell people to make sure you have technology that provides information based on unbiased and accurate data,” Penakalapati said. “Because this technology is not created by accident.”