The CISO Dilemma
How can information security leaders ensure that employees' use of artificial intelligence tools to boost organizational productivity isn't putting sensitive company data at risk of leakage? A glance at LinkedIn or any news feed will make most of us wonder which AI apps are safe to use, which aren't, which should be blocked, and which should be limited or restricted.
As reported by Mashable, Samsung employees inadvertently leaked confidential company information to ChatGPT¹ on at least three separate occasions in April, initially prompting the company to restrict the length of employee ChatGPT prompts to a kilobyte. Then, on May 1, it temporarily banned employee use of generative AI tools on company-owned devices, as well as on non-company-owned devices running on internal networks.
Whether the need is to protect intellectual property (IP), personally identifiable information (PII), or any other private or confidential data, the Samsung example gives CISOs everywhere good reason to scrutinize the risk of similar data leakage events happening in their organizations. It also creates an opportunity to put technology, policies, and processes in place to prevent them. That means having sound data loss prevention and application control capabilities that enable administrators to discover organizational usage of AI tools, assess the risks those tools present to data security, determine whether they are safe or need to be blocked, and then block them or allow usage – either fully or in a limited capacity.
How can a CISO prevent sensitive business data leaks caused by ChatGPT usage?
The only ironclad way to do this is to not use ChatGPT at all. Since abstinence is unlikely to be the way of the future as most organizations weigh the risks of using it against the rewards, a better, more forward-thinking approach for the CISO is to focus instead on ensuring safe ChatGPT usage. So, let's consider the use cases for getting there.
Solving four ChatGPT risk control use cases
As their organizations try to leverage AI to increase efficiency and stay on the bleeding edge of innovation, information security leaders worldwide must ask themselves: How do I let employees use AI to be more productive while ensuring they don't accidentally put sensitive enterprise or customer data into the wrong hands?
The data loss prevention (DLP) and application visibility and control capabilities in Cisco Umbrella’s cloud access security broker (CASB) can help.
Discover ChatGPT usage
It is often said that the first purpose of a CASB is to provide visibility into shadow IT activity so that security admins can better understand which cloud applications their organization's users are accessing – or trying to access. Umbrella includes ChatGPT in its application database for discovery purposes and designates it as high-risk because of how easily corporate intellectual property (IP) and other sensitive information can be leaked through it. An admin can see who is using it, how frequently, and from where.
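Conceptually, this discovery step boils down to aggregating gateway logs by application and user. Here's a minimal sketch of the idea in Python, assuming a hypothetical CSV log export with user, domain, and timestamp columns – this is not Umbrella's actual log schema or reporting interface, just an illustration of how shadow IT discovery works under the hood:

```python
import csv
import io
from collections import Counter

# Hypothetical gateway log export (user, domain, timestamp columns).
# This is NOT Umbrella's actual log schema -- just an illustration of
# how shadow IT discovery aggregates traffic by app and user.
SAMPLE_LOG = """user,domain,timestamp
alice,chat.openai.com,2023-05-02T09:14:00
bob,example.com,2023-05-02T09:15:12
alice,chat.openai.com,2023-05-02T09:17:45
carol,chatgpt.com,2023-05-02T10:02:31
"""

AI_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def summarize_ai_usage(log_file) -> Counter:
    """Count requests to known AI app domains, per user."""
    usage = Counter()
    for row in csv.DictReader(log_file):
        if row["domain"] in AI_DOMAINS:
            usage[row["user"]] += 1
    return usage

for user, hits in summarize_ai_usage(io.StringIO(SAMPLE_LOG)).most_common():
    print(f"{user}: {hits} ChatGPT request(s)")
# alice: 2 ChatGPT request(s)
# carol: 1 ChatGPT request(s)
```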
Assess ChatGPT usage risk
Having a DLP "monitor" rule for "all destinations" enables security admins to understand how ChatGPT is being used in their organization. An existing rule in Umbrella will automatically recognize ChatGPT without any reconfiguration, but admins can also create a new, more targeted rule built from the data classifications of greatest concern to help them better identify events, volume of usage, and the details of that usage. This clearer picture of ChatGPT risk and usage can then inform any changes needed to DLP policy.
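To make the moving parts concrete, here is an illustrative sketch of what such a monitor rule amounts to – this is not Umbrella's rule schema, and the regex classifiers are hypothetical stand-ins for a DLP engine's built-in data classifications. It simply shows how a rule pairs classifications with a destination scope and a non-blocking "monitor" action:

```python
import re

# Illustrative only: a DLP "monitor" rule modeled as plain data.
# The patterns below are hypothetical stand-ins, not Umbrella's
# built-in classifiers.
MONITOR_RULE = {
    "name": "Monitor sensitive data to all destinations",
    "destinations": "all",
    "action": "monitor",  # log the event for review; do not block
    "classifications": {
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    },
}

def evaluate(prompt_text: str) -> list[str]:
    """Return the data classifications a prompt matches, for reporting."""
    return [
        name
        for name, pattern in MONITOR_RULE["classifications"].items()
        if pattern.search(prompt_text)
    ]

print(evaluate("Customer SSN is 123-45-6789"))  # ['us_ssn']
```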
Block ChatGPT usage
ChatGPT is not just discoverable in Umbrella but also controllable, thanks in part to a new AI control category, and it can be blocked through both Umbrella's DNS-layer security policy and its secure web gateway (SWG) policy. Heavily regulated, risk-averse organizations with numerous compliance requirements will find this capability particularly helpful, as will software companies that need safeguards in place to protect source code.
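Mechanically, DNS-layer category blocking is a simple decision: when the resolver sees a lookup for a domain in a blocked category, it answers with a block page instead of the real address. Here is a minimal sketch of that decision; the category assignments are hypothetical, and in a service like Umbrella this mapping and enforcement happen at its resolvers:

```python
# Illustrative only: the core decision behind DNS-layer category
# blocking. The category assignments below are hypothetical.
BLOCKED_CATEGORIES = {"generative-ai"}

DOMAIN_CATEGORIES = {
    "chat.openai.com": "generative-ai",
    "chatgpt.com": "generative-ai",
    "example.com": "business",
}

def resolve_policy(domain: str) -> str:
    """Return 'block' if the domain's category is disallowed, else 'allow'."""
    category = DOMAIN_CATEGORIES.get(domain, "uncategorized")
    return "block" if category in BLOCKED_CATEGORIES else "allow"

assert resolve_policy("chat.openai.com") == "block"
assert resolve_policy("example.com") == "allow"
```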
Allow safe ChatGPT usage
Ensuring safe usage – and, more specifically, determining for what purposes to allow it – can increase employee productivity without sacrificing data security. Umbrella's DLP functionality provides this clarity and helps security teams thoughtfully manage ChatGPT risk.
To protect source code, you can use Umbrella's built-in source code identifier, which detects nearly 30 programming languages and prevents code exfiltration. You can apply it to specific users or to specific groups in the organization. Another option is to use Umbrella's indexed document matching functionality to fingerprint source code, enabling highly accurate, real-time detection that likewise prevents exfiltration.
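Indexed document matching is, at its core, fingerprinting: hash overlapping windows of the protected content once, then screen outbound text for matching hashes in real time. Here is a minimal sketch of that general technique – not Umbrella's implementation; the snippet, window size, and match threshold are all illustrative:

```python
import hashlib

def shingle_hashes(text: str, k: int = 5) -> set[str]:
    """Fingerprint text by hashing every k-token window of it."""
    tokens = text.split()
    return {
        hashlib.sha256(" ".join(tokens[i:i + k]).encode()).hexdigest()
        for i in range(max(len(tokens) - k + 1, 1))
    }

def matches_protected_code(outbound: str, index: set[str],
                           threshold: float = 0.2) -> bool:
    """Flag outbound text when enough of its windows hit the index."""
    hashes = shingle_hashes(outbound)
    return len(hashes & index) / max(len(hashes), 1) >= threshold

# Hypothetical protected source, fingerprinted once into an index.
PROTECTED_SOURCE = """
def compute_interest(principal, rate, years):
    total = principal
    for _ in range(years):
        total = total * (1 + rate)
    return total
"""
index = shingle_hashes(PROTECTED_SOURCE)

# Screening an outbound prompt that pastes in the protected code.
prompt = "please refactor this: " + PROTECTED_SOURCE
print(matches_protected_code(prompt, index))  # True
```

Because the index stores only hashes rather than the source itself, this kind of check is cheap enough to run inline on outbound traffic.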
There's no question that the prevalence of generative AI tools will increase over time, accompanied by constant scrutiny of the risks and rewards of leveraging them. With correctly configured DLP rules in Umbrella, CISOs and security admins alike can enjoy peace of mind knowing that ChatGPT usage is happening in a way that mitigates the risk of confidential company information being leaked.
What should you do next?
If you're already using Umbrella DNS Security Essentials or DNS Security Advantage, you can start managing ChatGPT risk at the DNS layer today. If you don't have the advanced data loss prevention and application control capabilities that come with the more robust Umbrella SIG Essentials (as an add-on) or SIG Advantage (standard), visit our packages comparison page to learn more, and then talk to your Cisco account team about upgrading.
Watch our on-demand AI Risk vs. Reward: The CISO Dilemma webinar, which includes more than 15 minutes of product demos covering the four use cases above. For a more comprehensive overview and demo of Cisco Umbrella, watch our on-demand demo today!
¹ Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT (Mashable)