What is generative artificial intelligence (AI)?
Generative AI is an artificial intelligence system capable of generating text, images, or other media in response to natural language prompts.
Notable generative AI systems, which fall under the category of machine learning, include ChatGPT, Bing Chat, and Google Bard.
What are the pros of using generative AI?
Rapid content creation capabilities, such as for marketing newsletters and blogs, that provide immediate tangible value and efficiency.
Improved customer experience, through chatbots that offer human-like responses to customer inquiries or assist agents toward better human-to-human service, and through machine learning that analyzes customer purchasing history and online behavior to improve product recommendations.
Smart enterprise search and knowledge management systems that open the exchange of knowledge previously bogged down by inefficient tools and processes.
Faster product development, especially within industries where multiple years of research and development can delay launches of new products.
What are the cons of using generative AI?
The quality of content created through generative AI can vary widely, depending on the quality of the data used to train the model.
Biases that exist in the training data can carry over into the generated content, which may be discriminatory or offensive.
From an ethical standpoint, generative AI can be used to create content that spreads misinformation or deceives people.
Intellectual property issues: content produced by generative AI models may infringe on intellectual property rights, such as copyrights or trademarks, because the models cannot tell whether their output reproduces protected material.
Possible data leaks: sensitive company data may leak through generative AI tools when employees inadvertently enter source code or other intellectual property without realizing the risk.
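The data-leak risk above is commonly mitigated by scanning outbound prompts before they reach a generative AI tool. The sketch below illustrates that idea with a few regex rules; the rule names, patterns, and function names are illustrative assumptions, not any vendor's API, and real DLP engines use far richer detection (data fingerprints, classifiers, exact-data matching):

```python
import re

# Illustrative patterns for sensitive content. These are simplified
# examples, not a production DLP rule set.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:api|secret)[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"\b(?:def |class |#include\s*<|public\s+static)"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive-content rules the prompt matches."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Allow a prompt only if it trips no sensitive-content rule."""
    return not scan_prompt(prompt)
```

For example, `allow_prompt("Summarize our Q3 marketing newsletter")` passes, while a prompt containing `api_key = sk-123456` or a pasted private key would be blocked before it ever leaves the organization.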
30% of large organizations’ outbound marketing messages will be synthetically generated by 2025, up from only 2% in 2022.
How to mitigate ChatGPT risk with Cisco Umbrella
Umbrella includes ChatGPT in its application database for discovery purposes and designates it as high-risk because corporate intellectual property and other sensitive information can easily be leaked through it. Assessing ChatGPT risk is unique to every organization, because it depends largely on the organization's risk appetite and policies, and therefore on what its ChatGPT users are trying to do with it. A risk-averse organization will assess that risk differently than a risk-tolerant one.
Besides being discoverable with Umbrella, ChatGPT usage is also controllable: it can be blocked through both Umbrella's DNS-layer security policy and its secure web gateway policy. Alternatively, ensuring safe ChatGPT usage, and more specifically determining for what purposes to allow it, can increase employee productivity without sacrificing data security. Umbrella's data loss prevention functionality provides this clarity and helps thoughtfully mitigate ChatGPT risk.
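Conceptually, DNS-layer blocking works by intercepting the name lookup before any connection is made: queries for disallowed domains get a sinkhole answer instead of the real IP. The sketch below illustrates that mechanism only; the blocklist, sinkhole address, and function names are hypothetical examples, not Umbrella's actual configuration or API:

```python
# Illustrative DNS-layer policy: domains on the blocklist (and their
# subdomains) resolve to a sinkhole address instead of their real IP.
# The entries here are hypothetical examples, not Umbrella settings.
BLOCKED_DOMAINS = {"chat.openai.com", "bard.google.com"}
SINKHOLE_IP = "0.0.0.0"

def resolve(domain: str, real_lookup) -> str:
    """Return the sinkhole for blocked domains; otherwise defer to the
    real resolver (passed in as a callable)."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself and every parent domain against the blocklist,
    # so subdomains of a blocked domain are blocked too.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            return SINKHOLE_IP
    return real_lookup(domain)
```

Because the check happens at resolution time, a client that honors the sinkholed answer never opens a connection to the blocked service, regardless of which application issued the request.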
Modern cybersecurity made easy
Cisco Umbrella offers flexible, scalable, cloud-delivered security for your users and data – on and off your network – combining security functions into one solution, so you can extend data protection to devices, remote users, and distributed locations anywhere.