Introduction
Machine learning-based data loss prevention (DLP) file classifiers provide a fast and effective way to identify sensitive data in real time, giving organizations granular, responsive DLP policy controls. Netskope Advanced DLP offers a wide range of predefined file classifiers, such as passports, driver’s licenses, checks, payment cards, screenshots, source code, tax forms, and business agreements. Although these predefined classifiers are remarkable in their own right, they are necessarily somewhat generic given the enormous diversity of sensitive data across different industries and organizations. To better address company-specific or industry-specific documents, including identity documents, HR files, or critical infrastructure images, Netskope has developed a novel patented approach that allows customers to train their own classifiers while maintaining data privacy. This innovation enables organizations to focus on protecting their most critical information.
This training process, known as Train Your Own Classifier (TYOC), is designed to be efficient, requiring neither a large amount of labeled data nor time-consuming training of a supervised classification model. This capability is made possible through the use of cutting-edge contrastive learning techniques. Customers can upload a small set of example images (approximately 20-30) to the Netskope Security Cloud. These examples are then used to extract important attributes and train a customized classifier using Netskope’s machine learning engine.
Once the custom classifier is trained, it is deployed into the customer’s own tenant to detect sensitive information anywhere they use Netskope DLP, including email and Endpoint DLP. Importantly, the original samples are not retained, and the trained classifier is not shared with any other customers, ensuring the protection of the customer’s sensitive data throughout the process.
Image Similarity and Contrastive Learning
TYOC frames custom classification as an image similarity problem and solves it with contrastive learning.
Image similarity addresses the challenge of identifying images that resemble a reference image, even when the two differ in color, orientation, cropping, or other characteristics. This challenge can be addressed effectively using advanced contrastive learning techniques.
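In embedding-based approaches like this one, similarity between two images is typically measured as the cosine similarity of their feature vectors. The toy vectors below are invented for illustration; in practice each vector would come from the image encoder described later.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings: a reference image, a lightly edited copy, and an unrelated image.
reference = np.array([0.9, 0.1, 0.3])
near_copy = np.array([0.85, 0.15, 0.33])   # e.g. same document, slightly recolored
unrelated = np.array([-0.2, 0.95, -0.1])

print(cosine_similarity(reference, near_copy))  # close to 1.0
print(cosine_similarity(reference, unrelated))  # low (here, negative)
```

A similarity ranking is then just a sort of candidate images by this score against the reference.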
Contrastive learning extracts meaningful representations by contrasting pairs of similar (positive) and dissimilar (negative) instances. It is based on the idea that similar instances should be positioned close together in a learned embedding space, whereas dissimilar instances should be placed far apart. In its self-supervised form, contrastive learning trains image models by augmenting each image in a manner that preserves its semantic content. These augmentations include operations such as random rotations, color distortions, and crops, where each crop retains a significant portion of the original image. The augmented samples are used to train a convolutional neural network (CNN)-based image encoder. This encoder takes an image as input and produces a feature vector, also known as a representation or embedding.
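The augmentation step can be sketched as follows. This is an illustrative pipeline, not Netskope's actual implementation: the rotation, crop fraction, and brightness range are assumptions, and a real system would operate on decoded image files rather than a random array.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Semantic-preserving augmentation: random 90-degree rotation, a crop that
    keeps most of the image, and a mild brightness (color) distortion."""
    out = np.rot90(image, k=int(rng.integers(0, 4)))      # random rotation
    h, w = out.shape[:2]
    ch, cw = int(h * 0.8), int(w * 0.8)                   # crop keeps ~80% per side
    top = int(rng.integers(0, h - ch + 1))
    left = int(rng.integers(0, w - cw + 1))
    out = out[top:top + ch, left:left + cw]
    out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)    # brightness jitter
    return out

# Two augmented views of the same image form one positive pair for training.
image = rng.uniform(0, 255, size=(64, 64, 3))
view_a, view_b = augment(image), augment(image)
```

Each such pair gives the encoder a "free" supervision signal: both views depict the same content, so their embeddings should match.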
Netskope TYOC combines a pre-trained image encoder built by Netskope with a small number of training images provided by a customer. This combination enables the Netskope Security Cloud to perform image similarity ranking on customer-relevant files with performance similar to that of the built-in (predefined) file classifiers.
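One simple way such a combination could work (the source does not describe the exact mechanism, so this is a hedged sketch) is to embed the customer's ~20-30 sample images with the pre-trained encoder and score new files against a class prototype. The synthetic embeddings below stand in for real encoder outputs:

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical stand-in for encoder outputs: 25 customer samples clustered
# around a shared direction, as embeddings of similar documents would be.
rng = np.random.default_rng(1)
base = l2_normalize(rng.normal(size=128))
sample_embeddings = l2_normalize(base + 0.05 * rng.normal(size=(25, 128)))

# A minimal "trained classifier": the normalized mean (prototype) of the samples.
prototype = l2_normalize(sample_embeddings.mean(axis=0))

def score(file_embedding: np.ndarray) -> float:
    """Similarity of a scanned file's embedding to the customer's document class."""
    return float(l2_normalize(file_embedding) @ prototype)

unrelated = l2_normalize(rng.normal(size=128))
print(score(sample_embeddings[0]), score(unrelated))  # in-class high, unrelated near 0
```

Files scoring above a tuned threshold would then be flagged by DLP policy; note that only embeddings, not the original sample images, are needed at scan time, which is consistent with the samples not being retained.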
Training with Contrastive Learning
The encoder model learns to identify similarities between images by assigning the highest similarity to matched pairs of images, referred to as positive pairs. Conversely, unmatched pairs, or negative pairs – drawn from the remainder of the image dataset – are assigned the lowest similarity. We illustrate this concept through examples of positive and negative pairs below.
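A common way to express this objective is an InfoNCE-style (NT-Xent) loss, as used in SimCLR-style methods; the source does not name the exact loss used here, so the formulation below is illustrative. For one anchor image, the loss is small when the positive pair scores much higher than all negatives, and large otherwise:

```python
import numpy as np

def nt_xent_pair_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss for one anchor: pull the positive view closer,
    push the negative (unrelated) images apart."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / temperature)
    neg = sum(np.exp(sim(anchor, n) / temperature) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

anchor    = np.array([1.0, 0.0])        # embedding of one augmented view
positive  = np.array([0.9, 0.1])        # the other view of the same image
negatives = [np.array([0.0, 1.0]),      # unrelated images from the batch
             np.array([-1.0, 0.2])]

print(nt_xent_pair_loss(anchor, positive, negatives))  # near 0: pairs well separated
```

Training the encoder to minimize this loss over many batches is what produces the embedding space in which positive pairs rank highest.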