At Netigate, transparency and customer privacy are our top priorities – particularly regarding the use of AI and generative technologies such as Large Language Models (LLMs). This policy clearly explains how we handle Customer Data (as defined in our Terms of Service) concerning AI-driven solutions and details the measures we employ to protect customer information.
Customer Data is used within our AI systems solely in two scenarios: quality assurance and product improvement.
In both cases, we use only anonymised customer data, which cannot be traced back to identifiable persons or entities and therefore does not constitute personally identifiable information (PII). This anonymisation process ensures that PII remains fully confidential and inaccessible to testing staff.
We only process Customer Data in accordance with our contractual agreements with customers. Any further usage of data beyond the initial scope agreed upon with customers is based solely on our legitimate interests, primarily aimed at improving our services and ensuring their reliability and performance. This processing strictly adheres to applicable data protection laws, including the General Data Protection Regulation (GDPR). Detailed information on our Technical and Organizational Measures (TOMs) for data protection can be found here.
Protecting your privacy while providing high-quality services is our core commitment.
Our AI service partners, including Azure and OpenAI on Azure, follow stringent EU-based data retention policies, deleting all customer-related data immediately after use, except where retention is legally necessary. Additionally, our suppliers do not use any customer data to train their AI models, and we maintain Data Processing Agreements (DPAs) with them to ensure strict adherence to data protection standards.
We employ a robust anonymisation pipeline that ensures customer data is fully anonymised before any processing within AI or analytics systems. The pipeline consists of three steps:
Metadata Removal
All metadata linking text back to original users or sources, such as user IDs and timestamps, is removed at the outset. Multiple interactions from the same user are handled separately to prevent correlation.
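The sketch below illustrates what this step might look like in practice. It is a minimal illustration under an assumed record layout; the field names are invented for this example and are not Netigate's actual schema.

```python
from typing import Any

# Hypothetical fields that could link a response back to a user or source.
# These names are illustrative only, not an actual schema.
LINKING_FIELDS = {"user_id", "email", "ip_address", "timestamp", "source"}

def strip_metadata(record: dict[str, Any]) -> dict[str, Any]:
    """Keep only fields that carry no link to the original user or source."""
    return {k: v for k, v in record.items() if k not in LINKING_FIELDS}

def prepare_responses(records: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Strip each interaction independently, so multiple responses from the
    same user cannot be correlated with one another afterwards."""
    return [strip_metadata(r) for r in records]
```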
Dictionary-Based PII Detection and Replacement
Our system uses comprehensive dictionaries and rule-based detection (not generative AI) to identify and anonymise personal data, including names, addresses, and location references. Identified data is replaced with tokens such as <PERSON> or <ADDRESS>. For locations, size-based categories (e.g., <CITY (Large)>) may be applied without revealing specifics; a minimal sketch of this step follows the worked example below.
Statistical Refinement
To enhance accuracy and minimise false positives, statistical checks refine our dictionary-based methods. These checks primarily update dictionaries and detection rules offline, rather than analysing live data.
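As a hedged illustration of how such offline checks might work, the sketch below drops dictionary entries whose historical false-positive rate exceeds a threshold. The counters and the threshold are assumptions made for this example; the point is that the statistics feed back into the rules, never into live customer data.

```python
from collections import Counter

def refine_dictionary(entries: set[str],
                      fired: Counter,            # how often each entry matched in offline samples
                      false_positives: Counter,  # how often a match was judged incorrect on review
                      max_fp_rate: float = 0.2) -> set[str]:
    """Keep only dictionary entries whose false-positive rate stays under the threshold."""
    kept = set()
    for entry in entries:
        n = fired[entry]
        fp_rate = false_positives[entry] / n if n else 0.0
        if fp_rate <= max_fp_rate:
            kept.add(entry)
    return kept
```

The worked example below shows what a response looks like before and after the full pipeline runs.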
Example
Original:
“We drove to Stockholm for a quick getaway. The city is always busy, and we stayed at Johanna’s apartment on Kungsgatan. We had a great time until the faucet broke and we had to call the landlord, Oskar, for help.”
Anonymised:
“We drove to <CITY (Large)> for a quick getaway. The city is always busy, and we stayed at <PERSON>’s apartment on <ADDRESS>. We had a great time until the faucet broke and we had to call the landlord, <PERSON>, for help.”
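To make the replacement step concrete, here is a minimal sketch that reproduces the example above with a tiny hand-built dictionary. The real system relies on comprehensive dictionaries and rules; the four entries below are taken from the example text only.

```python
import re

# Tiny illustrative dictionary; production dictionaries are far more comprehensive.
# City entries map to a size category rather than to the city name itself.
DICTIONARY = {
    "Stockholm": "<CITY (Large)>",
    "Johanna": "<PERSON>",
    "Oskar": "<PERSON>",
    "Kungsgatan": "<ADDRESS>",
}

def anonymise(text: str) -> str:
    """Replace every whole-word dictionary hit with its token, longest term first."""
    for term in sorted(DICTIONARY, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(term)}\b", DICTIONARY[term], text)
    return text

original = ("We drove to Stockholm for a quick getaway. The city is always "
            "busy, and we stayed at Johanna's apartment on Kungsgatan.")
print(anonymise(original))
# We drove to <CITY (Large)> for a quick getaway. The city is always
# busy, and we stayed at <PERSON>'s apartment on <ADDRESS>.
```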
Through this approach, we ensure comprehensive privacy while maintaining service excellence and innovation.
Prevention of De-Anonymisation or Re-Identification
Through the multi-step approach mentioned above—metadata removal, dictionary-based PII detection and replacement, and statistical refinement—Netigate ensures that no identifiers remain within the data. Because each layer of anonymisation is designed to break the link between responses and specific individuals, any attempts to correlate or match this data with external sources will fail. In other words, the combination of stripping metadata, tokenising personal details, and isolating testing environments guarantees that anonymised data cannot be reverse-engineered to uncover respondents’ PII.
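One way such a guarantee can be backed operationally is a final verification scan that rejects any output still containing a known identifier or an obvious PII pattern. The sketch below is an illustration under assumed patterns, not a description of Netigate's actual checks.

```python
import re

# Illustrative residual-PII patterns only; a production check would be far broader.
RESIDUAL_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # e-mail addresses
    re.compile(r"\+?\d[\d\s-]{7,}\d"),       # phone-number-like digit runs
]

def verify_anonymised(text: str, dictionary: dict[str, str]) -> bool:
    """Return True only if no dictionary term or PII-like pattern survives."""
    if any(re.search(rf"\b{re.escape(term)}\b", text) for term in dictionary):
        return False
    return not any(p.search(text) for p in RESIDUAL_PATTERNS)
```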