AI Usage & Safety Policy

Last updated: February 3, 2026

At Polcode, we believe Artificial Intelligence (AI) is a powerful accelerator for creativity, engineering, and operational efficiency. However, AI should serve our team and our clients, not replace the judgment, security, and expertise that define our work.

We are committed to using AI tools in a conscious, transparent, and safe manner. This policy outlines how we govern the use of AI in client projects and our internal operations. 

Our Core Philosophy: Human-in-the-Loop 

We use AI as a drafting, research, and coding assistant, never as a final decision-maker.

  • We Own the Output: Whether a line of code is written by a human engineer or suggested by an AI-powered tool, Polcode assumes full responsibility for its quality, security, and functionality. We do not blame tools for errors. 

  • Verification is Mandatory: Every output generated by AI (code, copy, or strategy) is reviewed, tested, and validated by a qualified team member before it enters the client environment or becomes a final deliverable. 

Data Privacy & The Sanitization Standard 

We use the Business tiers of commercial AI providers. While these tiers offer enhanced privacy protections (such as not using data for model training by default), we apply an additional layer of safety through our internal "Sanitization First" protocol. Our AI usage aligns with applicable data protection and AI-related regulations, and our tool configurations are designed to be GDPR-ready.

  • No PII or Secrets: We do not input Personally Identifiable Information (PII), unencrypted client credentials (API keys, passwords), or core trade secrets into Large Language Models (LLMs). 

  • Anonymized Context: When we need AI to assist with a specific client challenge, we sanitize the inputs using generic variable names, dummy data, or abstract descriptions to ensure your proprietary context remains secure (illustrated in the first sketch after this list).

  • Synthetic Data: For automated workflows and testing, we use synthetic (fake) data rather than live production customer data (also sketched after this list).
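For illustration only, here is a minimal Python sketch of what a "Sanitization First" pre-processing step can look like before any text reaches an external LLM. The regular expressions, placeholder labels, and the sanitize_prompt helper are hypothetical examples for this page, not a description of our production tooling; a real project would maintain a broader, project-specific pattern list.

```python
import re

# Hypothetical patterns illustrating the kinds of values that are stripped
# before any text is sent to an external LLM.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace PII and secret-like values with neutral placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

if __name__ == "__main__":
    raw = (
        "Customer jane.doe@example.com reports 502 errors from 10.0.3.14 "
        "when calling the API with key sk-1234567890abcdef1234."
    )
    print(sanitize_prompt(raw))
    # -> Customer <EMAIL> reports 502 errors from <IP_ADDRESS>
    #    when calling the API with key <API_KEY>.
```

In practice, the sanitized text keeps enough technical context (error codes, structure, behaviour) for the model to be useful, while the identifying details never leave our environment.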
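In the same spirit, the following sketch shows how synthetic test fixtures can stand in for production records. The field names, value ranges, and the synthetic_customer helper are invented for illustration; real fixtures mirror the project's actual schema without copying any live values.

```python
import random
import string
from datetime import date, timedelta

FIRST_NAMES = ["Alex", "Kim", "Sam", "Jordan", "Robin"]

def synthetic_customer(customer_id: int) -> dict:
    """Return a fake customer record that is safe to use in tests and demos."""
    return {
        "id": customer_id,
        "name": f"{random.choice(FIRST_NAMES)} Test{customer_id}",
        "email": f"user{customer_id}@example.test",
        "signup_date": (date(2024, 1, 1)
                        + timedelta(days=random.randint(0, 365))).isoformat(),
        "api_key": "fake-" + "".join(random.choices(string.ascii_lowercase, k=12)),
    }

if __name__ == "__main__":
    # Generate a handful of disposable records for an automated test run.
    for record in (synthetic_customer(i) for i in range(1, 4)):
        print(record)
```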

Our Tool Stack 

Transparency is the foundation of trust. We run our vetted toolset on Business-tier commercial plans to ensure that data privacy controls are in place. Before incorporating AI into a project, we discuss it with our clients, giving them the final say on how these tools are used.

Client Control & Opt-Out 

We understand that every client has a different risk tolerance regarding AI. 

  • Client Approval First: AI tools are used in client projects only where contractually permitted or with explicit client approval. 

  • Default Usage Scope: When approved, AI is applied to well-defined activities such as drafting boilerplate code, formatting data, summarizing information, or preparing internal documentation, always under human supervision. 

  • Your Right to Opt-Out: If your organization prohibits or restricts the use of generative AI, please inform your Delivery Manager. We will strictly follow a “No-AI” protocol for your project or specified deliverables.