AI Transparency Policy
Last updated: 14 January 2026
1. Introduction
Mode Standard designs, builds, and operates intelligent systems. This AI Transparency Policy explains how we use artificial intelligence and how we approach AI implementation for our clients.
What this Policy covers: How Mode Standard uses AI in our own operations; How we design and build AI systems for clients; AI capabilities and limitations we work within; Data handling in AI contexts; Ethical principles and governance; Human oversight and accountability; Our commitment to transparency.
Why transparency matters: AI systems can be opaque, unpredictable, and misunderstood. Organisations deploying AI have a responsibility to be clear about: What AI can and cannot do; How AI systems make decisions (at the right level of detail); What data they use and why; Who is accountable when things go wrong; How systems are governed, monitored, and improved.
Read this Policy alongside our Privacy Policy, Terms and Conditions, and Cookie Policy.
2. Our Approach to AI
What we mean by "AI": We use "AI" and "intelligent systems" to describe technologies including: Large Language Models (LLMs) for natural language processing; Machine learning models for pattern recognition and prediction; Computer vision for image and video analysis; Multi-agent systems for workflow automation; Decision-support systems combining rules and ML; Retrieval-Augmented Generation (RAG) for knowledge systems.
We distinguish between: Generative AI (systems that create content - text, images, code); Analytical AI (systems that analyse, classify, or predict); Automation (systems that execute tasks based on defined logic).
Our philosophy: Practical over theoretical - We focus on what works reliably today. We don't deploy experimental approaches for critical business functions without appropriate controls and agreement. Augmentation, not replacement - AI amplifies human capability. It does not replace human judgement, accountability, or oversight for important decisions. Transparent about limitations - We are explicit about what AI cannot do. We don't oversell capability or promise certainty where it doesn't exist. Governed by design - Governance, monitoring, and human oversight are designed in from day one - not bolted on later.
3. How Mode Standard Uses AI
We use AI in our own operations. Here's what we use it for and how we govern it.
Internal productivity and operations: We use AI for drafting and refining proposals, reports, and documentation; Analysing requirements and structuring problem statements (with permission where client data is involved); Prototyping and exploring technical approaches; Code assistance and development productivity; Research and information synthesis; Content ideation and editing; Note-taking and documentation organisation.
How we govern internal use: We do not input client confidential information into public AI tools without explicit permission. All AI-assisted outputs are reviewed by humans before external or client-facing use. Staff are trained on appropriate AI use, limitations, and data handling. We apply approval checks for introducing new AI tools and vendors.
How we select tools: We evaluate tools based on: Data protection and security standards; UK GDPR alignment and contractual terms; Whether data is used for training; Vendor stability and reputation; Practical productivity benefit. Specific tools change as technology evolves. You can request details of current tool categories at hello@modestandard.com.
Website and communications: Website copy may be drafted or refined using AI tools (human reviewed). Marketing materials may use AI for ideation or editing (human reviewed). Email drafting may be AI-assisted (human reviewed). We do not: Impersonate humans using AI in direct client communications; Deploy chatbots without clear disclosure; Generate fake testimonials or fabricate case studies; Present AI-generated content as fully human-created where that would materially mislead decision-making.
At the time of writing, our website does not provide interactive AI features (such as chatbots or AI-driven personalisation). If we add AI features, we will update this Policy and provide clear disclosure.
Client engagement process: During discovery and design, we may use AI to help analyse client-provided information where permitted and appropriate. AI may accelerate prototyping and exploration of solution options. All outputs are validated by human experts.
Client data protection: Client data is processed under formal agreements and instructions. We do not use client data to train public models without explicit written agreement. Confidential information is processed only in appropriate environments with agreed controls. We document AI processing of client data within engagement documentation.
4. How We Build AI Systems for Clients
Design principles: Fit for purpose - We match the solution to the problem. If rules-based logic is better, we recommend that. Human-in-the-loop - For consequential decisions (e.g., hiring, credit, health, legal), we design systems that support humans, not replace them. Explainability - We prioritise explainable approaches, especially in regulated or high-stakes contexts. Testable and measurable - We define success metrics, test plans, and monitoring from the outset. Governed from day one - Security, privacy, and accountability are built in by design.
Our implementation methodology - 3De Framework: Decode. Design. Deploy. Evolve. Decode (Understand the now): Map current processes and identify where AI adds genuine value; Assess data availability, quality, and compliance requirements; Evaluate risks, limitations, and ethical considerations; Prioritise opportunities using our Impact-Friction approach; Produce an AI readiness and implementation plan.
Design (Plan and prototype): Define target outcomes and success metrics; Design system architecture and data flows; Select appropriate technologies and vendors; Prototype to validate approach; Define governance, testing, and monitoring requirements.
Deploy (Integrate and operationalise): Build production-grade systems with security, logging, and monitoring; Integrate with existing systems and workflows; Train teams on use, oversight, and limitations; Enable alerting and feedback loops; Document systems comprehensively and hand over responsibly.
Evolve (Improve and expand): Monitor performance against defined metrics; Capture feedback and failure modes; Improve prompts/models/logic and governance controls; Scale what works and retire what doesn't.
Technologies and vendors: We may work with a range of providers, including (where appropriate): OpenAI, Anthropic, Google, Microsoft (Azure OpenAI), and open-source models (e.g., Llama); Cloud AI platforms (e.g., AWS and Google Cloud services); Open-source tooling (e.g., Hugging Face); Specialist APIs for vision, speech, or search. We maintain vendor independence. We recommend technologies based on client needs, risk profile, and operational fit, not commissions.
What we tell end users: When systems interact with end users, we design for: Clear disclosure - Users know when they're interacting with AI; Capabilities and limits - Plain-language explanation of what it does and doesn't do; Human escalation - A human route is available where appropriate; Feedback and controls - Clear mechanisms for feedback and review; Opt-out where feasible - Users can decline AI interaction where practical.
5. AI Capabilities and Limitations
What AI is good at today: Pattern recognition across large datasets; Summarisation and transformation of information; Drafting and generating content at scale (with review); Classification, routing, and triage; Assisting with repetitive or high-volume tasks; Identifying anomalies and inconsistencies.
What AI is not good at: Genuine understanding or human judgement; Reliable "common sense" across contexts; Ethical decision-making; Taking accountability; Consistent performance in novel edge cases; Explaining its internal reasoning in a reliable, auditable way (for many models).
Known risks: Hallucinations and confident errors; Bias and unfair outcomes; Data dependency and performance drift; Security risks (prompt injection, adversarial misuse); Privacy risks and data leakage; Opacity in model behaviour. For critical use cases, we design controls to reduce risk and require human verification where appropriate.
6. Data and Privacy in AI Systems
Client data: We apply: Data minimisation (only what's required); Purpose limitation (only what's agreed); Retention and deletion (defined, enforceable, and documented).
Security measures: These typically include: Encryption in transit and at rest (where applicable); Authentication and access controls; Audit logs and monitoring; Regular security reviews appropriate to the system.
Training data vs inference data: Training data builds or fine-tunes models. We do not use sensitive client data for training without explicit consent and agreement. Inference data is submitted during usage (prompts, queries, uploads). We document handling and retention for each system and vendor used.
Data processing agreements: Where vendors process client data, we ensure: Appropriate data processing terms and agreements are in place; Data flows are documented; Retention and controls are understood and communicated.
Personal data and high-stakes contexts: For special category data or high-stakes decisions, we require: A clear lawful basis and additional controls; DPIAs where required; Human oversight and review processes; Regular testing for accuracy and fairness.
7. Ethical Principles and Governance
Our ethical commitments: Fairness - design to avoid discrimination and harmful bias; Transparency - clear disclosure and honest limitations; Accountability - humans remain accountable for outcomes; Privacy - privacy-by-design and data minimisation; Safety - safeguards against misuse and harm; Value - solve real problems, not AI theatre.
Governance framework: For client implementations we establish, proportionate to risk: Named owners (business and technical); Policies (acceptable use, incident response, change control); Monitoring and testing (performance, accuracy, bias where relevant); Documentation (architecture, data flows, risks, mitigations, runbooks).
Risk management: We assess: Operational risk (failure, downtime, drift); Ethical risk (bias, discrimination, harmful outputs); Legal/regulatory risk (GDPR, sector rules, IP); Reputational risk (trust and public impact); Security risk (breach, manipulation, misuse).
8. Human Oversight and Accountability
Human-in-the-loop (where stakes require it): AI provides recommendations; humans make final decisions. Clear escalation paths for uncertain cases. Override capability (logged and reviewed). Ongoing sampling and review of outputs.
Accountability structures: Every AI system has named owners responsible for: Performance and reliability; Compliance and governance; Handling user concerns; Continuous improvement.
No "algorithm made me do it": AI is a tool. Organisations deploying it remain responsible for: Choosing to deploy it; Selecting vendors and settings; Monitoring performance; Addressing failures and harm.
9. Transparency with Clients and End Users
Client transparency: We provide (as appropriate): System documentation and data flow visibility; Testing outcomes and known limitations; Monitoring and incident processes; Governance materials and operator training.
End-user transparency: Where relevant, end users receive: Clear AI disclosure; Plain-language explanation of what the system does; Information about data usage; Support routes and (where applicable) human review.
Regulatory alignment: We design systems with UK GDPR and relevant sector requirements in mind, and adapt approaches as regulation develops.
10. Intellectual Property and AI
AI-generated content: AI-generated outputs may have unclear copyright status in some contexts. We document when content is AI-assisted where that matters. We do not claim human authorship for AI-generated work where that would mislead. Client ownership and usage rights are governed by contracts.
Training data and IP: Client data remains client property. We don't use client data to train public models without permission. Our methodologies and frameworks remain Mode Standard IP (see Terms and Conditions).
11. Continuous Improvement
We improve systems through: Monitoring and feedback; Incident tracking and root cause analysis; Testing changes before rollout; Updating models/tools as vendors improve them (with appropriate governance).
We review this Policy at least annually, and sooner if: Our AI use changes materially; New regulations take effect; Significant incidents or best-practice shifts occur.
12. Limitations of This Policy
This Policy describes our approach and commitments. It: Does not override contractual agreements; Is not legal advice; Does not guarantee outcomes (AI can fail despite best practice). Any warranties are defined in client contracts, not in this Policy.
13. Questions, Concerns, and Feedback
General policy questions: Email hello@modestandard.com with subject "AI Transparency Policy".
Concerns about AI implementation: Email hello@modestandard.com with subject "AI Implementation Concern".
End users of systems we've built: Please contact the organisation operating the system. If you cannot get an answer, contact us and we will direct you appropriately.
14. Additional Resources
ICO AI guidance: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
Alan Turing Institute: https://www.turing.ac.uk/research/research-areas/ai
UK Government publications: https://www.gov.uk/government/publications/
15. Contact Information
Email: hello@modestandard.com
Company: Mode Standard Limited
Registered Office: 71-75 Shelton Street, Covent Garden, London WC2H 9JQ
Company Number: 16849743
For AI-specific queries, mark your email "Attention: AI Transparency".
Document Control: Version 1.0 | Effective 14 January 2026 | Last reviewed: 14 January 2026