REF: TEMPLATE_POLICY_v2
Governance Framework

Employee AI Acceptable Use Policy

Standard operating procedure for the safe, compliant, and effective use of Generative AI tools in the workplace.

1. Core Principles

1. Accountability

Humans are fully responsible for all AI-generated output. "The AI made a mistake" is not a valid defense for errors, bias, or plagiarism. You must verify everything.

2. Transparency

We do not hide AI use from clients or colleagues when it materially impacts the deliverable. If a report is 80% AI-generated, it must be disclosed.

2. Data Security Classification

🚫 Red Zone

Strictly Prohibited

NEVER paste this data into public models (ChatGPT, Claude, Gemini).

  • PII: Customer names, emails, addresses, SSNs.
  • Financials: Raw P&L, bank details, unreleased earnings.
  • Credentials: Passwords, API keys, AWS secrets.
  • IP: Unreleased source code, patent drafts, trade secrets.
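Before pasting anything into a public model, a lightweight pre-paste check can catch the most obvious Red Zone patterns. A minimal sketch (the function name, pattern set, and regexes here are illustrative, not an IT-approved tool — real PII detection needs far broader coverage):

```python
import re

# Hypothetical pre-paste scanner: flags obvious Red Zone patterns
# (emails, US SSNs, AWS access key IDs) before text reaches a public model.
RED_ZONE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def red_zone_hits(text: str) -> list[str]:
    """Return the names of Red Zone patterns found in `text`."""
    return [name for name, pat in RED_ZONE_PATTERNS.items() if pat.search(text)]

print(red_zone_hits("Contact jane.doe@example.com, SSN 123-45-6789"))
# → ['email', 'ssn']
```

A non-empty result means stop: the text must not be pasted. An empty result is not clearance — it only means the scanner found nothing, and the human remains accountable.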

⚠️ Yellow Zone

Caution Required (Sanitize First)

Allowed ONLY if specific identifiers are removed/anonymized.

  • Internal Comms: Meeting notes (remove names).
  • Strategy Docs: Marketing plans (remove client names).
  • Code Snippets: Generic functions (remove proprietary logic).
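"Sanitize first" can be partly mechanized. A minimal sketch of a scrub pass (the `sanitize` helper and its placeholder scheme are hypothetical — known names must still be supplied by the employee, and the output still needs a human read-through):

```python
import re

def sanitize(text: str, names: list[str]) -> str:
    """Hypothetical Yellow Zone scrub: mask known names and any email
    addresses before the text is shared with a public model."""
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[PERSON_{i}]", text, flags=re.IGNORECASE)
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

print(sanitize("Action: Priya to email acme@client.com", ["Priya"]))
# → Action: [PERSON_1] to email [EMAIL]
```

Pattern-based scrubbing misses indirect identifiers (job titles, project codenames), so sanitization is a floor, not a substitute for judgment.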

✅ Green Zone

Approved for Use

Safe to use with public models.

  • Public Data: Information already on our website.
  • Ideation: Brainstorming blog topics or outlines.
  • Drafting: Writing generic emails or social posts.
  • Editing: Fixing grammar or tone of existing text.

3. Operational Protocols

Policy 06

Code Generation

AI-generated code must be reviewed by a senior engineer before it is merged. Do not copy-paste it blindly: check for security vulnerabilities and for hallucinated APIs or logic.
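One of the most common vulnerabilities reviewers catch in AI-generated code is user input interpolated directly into SQL. A minimal sketch of the pattern to require instead — placeholder binding rather than string formatting (the table and helper here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    # Safe: the driver binds `name` as a literal via the ? placeholder,
    # instead of f-string interpolation into the SQL text.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))          # [('alice',)]
print(find_user("' OR '1'='1"))    # [] -- the injection payload matches nothing
```

The equivalent f-string version (`f"... WHERE name = '{name}'"`) would return the whole table for that second input; a reviewer should reject it on sight.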

Policy 07

Fact Checking

LLMs hallucinate. Any statistic, quote, or fact generated by AI must be cross-referenced with a primary source before publishing.

Policy 08

Bias & Tone

Review outputs for gender, racial, or cultural bias. Ensure the tone aligns with our Brand Voice guidelines (Direct, Minimal, Confident).

Policy 09

Tool Approval

Do not sign up for new "Shadow AI" tools with corporate credentials. Only IT-approved vendors (OpenAI Enterprise, Copilot) are permitted.

Policy 10

Training Data Exclusion

Ensure "Chat History & Training" is disabled in settings where possible (e.g., ChatGPT Settings). We do not opt in to training public models on our data.

This policy is a living document. It will evolve as models evolve.

VER: 2.1 | OWNER: CISO / OPS | REVIEW: QUARTERLY
