Employee AI Acceptable Use Policy
Standard operating procedure for the safe, compliant, and effective use of Generative AI tools in the workplace.
Core Principles
1. Accountability
Humans are fully responsible for all AI-generated output. "The AI made a mistake" is not a valid defense for errors, bias, or plagiarism. You must verify everything.
2. Transparency
We do not hide AI use from clients or colleagues when it materially impacts the deliverable. If a report is 80% AI-generated, it must be disclosed.
Data Security Classification
Strictly Prohibited
NEVER paste this data into public models (ChatGPT, Claude, Gemini).
- PII: Customer names, emails, addresses, SSNs.
- Financials: Raw P&L, bank details, unreleased earnings.
- Credentials: Passwords, API keys, AWS secrets.
- IP: Unreleased source code, patent drafts, trade secrets.
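One practical safeguard is a lightweight pre-paste check that flags obvious prohibited data before text leaves your machine. The sketch below is illustrative only, not an approved tool; the patterns are hypothetical examples and nowhere near an exhaustive blocklist.

```python
import re

# Illustrative patterns only -- a vetted scanner should be used in practice.
PROHIBITED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic secret": re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
}

def scan_for_prohibited(text: str) -> list[str]:
    """Return the names of prohibited-data patterns found in `text`."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

# Usage: if any pattern trips, do not paste the text into a public model.
findings = scan_for_prohibited("Contact jane@example.com, SSN 123-45-6789")
# findings -> ["SSN", "Email"]
```

A check like this catches accidents, not adversaries; it supplements the classification rules above rather than replacing judgment.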
Caution Required (Sanitize First)
Allowed ONLY if specific identifiers are removed/anonymized.
- Internal Comms: Meeting notes (remove names).
- Strategy Docs: Marketing plans (remove client names).
- Code Snippets: Generic functions (remove proprietary logic).
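The sanitize-first step above can be sketched as a simple redaction pass. This is a minimal example under assumed rules: the placeholder tokens and the name list are hypothetical, and real anonymization should be reviewed before anything is shared.

```python
import re

def sanitize(text: str) -> str:
    """Replace common identifiers with neutral placeholders before sharing."""
    # Illustrative rules only -- extend with approved patterns for your team.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # Hypothetical name list; in practice, pull from a maintained roster
    # of client and employee names.
    for name in ("Acme Corp", "Jane Doe"):
        text = text.replace(name, "[REDACTED]")
    return text

print(sanitize("Jane Doe (jane@acme.com) approved the Acme Corp plan."))
# -> "[REDACTED] ([EMAIL]) approved the [REDACTED] plan."
```

Run the output past a second reader: regex-based redaction misses indirect identifiers (job titles, project codenames) that can still reveal who or what is being discussed.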
Approved for Use
Safe to use with public models.
- Public Data: Information already on our website.
- Ideation: Brainstorming blog topics or outlines.
- Drafting: Writing generic emails or social posts.
- Editing: Fixing grammar or tone of existing text.
Operational Protocols
Code Generation
AI-generated code must be reviewed by a senior engineer. Do not blindly copy-paste. Check for security vulnerabilities and hallucinated logic, such as calls to library functions that do not exist.
Fact Checking
LLMs hallucinate. Any statistic, quote, or fact generated by AI must be cross-referenced with a primary source before publishing.
Bias & Tone
Review outputs for gender, racial, or cultural bias. Ensure the tone aligns with our Brand Voice guidelines (Direct, Minimal, Confident).
Tool Approval
Do not sign up for new "Shadow AI" tools with corporate credentials. Only IT-approved vendors (OpenAI Enterprise, Copilot) are permitted.
Training Data Exclusion
Ensure "Chat History & Training" is disabled in settings where possible (e.g., ChatGPT Settings). We do not opt-in to training public models with our data.
This policy is a living document. It will evolve as models evolve.