Standard operating procedure for the safe, compliant, and effective use of Generative AI tools in the workplace.
Humans are fully responsible for all AI-generated output. "The AI made a mistake" is not a valid defense for errors, bias, or plagiarism. You must verify everything.
We do not hide AI use from clients or colleagues when it materially impacts the deliverable. If a report is 80% AI-generated, it must be disclosed.
Restricted or confidential data: NEVER paste it into public models (ChatGPT, Claude, Gemini).
Internal data: allowed ONLY if specific identifiers are removed or anonymized first.
Public data: safe to use with public models.
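Before internal data reaches a public model, identifiers must be stripped. A minimal sketch of that pre-processing step, using regex patterns for a few common identifier types (the patterns and the `redact` helper are illustrative assumptions, not a vetted PII scrubber; real deployments should use a dedicated redaction library):

```python
import re

def redact(text: str) -> str:
    # Illustrative patterns only -- production use needs a vetted PII library.
    patterns = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
        "PHONE": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
        "SSN":   r"\b\d{3}-\d{2}-\d{4}\b",
    }
    # Replace each match with a labeled placeholder, e.g. [EMAIL].
    for label, pattern in patterns.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@acme.com or 555-123-4567."))
# → Reach Jane at [EMAIL] or [PHONE].
```

Run the scrubbed text, not the original, through the model; keep the original-to-placeholder mapping internally if you need to restore identifiers in the output.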
AI-generated code must be reviewed by a senior engineer. Do not blindly copy-paste. Check for security vulnerabilities and hallucinated logic.
LLMs hallucinate. Any statistic, quote, or fact generated by AI must be cross-referenced with a primary source before publishing.
Review outputs for gender, racial, or cultural bias. Ensure the tone aligns with our Brand Voice guidelines (Direct, Minimal, Confident).
Do not sign up for new "Shadow AI" tools with corporate credentials. Only IT-approved vendors (OpenAI Enterprise, Copilot) are permitted.
Ensure "Chat History & Training" is disabled in settings where possible (e.g., ChatGPT Settings). We do not opt-in to training public models with our data.
This policy is a living document. It will evolve as models evolve.
Need help implementing this governance?
Book a Governance Audit →