This policy establishes how [Client Name] will use artificial intelligence (AI) systems safely and responsibly.
It applies to all employees, contractors, and third parties accessing systems or data.
Systems covered include: generative AI, copilots, LLM chat tools, AI agents, model APIs, fine-tuned models, and any automation substantially guided by AI.
AI as Outsourcing: Treat AI tools as external processors subject to third-party risk management.
Data Minimization by Default: Share only what is needed and redact sensitive information.
Process Readiness First: Update business processes before or with AI adoption.
Experiment with Guardrails: Encourage pilots in controlled sandboxes.
Human Judgment on High-Impact Outputs: Require structured review for decisions affecting people, money, or safety.
Strict Prohibited and Sensitive Use Controls: Ban harmful or regulated uses without explicit approval.
AI use is permitted only through approved company-managed platforms and accounts. Use of free or consumer-grade AI tools (e.g., public chatbots or AI apps that may use inputs for model training) is strictly prohibited for any company or client data.
Use only company-provided, enterprise-grade AI tools configured not to share or train on submitted data.
Never input confidential, customer, or proprietary data into personal, public, or unapproved AI tools.
Confirm that each approved tool provides:
Data protection and encryption in transit and at rest.
Explicit assurance that submitted content is not used for public or general model training.
Vendor compliance with security and privacy standards (SOC 2, ISO 27001, or equivalent).
Ensure all AI tool access uses company-managed identities and multi-factor authentication (MFA).
Log all AI interactions for traceability and auditability.
For questions about encryption and other IT-related items, please speak with your vCIO at Advanced Data.
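To make the logging requirement above concrete, here is a minimal sketch of one way an AI interaction could be recorded, assuming a local JSON-lines file and a `log_ai_interaction` helper (both hypothetical, not a mandated implementation). Prompt and response are stored as SHA-256 digests so the audit trail itself never holds confidential content:

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def log_ai_interaction(tool: str, prompt: str, response: str,
                       path: str = "ai_audit.jsonl") -> dict:
    """Append one structured audit record per AI interaction.

    Prompt and response are hashed, not stored, so the log supports
    traceability without becoming a second copy of sensitive data.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": os.getenv("USER", "unknown"),  # company-managed identity in practice
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

In a real deployment these records would ship to a central log platform or SIEM rather than a local file, and the user field would come from the company identity provider rather than an environment variable.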
Approved AI Tools and Ownership
Approved AI Tools List: [Insert approved company AI platforms, e.g., OpenAI Enterprise ChatGPT, Microsoft Copilot, Google Vertex AI, etc.]
Tool Owner / Administrator: [Insert responsible department or role, e.g., IT Security or AI Program Manager.]
Policy: Minimize, mask, or remove data before it reaches AI systems.
Default deny for PII, PHI, PCI, authentication secrets, and confidential IP unless explicitly approved.
Use automatic redaction or synthetic data for testing and training.
Classify and store outputs according to the classification rules of their source data.
Never paste credentials, private keys, or security configs into AI tools.
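As an illustrative sketch of the redaction step above (the patterns and function name are assumptions; a real deployment should use a vetted, IT-approved DLP or redaction library), common sensitive strings can be masked before a prompt ever leaves the company boundary:

```python
import re

# Illustrative patterns only -- not a complete or production-grade PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Mask common sensitive patterns before text reaches an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

redacted = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
# -> "Contact [EMAIL REDACTED], SSN [SSN REDACTED]."
```

Regex masking like this catches only well-formed identifiers; it should complement, not replace, the default-deny rule for PII, PHI, PCI, and secrets stated above.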
AI is introduced only where a named process owner accepts responsibility for outcomes and maintenance.
Each use case has documented purpose, data classes, risk assessment, metrics, rollback plan, and human checkpoints.
Maintain living process documentation and review it regularly.
Policy: Encourage learning through controlled pilots; unauthorized “shadow AI” is not permitted.
Conduct pilots in isolated or sandbox environments.
Use de-identified or lowest-classification data.
Require a basic security and privacy review before a pilot begins.
Promote to production only after approval and documented success.
Policy: AI outputs that could affect customers, finances, people, safety, or security must be reviewed by a qualified human.
High-Impact Examples
Customer communications, legal or HR decisions, financial transactions, security actions, and code deployments.
Reviewer Checklist
Verify facts and sources; check for hallucinations or bias.
Evaluate alternatives and ethical considerations.
Confirm security and privacy impacts.
Record approver name, date, and final decision.
Prohibited
Generating or facilitating illegal activity, fraud, self-harm, biological design, or unsafe medical/legal/financial advice.
Use of unapproved or free AI tools for any company or client data.
Sensitive (Pre-Approval Required)
Employment and HR decisions.
Medical or benefits determinations.
Legal analysis or litigation strategy.
M&A or other material non-public information.
Safety-critical operations or infrastructure.
Approvals must come from the relevant domain owner (HR, Legal, Compliance, or Executive leadership).
Training & Awareness
Annual training on safe prompting, data minimization, and reporting procedures.
CyberWatch customers can receive this training directly as part of their Security Awareness Training.
Quick-reference guides covering “what data can be shared,” “when human review is required,” and “how to request a pilot.”