AI Policy
1. Purpose
Defines how Sensei teams may use AI tools to develop Sensei IQ while safeguarding client data, IP, and security.
2. Scope
Applies to all Sensei staff and contractors working on Sensei IQ and related assets. Covers all AI tools used, including GitHub Copilot, Claude, and Microsoft Copilot. Client-specific AI use must follow both this policy and the client’s policy.
3. Approved AI Tools
- GitHub Copilot: IDE code suggestions, PR reviews, and test generation within Sensei repositories.
- Microsoft Copilot for Microsoft 365: Summarizing and drafting content in Sensei’s or client tenants (for use in a client tenant, the Sensei Engagement Lead account must be licensed and usage must adhere to client policies).
- Claude: Ideation and problem-solving, with strict data rules.
- Additional tools require approval from leadership.
4. Principles
- Human accountability for AI use and output.
- AI provides input, not authority.
- Security and privacy take priority.
- Transparency around AI involvement is required.
5. Permitted Use
Allowed uses (subject to the data handling rules in Section 7):
- Generate or refactor code, tests, and deployment scripts.
- Draft specifications, design documents, communications, and meeting summaries.
- Suggest solution options and test cases.
- Any artifact generated through an AI agent must be reviewed and accepted by a human before merging, sharing, or publishing.
6. Prohibited Use
Not allowed:
- Entering secrets or identifiable client information into public AI tools.
- Uploading client content outside Sensei or client tenants.
- Merging unreviewed AI code or bypassing controls.
- License or IP violations, or replicating competitors’ features.
When unsure whether a use is permitted, consult leadership.
7. Data Handling Rules
- Prefer Microsoft-hosted copilots for internal and client work.
- For public AI tools, use only anonymized or synthetic data and avoid sensitive architecture details (a minimal redaction sketch follows this list).
- Use GitHub Copilot only in Sensei enterprise environments, and do not include sensitive values in prompts when requesting suggestions.
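The sketch below illustrates the kind of redaction pass that can be run over text before it is pasted into a public AI tool. The patterns and the redact_for_public_ai helper are hypothetical examples for illustration, not an approved Sensei utility; anonymization still requires human judgment.

```python
import re

# Illustrative only: these patterns and this function are hypothetical examples,
# not an approved Sensei utility. Real anonymization still needs human review.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),  # email addresses
    (re.compile(r"\b[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\b"), "<guid>"),  # GUIDs / tenant IDs
    (re.compile(r"(?i)(password|secret|api[_-]?key|token)\s*[:=]\s*\S+"), r"\1=<redacted>"),  # key/value secrets
]

def redact_for_public_ai(text: str) -> str:
    """Replace obvious identifiers and secret-like values with placeholders."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    sample = "Ask jane.doe@client.com about tenant 123e4567-e89b-12d3-a456-426614174000; api_key=abc12345"
    print(redact_for_public_ai(sample))
    # -> Ask <email> about tenant <guid>; api_key=<redacted>
```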
8. Code and Content Review
- AI-generated deliverables must meet or exceed current review standards.
- Code: Subject to static analysis, human review, and security checks (an illustrative check is sketched after this list).
- Docs/Comms: Checked by a human for accuracy, tone, confidentiality, and compliance; key facts must be verified.
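As one illustration of an automated security check, the sketch below scans staged changes for likely secrets before commit or merge. The patterns and the use of `git diff --cached` are assumptions about how such a check might be wired in; it supplements, and does not replace, static analysis and human review.

```python
#!/usr/bin/env python3
"""Illustrative pre-merge check: fail if staged changes contain likely secrets.

A sketch only -- the patterns and the git invocation are assumptions about how
such a check might be wired in, not an existing Sensei pipeline step.
"""
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|secret|api[_-]?key|token)\s*[:=]\s*['\"]?\w{8,}"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def staged_diff() -> str:
    """Return the staged diff as text."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect newly added lines
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(line)
    if findings:
        print("Possible secrets in staged changes; remove before committing:")
        for line in findings:
            print(" ", line)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```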
9. Logging and Transparency
- Disclose AI use in pull requests and major deliverables.
- Encourage sharing effective prompts internally.
10. Incidents and Reporting
Handle AI-related incidents per existing response processes: contain, notify leadership, document, and remediate as needed.
11. Policy Review
Annual review, or as needed based on tool changes, compliance updates, or significant incidents.
References
Responsible AI and Governance
- Microsoft Responsible AI Standard v2 – General Requirements
https://aka.ms/responsibleaistandard
- Microsoft Responsible AI Principles and Approach
https://www.microsoft.com/en-us/ai/principles-and-approach
- What is Responsible AI? (Azure Machine Learning)
https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai
- NIST AI Risk Management Framework (AI RMF)
https://www.nist.gov/itl/ai-risk-management-framework
Secure Use of Copilots and AI Tools
- Data, Privacy, and Security for Microsoft 365 Copilot
https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy
- Data Privacy and Security for Microsoft 365 Copilot Extensibility
https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/data-privacy-security
- GitHub Copilot Security, Privacy, and Data Controls
https://learn.microsoft.com/en-us/enterprise/copilot
- Power Platform Copilot Data Security and Privacy FAQ
https://learn.microsoft.com/en-us/power-platform/faqs-copilot-data-security-privacy
Data Handling, Security, and Least Privilege
- Enhance security with the principle of least privilege (Microsoft Identity Platform)
https://learn.microsoft.com/en-us/entra/identity-platform/secure-least-privileged-access
- Security planning for LLM-based applications
https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/mlops-in-openai/security/security-plan-llm-application
Code Review, SDLC, and Quality
- Microsoft Security Development Lifecycle (SDL)
https://learn.microsoft.com/en-us/security/sdl
- Azure DevOps – Code quality and security scanning
https://learn.microsoft.com/en-us/azure/devops/repos/git/pull-request?view=azure-devops
Incident Response
- Azure Well-Architected – Incident Management
https://learn.microsoft.com/en-us/azure/well-architected/design-guides/incident-management
- NIST 800-61 Computer Security Incident Handling Guide
https://csrc.nist.gov/pubs/sp/800/61/r2/final