AI Policy

1. Purpose

Defines how Sensei teams may use AI tools to develop Sensei IQ while safeguarding client data, IP, and security.

2. Scope

Applies to all Sensei staff and contractors working on Sensei IQ and related assets. Covers all AI tools used, including GitHub Copilot, Claude, and Microsoft Copilot. Client-specific AI use must follow both this policy and the client’s policy.

3. Approved AI Tools

  • GitHub Copilot: IDE code suggestions, PR reviews, and test generation within Sensei repositories.
  • Microsoft Copilot for Microsoft 365: Summarizing and drafting content in Sensei’s or client tenants (for use in a client tenant, the Sensei Engagement Lead account must be licensed and usage must adhere to client policies).
  • Claude: Ideation and problem-solving, with strict data rules.
  • Additional tools require approval from leadership.

4. Principles

  • Human accountability for AI use and output.
  • AI provides input, not authority.
  • Security and privacy take priority.
  • Transparency around AI involvement is required.

5. Permitted Use

Allowed uses, subject to the data handling rules in Section 7:

  • Generate or refactor code, tests, and deployment scripts.
  • Draft specifications, design documents, communications, and meeting summaries.
  • Suggest solution options and test cases.
  • Any artifact generated through an AI agent must be reviewed and accepted by a human before merging, sharing, or publishing.

6. Prohibited Use

Not allowed:

  • Entering secrets or identifiable client information into public AI tools.
  • Uploading client content outside Sensei or client tenants.
  • Merging unreviewed AI code or bypassing controls.
  • License or IP violations, or replicating competitors’ features.

When unsure whether a use is permitted, consult leadership.

7. Data Handling Rules

  • Prefer Microsoft-hosted copilots for internal and client work.
  • For public AI tools, use only anonymized or synthetic data, and avoid sharing sensitive architecture details.
  • Use GitHub Copilot only in Sensei enterprise environments, and do not include sensitive values in prompts.

8. Code and Content Review

  • AI-generated deliverables must meet or exceed current review standards.
  • Code: Subject to static analysis, human review, and security checks.
  • Docs/Comms: Checked by a human for accuracy, tone, confidentiality, and compliance; key facts must be verified.

9. Logging and Transparency

  • Disclose AI use in pull requests and major deliverables.
  • Encourage sharing effective prompts internally.

10. Incidents and Reporting

Handle AI-related incidents per existing response processes: contain, notify leadership, document, and remediate as needed.

11. Policy Review

Annual review, or as needed based on tool changes, compliance updates, or significant incidents.


References

  • Responsible AI and Governance
  • Secure Use of Copilots and AI Tools
  • Data Handling, Security, and Least Privilege
  • Code Review, SDLC, and Quality
  • Incident Response