# AI Usage Policy

**Owner:** [FILL IN — name and role of the program owner]
**Last updated:** [DATE]
**Reviewed quarterly. Next review:** [DATE]

---

## 1. Why this exists

[Company] supports the responsible use of AI tools in day-to-day work. This policy exists so you can use AI productively without having to ask permission every time, and so we can scale that usage without introducing new risks for our customers, our data, or our team.

If you're not sure whether something is covered, default to asking. The escalation contacts are in section 6.

## 2. Allowed tools

The following tools are approved for use across the company:

- **[Tool name]** — [what it's used for, e.g. "general drafting and synthesis"]
- **[Tool name]** — [what it's used for]
- **[Tool name]** — [what it's used for]

Adding a new tool requires a request to the program owner. The turnaround is one business week.

## 3. Banned use cases

Regardless of which approved tool you're using, the following are prohibited:

- Pasting customer personally identifiable information (PII) into any AI tool unless that tool is explicitly listed as approved for PII (see section 2).
- Pasting confidential financial information, unannounced product details, or material non-public information into any AI tool.
- Using AI-generated content in customer-facing communication without the human review described in section 4.
- Using AI to make hiring, firing, performance review, or compensation decisions without explicit human judgment applied on top.
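As a practical safeguard against the first two items, some teams run text through a quick redaction pass before pasting it into an AI tool. A minimal sketch, assuming a regex-based scrub (the patterns and the `scrub_pii` name are illustrative; real PII detection needs a dedicated tool and catches far more than emails and phone numbers):

```python
import re

# Illustrative patterns only -- these catch common email addresses and
# US-style phone numbers, nothing more. A real deployment should use a
# dedicated PII-detection tool.
EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b")

def scrub_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub_pii("Contact jane.doe@example.com or 555-123-4567."))
```

A scrub like this reduces accidents; it does not make a tool approved for PII. The approved-for-PII list in section 2 still governs.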

## 4. Review before shipping

The following outputs require a human review before they ship:

- **Customer-facing copy.** Email campaigns, public posts, and support responses sent to a named customer.
- **Code shipping to production.** Any AI-generated code that lands in our main branch must be reviewed by a human engineer who can speak to it.
- **Contracts, statements of work, or legal documents.** AI is fine as a drafting aid. The final version is reviewed by [FILL IN].
- **Decisions with material business impact.** Hiring, firing, pricing changes, contract decisions. AI may inform these; humans decide them.

The review bar is "would a competent human have caught this?" This is not a formal process; it's a sanity check by someone with context.

## 5. Data classification

We treat data in three buckets:

- **Public.** Already published. Free to use in any approved tool.
- **Internal.** Not published, but minimal harm if leaked. Free to use in approved tools that are listed for internal data.
- **Confidential.** Customer data, financials, hiring information. Use only in tools listed as approved for confidential data. When in doubt, treat as confidential.

If you're unsure how to classify a specific piece of information, ask.

## 6. Escalation

For any question this policy doesn't cover:

- **Day-to-day questions:** [Name, email, Slack handle]
- **Security questions:** [Name, email, Slack handle]
- **Legal questions:** [Name, email, Slack handle]

A "this isn't covered" message is a feature, not a failure. The policy gets updated when those messages reveal gaps.

## 7. Learning expectations

We expect everyone to maintain a working understanding of AI tools relevant to their role. The current curriculum lives at [FILL IN — internal URL]. New starters complete the foundational track within their first 90 days; team-specific deep-dives are expected within the first 180 days.

If you find a gap in the curriculum, tell the program owner. The curriculum gets updated quarterly.

---

## Changelog

- **[DATE]:** Initial publication.

---

*This template is published by [174 Solutions](https://174solutions.com) under the assumption that you'll fork and customize it for your organization. It is not legal advice. Take it to your own legal and security teams for review before publication.*
