AI Acceptable Use Policy Template for Small Business
By Tom Hermstad · HD Tech

Quick Answers: AI Acceptable Use Policies
What is an AI acceptable use policy?
An AI acceptable use policy (AI AUP) is a written document that defines how employees may and may not use artificial intelligence tools in their work. It specifies approved AI tools, prohibited data types, required review steps before using AI output, and consequences for violations. An AI AUP protects businesses from data leakage, liability, and compliance failures caused by unsupervised employee AI use.
Does my small business need an AI acceptable use policy?
Yes — if any employee uses any AI tool (ChatGPT, Copilot, Gemini, Grammarly, etc.) for work purposes. Without a written policy, you have no standard for what data can be shared with AI tools, no recourse if an employee leaks client information via AI, and no documentation if a client or regulator asks how you handle their data. A one-page policy written today prevents significant liability tomorrow.
What should an AI policy prohibit?
At minimum, an AI policy should prohibit entering the following into any AI tool: client names combined with financial or personal data, employee personnel records, trade secrets or proprietary processes, attorney-client privileged communications, medical or health information, passwords or authentication credentials, and any data covered by an NDA. These categories cover the most common sources of breach liability from AI misuse.
Can I use a free AI policy template?
Yes — free templates are useful starting points, but they must be customized for your industry, the AI tools your team actually uses, and your specific compliance obligations (HIPAA, CMMC, state privacy laws). Even a generic, uncustomized policy is better than nothing — but it should be tailored before adoption, then reviewed annually and after any significant AI tool adoption.
Why This Cannot Wait Until "Later"
By the time you finish reading this, at least one of your employees has probably used an AI tool for something work-related today. Grammarly checks their email. ChatGPT rewrites their proposal. Google Gemini summarizes a client document. Most of this happens without IT knowing and without any policy governing it.
This is not hypothetical risk management. In 2023, Samsung engineers pasted sensitive semiconductor source code and internal data into ChatGPT; because submitted data can be retained on external servers, the incident prompted Samsung to restrict generative AI tools company-wide. Your business likely does not have Samsung's legal resources to respond to a similar incident. The policy prevents the incident.
The AI Acceptable Use Policy Template
Below is a working template you can adapt. Replace bracketed items with your specifics.
[COMPANY NAME] — AI Acceptable Use Policy
Effective Date: [DATE] | Last Reviewed: [DATE]
1. Purpose
This policy governs the use of artificial intelligence tools by all [Company Name] employees, contractors, and vendors to protect confidential information, ensure compliance with applicable laws, and maintain the trust of our clients.
2. Approved AI Tools
The following AI tools are approved for work use: [List specific tools — e.g., Microsoft Copilot for M365, Grammarly Business]. Using any AI tool not on this list for work purposes requires prior written approval from [IT/Management].
3. Prohibited Data — Never Enter Into Any AI Tool
The following data types must never be entered into any AI system, including approved tools, without explicit written authorization:
• Client names combined with financial, health, or legal information
• Employee personnel records, salaries, or performance data
• Trade secrets, proprietary processes, or competitive strategy
• Passwords, authentication credentials, or API keys
• Medical records or health information (HIPAA-covered data)
• Information covered by a non-disclosure agreement
• Social Security numbers or government ID numbers
4. Required Review Before Use
AI-generated content must be reviewed for accuracy by a qualified employee before use in any client deliverable, filing, communication, or published material. Employees are responsible for the accuracy of AI-assisted work product.
5. New Tool Approval Process
To request approval for a new AI tool, submit a request to [IT/Management] with: the tool name, intended use, data types that would be entered, and a link to the vendor's privacy policy and data processing agreement. Approval or denial will be communicated within [5 business days].
6. Violations
Violations of this policy may result in disciplinary action up to and including termination, and may be reported to relevant regulatory bodies where required by law.
7. Annual Review
This policy will be reviewed and updated at least annually, and whenever a significant new AI tool is adopted company-wide.
I have read, understand, and agree to comply with this AI Acceptable Use Policy.
Employee Signature: _______________ Date: _______________
How to Implement This in One Week
Day 1: Customize the template for your business — fill in approved tools, your specific prohibited data categories, and your approval process.
Day 2: Have your attorney or compliance contact review it (30 minutes is enough for most small businesses).
Day 3: Send it to all employees with a signature page and a 2-week deadline.
Day 7: Follow up with anyone who has not signed. Archive signed copies.
What Regulators Are Looking For
If your business is subject to HIPAA, your AI policy needs to explicitly cover PHI. Under the HIPAA Security Rule, covered entities must maintain written policies and procedures governing how technology handles protected health information — and AI tools that touch patient data fall within that requirement. If you are a defense contractor working toward CMMC, NIST SP 800-171 requires documented policies and access controls for any system that handles CUI — including AI tools used to process CUI-related information.
Tom Hermstad, President of HD Tech: "The businesses that have a policy already written are the ones who call us calmly. The ones who call in a panic are the ones who found out the hard way that their bookkeeper had been entering client financials into a free AI tool for six months. One page. That is all it takes to be ahead of 90% of your competitors on this."
Enforcing the Policy Without Becoming Big Brother
The goal is not surveillance — it is literacy. Most employees using AI tools improperly do not know they are doing anything wrong. Training beats enforcement. A 20-minute annual AI safety session covering "here is what you can use, here is what you can never type in" is more effective than monitoring software and creates a culture of accountability rather than fear.
Frequently Asked Questions
How often should I update my AI acceptable use policy?
Minimum annually — AI tools and their data practices change fast. Additionally, update whenever you adopt a new AI tool company-wide, whenever a significant new regulation applies to your industry, or whenever an employee raises a situation the current policy does not cover. Treat it like your employee handbook: a living document, not a one-and-done.
Do cyber insurance providers require an AI policy?
Increasingly, yes — or they will soon. Some cyber insurers have begun adding AI-related questions to their renewal applications. Having a written AI AUP demonstrates risk management and may affect your premium. Check your renewal questionnaire for AI-related questions, and have your policy ready to reference.
What should I do if an employee violates the AI policy?
Treat it like any other policy violation — review the incident, document it, determine the severity (was client data actually exposed? was a third party harmed?), take appropriate action, and update the policy or training if the violation reveals a gap. For data-exposure incidents, you may have breach notification obligations depending on your state and industry.
Can our HR team use AI tools?
With caution. AI can assist with job posting drafts, onboarding documentation, and policy writing — but employee personnel data, performance reviews, compensation information, and medical records must never enter a general AI tool. Use HR-specific platforms with proper data processing agreements in place (vendors such as ADP, Rippling, and Gusto offer AI features within contractual data protections) rather than general-purpose chatbots.
Is a one-page AI policy enough?
For most small businesses without complex compliance requirements, yes — a clear one-page policy with a signature line is sufficient. Add complexity only where your industry demands it (HIPAA, CMMC, financial services regulations). The best policy is one your team will actually read and remember, not one that fills a binder and sits on a shelf.
Get Your AI Policy Reviewed — Free
HD Tech can review your current AI tool usage, identify data risks specific to your business, and help you finalize an acceptable use policy that actually fits how your team works. Takes less than an hour. No obligation.
Schedule Your Free IT and AI Risk Review or call 877-540-1684.
Serving Small Businesses Across Orange County and Los Angeles
HD Tech provides managed IT and cybersecurity services — including AI policy development — to small businesses in Irvine, Seal Beach, Anaheim, Santa Ana, Newport Beach, Huntington Beach, Long Beach, and throughout Southern California.

Tom Hermstad
President & CMO, HD Tech
Tom Hermstad has led HD Tech since 1995, building one of Southern California's most trusted managed IT and cybersecurity firms. He specializes in helping Orange County businesses eliminate IT headaches and stay ahead of evolving cyber threats — in plain English.
