What Anthropic's Accidental Leak Teaches You About Getting Better AI Results
Recently, Anthropic — the company behind Claude AI — had what they're calling a human error. Someone on their team accidentally published the internal system prompt for Claude Code, their AI coding assistant, to a public website. Reports describe the document as containing 512,000 lines of internal instructions. Security researchers and journalists started pulling it apart immediately.
The headlines focused on the competitive intelligence angle: what proprietary details had been exposed, what it meant for Anthropic's business. That's a reasonable story. But if you're a business owner using AI tools, there's a more useful story buried in this incident — one that explains exactly why the AI you're using keeps producing generic, off-brand, inconsistent outputs, and what to do about it.
AI Doesn't Just Know Things — It Operates on Instructions
The most common misconception about AI tools is that they work like a very smart search engine: you ask a question, the AI retrieves an answer from somewhere in its training data. That's not how it works.
Modern AI assistants run on two things simultaneously. First, a trained model — everything the AI learned from its training data. Second, a system prompt — instructions written by the company deploying the AI that tell it how to behave right now, in this specific product, for this specific use case.
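If you've never seen those two layers side by side, here's roughly what they look like in code. A minimal sketch, assuming Anthropic's Python SDK; the company name, prompt text, and question are invented for illustration, and a real production system prompt runs far longer than one sentence.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # layer one: the trained model
        max_tokens=500,
        # Layer two: the system prompt. Whoever deploys the AI writes this.
        system="You are the assistant for Acme Commercial Cleaning. "
               "Be direct. Never quote prices; route pricing questions to a call.",
        messages=[
            # The user's question is answered through the lens of the system prompt
            {"role": "user", "content": "Do you handle weekly office cleaning?"},
        ],
    )
    print(response.content[0].text)

Swap the system string and the same model answers the same question differently. That mechanism is the whole story here.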
Anthropic's leaked document is their system prompt for Claude Code. It tells Claude what its personality should be, what it will and won't do, how to handle ambiguous situations, which safety rules to enforce, how to respond to different types of requests, what to prioritize, and what to decline. 512,000 lines of that. For one product.
Anthropic didn't write hundreds of thousands of lines of instructions because they had time to burn. They wrote them because without precise, detailed instructions, AI behavior is inconsistent. Without instructions, AI defaults to average — doing approximately what most people in most situations would probably want. Which is exactly good enough for a demo and exactly not good enough for a real business workflow.
The lesson from this leak isn't about corporate security. It's this: a company that builds world-class AI spent an enormous amount of effort on written instructions to make their AI behave correctly and consistently. Instructions are how AI gets calibrated. Instructions are the difference between AI that produces generic output and AI that's actually useful.
Why This Directly Affects Your Business
Every AI tool you're using right now — ChatGPT, Claude, your CRM's AI assistant, the AI feature in your email tool — is operating on a system prompt written by whoever built that product. Those instructions were written for a general audience. Millions of different users, thousands of different industries, a huge range of contexts and use cases.
They were not written for your business.
When you type a question into ChatGPT, the AI isn't just responding to your question. It's responding to your question through the lens of OpenAI's instructions about how ChatGPT should behave generally. Those instructions optimize for being broadly helpful — not specifically helpful for a 12-person HVAC company, or a commercial cleaning business, or a consulting firm with a specific client base and service standards.
That's why the customer email ChatGPT drafts sounds like a corporate template instead of your team. That's why the response to a prospect's question doesn't mention your actual pricing, your guarantee, or the specific thing that makes your service different. The AI doesn't know those things. It fills in the gaps with whatever the average business does, because average is the only safe default when you have no information about a specific context.
This isn't a flaw in the technology. It's a feature of how general-purpose AI products are deployed. AI tools are built for scale, which means they're built for generic. The path to getting non-generic output is giving the AI the specific context it needs to calibrate to your situation.
Most business owners know the outputs feel generic. Most don't know why — and more importantly, most don't know it's fixable without switching tools or buying anything new.
What Changes When You Give AI Your Own Instructions
Most AI tools give you a way to add custom instructions on top of their defaults. ChatGPT has Custom Instructions. Claude has a similar feature. Any AI workflow you build can have a system prompt you control. This is the lever most people ignore.
Here's a concrete example of what changes when you use it.
A 12-person commercial cleaning company was using ChatGPT to help draft responses to new client inquiries. Without custom instructions, the responses were technically fine — professional, polite, covered the basics. They also sounded identical to every other commercial cleaning company in the country. Nothing in the emails gave a prospect a reason to pick this company over a competitor.
We spent two hours writing a business context document: what the company does (commercial buildings over 10,000 square feet — no residential), who their clients are, what distinguishes them from competitors (a specific service guarantee with real, named terms), common questions clients ask and how to handle them, the tone the team uses in client communication (direct and confident, not formal), and explicit rules about what not to say (no vague quality language without specifics to back it up).
That document went into their ChatGPT Custom Instructions. The change in output was immediate. Responses referenced their actual guarantee by name. The tone matched how the owner actually talks to clients. Follow-up emails included specific details instead of generic filler like "we'd love the opportunity to work with you."
The underlying AI didn't change. The model didn't update. The instructions changed. That's all it took.
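In API terms, here's how small that change is. A sketch, again assuming Anthropic's Python SDK, with the inquiry and file name invented for illustration; inside ChatGPT itself, the equivalent step is pasting the document into the Custom Instructions field rather than writing any code.

    import anthropic

    client = anthropic.Anthropic()
    inquiry = "Hi, we're looking for a cleaning service for our 40,000 sq ft office."

    def draft_reply(system_prompt: str) -> str:
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=400,
            system=system_prompt,  # the only variable that differs between the two calls
            messages=[{"role": "user", "content": f"Draft a reply to this inquiry:\n{inquiry}"}],
        )
        return response.content[0].text

    generic = draft_reply("You are a helpful assistant.")
    with open("business_context.txt") as f:
        on_brand = draft_reply(f.read())  # the two-hour context document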
The output quality of any AI tool is a function of two things: the model's capability and the instructions it's operating on. Almost everyone focuses entirely on model capability — is this tool smart enough? — and spends zero time on instructions. That's where most of the variance in output quality actually lives.
How to Write Instructions That Actually Work
You don't need 512,000 lines. A focused 400- to 600-word context document covers most of what your AI tools need to start producing useful, consistent, on-brand outputs. Here's what to include; a skeleton you can adapt follows the list.
Who you are and what you do, precisely. Not "we're a marketing agency." Something like: "We're a 7-person marketing agency that works exclusively with B2B software companies between $2M and $20M in revenue. We focus on content strategy, SEO, and LinkedIn marketing. We don't do paid advertising." The more specific the description, the better the AI's defaults when filling in gaps you haven't explicitly covered.
Your clients. Who they are, what they care about, their typical questions, the language they use. If your clients are plant managers in manufacturing facilities, the AI should know that. If they're dental practice owners, it should know that too. The same information reads differently framed for a plant manager than for a dental practice owner, and the AI will make that adjustment automatically if you tell it who you're talking to.
Your tone in concrete terms. "Professional but approachable" means nothing to an AI. Try instead: "Direct and confident. We don't hedge or over-qualify. We write like a knowledgeable person talking to another knowledgeable person — not like a legal document. We don't use filler phrases like 'great question' or 'absolutely.'" Better yet, paste in two or three examples of messages that represent how your team actually communicates. Concrete examples beat descriptions every time.
Explicit rules. The AI follows explicit instructions more reliably than inferred ones. "Never mention competitors by name" is clearer than any amount of general framing. "Always include a specific next step at the end of every email draft" removes ambiguity. "If a prospect mentions budget constraints, don't offer a discount — ask what their timeline is" gives the AI actual business logic to operate on instead of guessing.
Key facts the AI needs to get right. Service guarantees and their specific terms. Response time commitments. Service areas or minimum project sizes. Anything that, if the AI gets it wrong in a client communication, creates a real problem. If it's not in the instructions, the AI will either guess or omit it — and wrong guesses in client-facing output cost you credibility.
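Pulled together, the skeleton might look like this. Every specific below is invented; replace each line with your own facts and treat the bracketed items as reminders of what to fill in.

    WHO WE ARE
    7-person marketing agency. We work exclusively with B2B software companies
    between $2M and $20M in revenue: content strategy, SEO, LinkedIn marketing.
    We do not do paid advertising.

    OUR CLIENTS
    Heads of marketing and founders at B2B software companies. They care about
    pipeline, not vanity metrics. Common questions: pricing, timeline, how we
    measure results.

    TONE
    Direct and confident. No hedging, no filler phrases like "great question."
    [Paste two or three real messages from your team here as examples.]

    RULES
    - Never mention competitors by name.
    - End every email draft with a specific next step.
    - If a prospect mentions budget constraints, ask about their timeline;
      never offer a discount.

    KEY FACTS
    - Guarantee: [exact name and terms]
    - Response time commitment: [e.g., replies within one business day]
    - Minimum engagement: [e.g., buildings over 10,000 square feet]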
The Mistakes That Kill Results
Knowing what to include gets you halfway there. Knowing what goes wrong is the other half.
Instructions that are too abstract. "Be helpful and professional" adds nothing the AI isn't already trying to do. Instructions work in direct proportion to how specific and concrete they are. Every vague instruction is an opening for the AI to default to generic.
Only describing what you don't want. "Don't sound corporate" without explaining what you do want leaves the AI with a constraint but no direction. Pair every negative rule with a positive one: don't be formal, be direct; don't give vague answers, give specific, actionable guidance; don't hedge, be confident. The negative tells the AI what to avoid; the positive tells it where to go instead.
Setting it once and never revisiting it. Your business changes. New services, new client types, updated pricing, evolved standards. Instructions should be treated as a living document. A 30-minute review every quarter — does this still reflect how the business actually operates? — prevents the slow drift where your AI is operating on information that's six months out of date.
Using the same instructions for everything. The context that works well for drafting client emails won't necessarily work for generating internal reports or brainstorming marketing copy. Your top two or three AI use cases deserve their own tailored instructions. A bit of extra upfront time pays back immediately in output quality and consistency.
Why This Matters Even More If You're Building AI Workflows
If you're using AI tools manually, bad instructions are annoying — you get generic output, you edit it, you move on. The cost is friction and extra editing time.
If you're running automated AI workflows — systems that handle lead responses, qualify prospects, send follow-ups, process intake forms, generate client-facing reports — bad instructions become an operational problem. There's no one reviewing every output before it goes out. The AI is acting on behalf of your business, and what it produces goes out with your name on it at scale.
Every AI workflow I build for clients starts with a detailed business context document. Not as a formality. Because without it, automated outputs will be inconsistent, off-brand, and wrong often enough to damage client relationships. The model quality matters. The workflow architecture matters. But the instructions determine whether each individual output is actually useful or a liability.
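For concreteness, here's the shape that foundation takes in code. A sketch, assuming Anthropic's Python SDK; the file name and task rules are invented, and a real workflow would add error handling, logging, and review gates. It also reflects the earlier point about tailoring instructions per use case: one shared context document, with task-specific rules layered on top.

    import pathlib
    import anthropic

    client = anthropic.Anthropic()

    # The living document: versioned with the workflow, reviewed quarterly
    BUSINESS_CONTEXT = pathlib.Path("business_context.txt").read_text()

    # Tailored rules per use case, layered on the shared context
    TASK_RULES = {
        "lead_reply": "Draft a reply to the inquiry. End with a specific next step.",
        "intake_summary": "Summarize the intake form in five short bullet points.",
    }

    def run_task(task: str, user_input: str) -> str:
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=600,
            # Every automated output passes through the same instructions
            system=f"{BUSINESS_CONTEXT}\n\n{TASK_RULES[task]}",
            messages=[{"role": "user", "content": user_input}],
        )
        return response.content[0].text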
The same principle Anthropic applied at the scale of a billion-dollar AI product — detailed, specific, explicit written instructions to make behavior consistent and reliable — applies at the scale of your 8-person service business. The tool is different. The principle is identical.
Where to Start This Week
Set aside 90 minutes this week. Open a blank document and write your business context using the framework above: who you are precisely, who your clients are, your tone in concrete terms, explicit rules, and the key facts the AI needs to get right. Aim for 400 to 600 words — long enough to cover the essentials, short enough that you'll actually finish it.
Load it into your primary AI tool's custom instructions. Then run five of the tasks you normally use AI for and compare the outputs to what you were getting before. The difference will be obvious.
If you want a second set of eyes on your AI instructions, or you're thinking about building automated workflows and want to get the foundation right from the start, that's exactly the work I do.
Book a free 60-minute call here. We'll look at what you're currently doing with AI, find where instruction gaps are costing you results, and map out the fastest path to outputs that actually sound like your business.