Prompt Engineering for Marketing: How We Get Consistent Results From the Claude API
The difference between useful AI output and useless AI output is almost always the prompt. Not the model. Not the temperature setting. The prompt.
This is both encouraging and frustrating. Encouraging because it means you can improve your results significantly without changing any of the underlying tools. Frustrating because most people figure it out by trial and error over months when the framework can be taught in an afternoon.
Here is how we structure prompts for marketing work and why each element matters.
The Framework: 5 Components
Every prompt we write for production use has five components. Remove any one of them and the output quality drops.
Role. Tell the model who it is. Not vaguely ("you are a marketing expert") but specifically ("you are a direct response copywriter who specialises in Meta Ads for Australian hospitality businesses, with ten years of experience writing for conversion-focused campaigns at the awareness and consideration stages of a funnel"). The role primes the model's output style, vocabulary, and assumptions. A more specific role produces more specific output.
Context. Tell the model what it needs to know. This includes the brand, the audience, the offer, the campaign objective, the platform, and any relevant background that would change how a human expert approached the task. Context is where most prompts are too thin. People give the model the task without giving it the information needed to do the task well.
Format. Tell the model exactly how to structure the output. If you want three ad variations, each with a headline, primary text, and a call to action clearly separated, specify that. If you want JSON output with specific field names, include the schema. If you want a numbered list rather than paragraphs, say so. Models are excellent at format compliance when you specify the format clearly and less reliable when you leave it implicit.
Constraints. Tell the model what not to do. This is the most commonly skipped component. "Do not use exclamation marks. Do not use the word 'unforgettable'. Keep primary text under 125 characters. Do not make claims about health outcomes." Constraints dramatically narrow the output space toward what is actually usable for your context.
Examples. Show the model one or two examples of the output quality and style you are targeting. Not to have it copy them, but to calibrate it. If you tell the model the brand voice is "direct, warm, and slightly irreverent" and then show it an example that demonstrates that, it has a concrete reference rather than an abstract instruction. Examples are where the output jumps from adequate to good.
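Assembled, the five components form one structured block. Here is a minimal sketch in Python of how they might come together; the brand details, voice notes, and example copy are invented placeholders, not a real client configuration.

```python
# A minimal sketch of the five components assembled into one system prompt.
# Every brand detail below is an invented placeholder for illustration.

ROLE = (
    "You are a direct response copywriter who specialises in Meta Ads "
    "for Australian hospitality businesses."
)

CONTEXT = (
    "Brand: a neighbourhood wine bar in Adelaide. "
    "Audience: locals aged 28-45 who dine out weekly. "
    "Objective: drive bookings for a midweek offer."
)

FORMAT = (
    "Return 3 ad variations. Each variation has a headline (max 40 "
    "characters), primary text (max 125 characters), and a call to action."
)

CONSTRAINTS = (
    "Do not use exclamation marks. Do not use the word 'unforgettable'. "
    "Do not make claims about health outcomes."
)

EXAMPLES = (
    "Example of the target voice: 'Tuesday nights are for the bottle "
    "you've been saving. Half-price cellar list, 5pm till close.'"
)

system_prompt = "\n\n".join([ROLE, CONTEXT, FORMAT, CONSTRAINTS, EXAMPLES])
```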
System Prompts vs User Prompts
When you are working with the Claude API, you have two prompt surfaces: the system prompt and the user prompt. They are not interchangeable.
The system prompt is for persistent context: who the model is, what the brand voice is, what the constraints are, what formats are expected, and what examples it should calibrate against. This is the information that applies to every call in this context.
The user prompt is for the specific task: what you need right now, with any task-specific variables. A user prompt in a campaign copy workflow might simply be: "Write a cold audience ad for the new weekend brunch menu. The hero dish is the Korean fried chicken benedict at $22. Target: food-forward Adelaide locals aged 28-40."
Everything else (the role, the brand context, the format, the constraints, the examples) lives in the system prompt and does not need to be repeated every time.
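In code, that separation maps directly onto two parameters of the Messages API. A minimal sketch using the Anthropic Python SDK, reusing the system_prompt assembled above; the model name is illustrative, so substitute whichever Claude model you target.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The system prompt carries everything persistent; the user prompt carries
# only the task-specific variables.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {
            "role": "user",
            "content": (
                "Write a cold audience ad for the new weekend brunch menu. "
                "The hero dish is the Korean fried chicken benedict at $22. "
                "Target: food-forward Adelaide locals aged 28-40."
            ),
        }
    ],
)

print(response.content[0].text)
```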
This structure produces consistent output across many calls. If every call has to re-establish everything from scratch in the user prompt, the outputs vary more than you want and the prompts become long and unwieldy.
For campaigns where we are generating a high volume of copy variations, the system prompt might be 800 words and the user prompt might be 40 words. That ratio is intentional.
The Most Common Mistakes
Four mistakes account for most of the cases where marketing prompts produce frustrating output.
Mistake 1: asking for the output before establishing context. "Write me a Meta ad for this restaurant" tells the model almost nothing. What audience? What stage of the funnel? What is the specific offer? What is the brand voice? What platform placement is this for? Without context, the model writes for the most generic version of your category.
Mistake 2: not specifying the format. "Give me some ad copy options" produces an output format that may or may not be useful. "Give me 4 ad copy variations, each with a 40-character headline and 125-character primary text, separated by a horizontal rule, with no other commentary" produces something you can directly paste into a brief.
Mistake 3: no negative constraints. The model will use language that is common in your category unless you tell it not to. "Unforgettable dining experience", "crafted with care", "your story starts here." These are defaults. Naming what not to use forces the model to find less generic language.
Mistake 4: using a single prompt for everything. If you use one general prompt for all copy work, you are not getting the benefit of specificity. A prompt optimised for awareness-stage Facebook ads is different from one optimised for retargeting copy, which is different again from one optimised for Google responsive search ad headlines. Separate prompts for separate tasks.
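The structural fix for that last mistake is to keep one system prompt per task type and select it explicitly, rather than stretching one prompt across everything. A sketch, with placeholder task names and prompt bodies:

```python
# Illustrative only: one system prompt per task type. The task names and
# prompt bodies here are placeholders, not production prompts.
SYSTEM_PROMPTS = {
    "meta_awareness": "You are a copywriter optimising awareness-stage Meta ads...",
    "meta_retargeting": "You are a copywriter writing retargeting copy for warm audiences...",
    "google_rsa": "You are a copywriter writing 30-character Google RSA headlines...",
}

def system_prompt_for(task: str) -> str:
    # Fail loudly on an unknown task rather than silently falling back to a
    # generic prompt, which is exactly the mistake this structure avoids.
    return SYSTEM_PROMPTS[task]
```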
Real Examples From Our Workflow
For the An Nam Quan campaign, our system prompt included a 300-word brand voice description, a note about the target customer (lapsed first-time visitors and Adelaide food enthusiasts), the constraint that copy should never reference "authentic" because of its overuse in the Vietnamese food category, a format specification for three variations, and two examples of previous copy that performed well.
The user prompt for a new batch: "New promotion: the combination set for two at $68, available Tuesday to Thursday. Audience: people who visited once and have not returned in 90-plus days."
The output from that combination is copy that is specific to the offer, tonally consistent with the brand, and different from what a generic prompt would produce. That campaign ran with a $0.12 cost per click. The prompt structure did not create that result on its own, but it is part of why the creative was strong enough to earn it.
For the TakeoffIQ extraction pipeline, system prompt design was even more critical. The model had to adopt the role of an experienced quantity surveyor, output structured JSON with specific field names, report a confidence level for every item, and flag ambiguous elements for review rather than guessing. Each of those behaviours required explicit specification. None of them emerged from a vague instruction to "analyse this blueprint."
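For illustration, here is the shape of an output specification that makes those behaviours explicit. The field names below are illustrative, not the actual TakeoffIQ schema.

```python
# A hedged sketch of the kind of format block that sits in the system prompt
# for a structured extraction task. Field names are placeholders.
EXTRACTION_FORMAT = """
Return a JSON array. Each element must have exactly these fields:
  "item"         - string, the building element identified
  "quantity"     - number
  "unit"         - string, e.g. "m2" or "lm"
  "confidence"   - number between 0 and 1
  "needs_review" - boolean, true if the element is ambiguous

If an element is ambiguous, set needs_review to true and explain why in a
"note" field. Never guess a quantity for an ambiguous element.
"""
```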
How to Get Started
If you are using Claude or any other model for marketing work and the output is inconsistent or generic, start with the system prompt.
Write the role in specific, expert terms. Add the brand context: voice, audience, constraints. Add format requirements in precise terms. Add two examples. Test it against a specific user prompt. Iterate on the system prompt, not the user prompt, until the output is consistently at the quality level you need.
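Iteration is easiest when the user prompt is held constant and only the system prompt changes between runs. A minimal comparison loop, reusing the client and system_prompt from the earlier sketches; the v2 revision text is a placeholder.

```python
# Hold the user prompt fixed and compare system prompt revisions side by side.
test_user_prompt = "Write a cold audience ad for the weekend brunch menu."

revisions = [
    ("v1", system_prompt),
    ("v2", system_prompt + "\n\nNever open with a question."),  # placeholder revision
]

for name, candidate in revisions:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=512,
        system=candidate,
        messages=[{"role": "user", "content": test_user_prompt}],
    )
    print(f"--- {name} ---\n{response.content[0].text}\n")
```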
The investment in a well-designed system prompt pays back every time it is used. A prompt you spend two hours building and refining might run hundreds of times across a campaign. The quality improvement compounds.
If you want to see what this looks like applied to real client campaigns rather than examples, our approach to AI-assisted marketing work is built on exactly this kind of structured prompt design.