AI·17 March 2025·8 min read

How We Automate Ad Copy Generation Using Claude API

The full pipeline: brand voice document, product data, audience context, Claude API call, structured output, human review, and staged deployment. The prompt structure and what we do with the output.

By Jay

Ad copy generation is one of the highest-value applications of AI in a marketing agency. Not because AI writes better copy than a skilled human, but because volume and variation are the things that actually move campaign performance, and AI can produce variations at a speed that humans cannot match. Here is the exact pipeline we use, from brand input to live campaign.

Why Ad Copy Volume Matters

Most campaigns we inherit are running 1 or 2 variations of each ad. The account manager wrote the copy at campaign launch, got the client to approve it, and it has been running ever since. That is a testing environment with effectively no data.

Google and Meta both reward copy testing. Meta's algorithm distributes budget toward the best-performing creative dynamically. Google's responsive search ads use performance data to determine which headline combinations to serve more often. But these systems need variation to learn from. Two headlines give the algorithm almost no signal; 10 to 15 give it something to work with.

The barrier to testing more copy is not desire. It is time. Writing 12 distinct headline variations for a restaurant campaign, then writing body copy variations that work with each, then doing the same for every ad group, is genuinely time-consuming when done well. AI changes that constraint.

The Full Pipeline

The pipeline has 6 stages. Each one matters, and skipping any of them degrades the output.

Stage 1: Brand voice document. Before any generation, we build a brand voice document for the client. This is a 400 to 600 word document covering: what the business does, its tone (formal vs conversational vs playful), its key value propositions, any language it avoids, and 3 to 5 example sentences that represent its voice. This document goes into the system prompt for every generation call. Without it, the output is generic. With it, the output stays on-brand across variations.

Stage 2: Product and service data. We compile a structured data block for whatever the campaign is promoting. For a restaurant, this includes: specific dishes, price points, booking link, opening hours, location, any seasonal offerings, and any specific offers running. For an e-commerce campaign, it includes product names, specific features, price, shipping terms, and any guarantees. Real specifics produce specific copy. General inputs produce general outputs.

Stage 3: Audience context. We define the target audience for this specific ad group. Not "people who like food," but the specific audience segment: "Couples aged 30 to 55 in the Unley and Mitcham area looking for a special-occasion dinner restaurant." This goes into the user prompt alongside the brief.

Stage 4: Claude API call with structured output. The generation prompt asks Claude to produce copy in a structured JSON format that maps directly to the platform's ad format. For Google responsive search ads, the schema includes 15 headline fields (max 30 characters each) and 4 description fields (max 90 characters each). For Meta, it includes the primary text, headline, and description. The structured output format means the result can be parsed programmatically without manual extraction.
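To make the structured-output stage concrete, here is a minimal sketch of how such a generation call might be assembled and its response parsed. The function names, the model identifier, and the payload shape are illustrative assumptions, not our production code; the payload mirrors the system/user split described above, and the parser fails loudly if the JSON is malformed.

```python
import json

def build_request(brand_voice: str, brief: str) -> dict:
    # The brand voice document (stage 1) rides in the system prompt for
    # every call; the brief (stages 2-3) is the user message.
    system = (
        brand_voice
        + "\n\nAlways respond with a single valid JSON object matching the "
        "schema in the brief. No prose outside the JSON."
    )
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model name
        "max_tokens": 2000,
        "system": system,
        "messages": [{"role": "user", "content": brief}],
    }

def parse_rsa_output(text: str) -> dict:
    # Parse the model's text into the ad-copy structure, checking that the
    # two arrays the schema requires are actually present.
    data = json.loads(text)
    if not isinstance(data.get("headlines"), list):
        raise ValueError("missing headlines array")
    if not isinstance(data.get("descriptions"), list):
        raise ValueError("missing descriptions array")
    return data
```

The request dict maps directly onto the Anthropic Messages API shape, which keeps the actual SDK call a one-liner and makes the payload easy to unit-test without hitting the network.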

Stage 5: Human review and selection. Every piece of AI-generated copy goes through a human review before it enters a campaign. The reviewer is checking for 4 things: accuracy (does every claim reflect reality?), character limit compliance (does the copy fit the platform constraints?), brand voice consistency (does it sound like the client?), and competitive differentiation (does it say something the competitor ads do not?). Anything that fails any of these criteria gets cut or rewritten.

Stage 6: Staged live deployment. Approved copy goes into campaigns in batches. We do not upload all 12 headline variations at once. We add the AI-generated variations alongside the existing human-written copy, set appropriate budget for the ad group, and let the platform's algorithm begin distribution. After 2 to 3 weeks, we review performance and remove the lowest-performing variations.

The Prompt Structure

The system prompt carries the brand voice document and the global constraints. The user prompt is structured as a brief that changes for each generation. A simplified version looks like this:

Write Google responsive search ad copy for the following campaign.

Target audience: Couples aged 30-55 looking for a special occasion restaurant in Adelaide's inner south.

Campaign objective: Reservations via OpenTable

Current promotion: None

Product data:
- Restaurant: Greek Street Unley
- Cuisine: Modern Greek
- Price point: $$ to $$$
- Booking: OpenTable link
- Key differentiators: Authentic recipes, extensive Greek wine list, private dining room for groups

Output format: JSON with fields "headlines" (array of 15, max 30 characters each) and "descriptions" (array of 4, max 90 characters each).

Do not include exclamation marks in more than 4 headlines. Include at least 3 headlines with a direct call to action.

The character limit instructions are explicit in the prompt because Claude, like any LLM, will not reliably self-enforce platform-specific character constraints without being reminded. We also run a post-generation validation step in code that checks every field against the limit before displaying results for human review.
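The validation step is simple enough to sketch. This is an illustrative version, not our exact code; the field names and limits mirror the Google RSA schema used in the prompt above.

```python
# Platform character limits per output field (Google RSA).
LIMITS = {"headlines": 30, "descriptions": 90}

def validate_copy(data: dict) -> list[str]:
    """Return human-readable violations; an empty list means compliant."""
    violations = []
    for field, limit in LIMITS.items():
        for i, text in enumerate(data.get(field, [])):
            if len(text) > limit:
                violations.append(
                    f"{field}[{i}] is {len(text)} chars (limit {limit}): {text!r}"
                )
    return violations
```

Anything flagged here never reaches the human reviewer as-is; it is either dropped or sent back for a shorter-constrained regeneration pass.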

Handling Google Ads Character Limits

The 30-character headline limit in Google responsive search ads is genuinely restrictive. It rules out a large percentage of natural language phrases. "Book Your Table at Greek Street Tonight" is 39 characters. Too long.

Our approach is to generate more than we need and filter. We ask for 20 headline candidates knowing that 5 or 6 will be over the limit and another 2 or 3 will be weak after review. The target is to end up with 12 to 15 strong, compliant headlines from a generation of 20.

We also use a length-specific generation pass for headlines when the first pass produces too many over-limit results. The prompt in that pass explicitly constrains to short phrases: "Generate 10 Google Ads headlines for this campaign. Each headline must be under 25 characters. Prioritise short, punchy phrases over complete sentences."

This two-pass approach consistently produces more usable output than trying to get the model to hit 30 characters exactly in a single pass.
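The filter-then-retry logic above reduces to a few lines. The function names are ours; the target of 12 usable headlines is the figure from the article.

```python
def filter_headlines(candidates: list[str], limit: int = 30) -> tuple[list[str], list[str]]:
    # Split generated candidates into compliant and over-limit.
    compliant = [h for h in candidates if len(h) <= limit]
    rejected = [h for h in candidates if len(h) > limit]
    return compliant, rejected

def needs_second_pass(compliant: list[str], target: int = 12) -> bool:
    # Trigger the shorter-constrained generation pass when the first
    # pass left us under the target count of usable headlines.
    return len(compliant) < target
```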

What We Do With the Output

The live campaign data feeds back into the next generation cycle. When we know that "Book a Table Tonight" outperformed "Reserve Your Spot Now" by 40% on click-through rate for this specific client, that information goes into the brief for the next copy generation. Over time, the inputs become more specific and the output quality improves.
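One way to fold that performance data into the next brief is to rank past headlines by click-through rate and append them as explicit guidance. The data structure and wording here are assumptions for illustration, not our exact format.

```python
def performance_notes(results: list[dict]) -> str:
    # results: [{"headline": str, "ctr": float}, ...] from the live campaign.
    ranked = sorted(results, key=lambda r: r["ctr"], reverse=True)
    lines = ["Past performance (favour phrasing like the top performers):"]
    for r in ranked:
        lines.append(f'- "{r["headline"]}": {r["ctr"]:.1%} CTR')
    return "\n".join(lines)
```

The returned block is appended to the user prompt brief, so the model sees the client-specific evidence rather than generic best-practice advice.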

This feedback loop is the difference between using AI for one-time copy generation and building a genuine content intelligence system. The AI is not doing anything clever with the performance data. We are. We interpret the data and update the inputs. The model responds to better inputs with better outputs.

If you want to see how this pipeline works for your campaigns, or if you are running Google or Meta ads without a systematic copy testing programme, talk to us or get in touch directly.

Claude API · ad copy · automation · pipeline