Service 03

Prompt Engineering.

Structured, branded prompt libraries and multi-step chains that make Claude write in your voice and reason about your domain.

At a glance

What is Prompt Engineering?

Prompt Engineering is the practice of designing structured, branded instructions that make large language models like Claude produce consistent, on-brand, high-quality output every time. At JQ AI SYSTEMS this means voice specifications, anti-pattern libraries, domain knowledge documents, and multi-step prompt chains tailored to one team and one workflow. It is the layer underneath every automation JQ AI SYSTEMS ships.
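
To make "multi-step prompt chain" concrete: it is two or more model calls where the output of one step becomes the input of the next, each step governed by its own prompt from the library. A minimal sketch in Python using the Anthropic SDK; the model name, file paths, and brief are illustrative, not pulled from a real engagement.

    # Two-step chain: draft the content, then rewrite it in brand voice.
    # Assumes prompts live as plain-text files in a versioned library.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def run(system_prompt: str, user_input: str) -> str:
        message = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative model name
            max_tokens=1024,
            system=system_prompt,
            messages=[{"role": "user", "content": user_input}],
        )
        return message.content[0].text

    draft_prompt = open("library/01_draft.txt").read()       # task instructions
    voice_prompt = open("library/02_voice_pass.txt").read()  # voice spec + anti-patterns

    draft = run(draft_prompt, "Brief: announce the Q3 feature release.")
    final = run(voice_prompt, draft)  # step two consumes step one's output
    print(final)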

What's included

Every engagement ships with.

Fit Check

Is this service right for you?

Both columns matter. Read them before booking.

This fits if…

  • Your team already uses Claude or ChatGPT but the output is inconsistent, generic, or needs heavy editing.
  • Brand voice matters enough that off-the-shelf AI tools fall short.
  • You produce content at a volume that makes prompt quality worth investing in.
  • You want the prompts to live in a documented library you can reuse, not get lost in chat histories.

This is not for you if…

  • You only use AI occasionally and editing the occasional output is not a real time cost.
  • You want "a magic prompt that fixes everything" rather than a structured library.
  • You are not willing to share examples of good and bad outputs so I can calibrate.
  • Your content does not have a consistent voice to preserve in the first place.

Process

How it actually runs.

01
Voice audit
You send me 5-10 pieces of content you are happy with plus 3-5 that sound wrong. I extract the voice patterns from the real material, not theory.
02
Specification
I write the voice specification and anti-pattern library based on the audit. You review and sign off before any prompts are built.
03
Prompt build
The production prompt library is built around the specification. Each prompt is tested against real content inputs you supply.
04
Calibration
We run the prompts against fresh briefs together. Iterate on the ones that need it. Lock the final versions.
05
Handoff
Library delivered with documentation, a walkthrough, and instructions on how to extend it as your needs change.

Stack

Built with.

Claude Sonnet · Claude Opus 4.6 · GPT-4o · Claude Cowork · Claude Chat · Obsidian · Git

Live Example

See this in production.

A real system running right now, built on this exact service.

Case Study · Live

AI Social Media Operating System

A 4-agent build for Camille Guillain where prompt engineering sits at the heart of every agent. Brand voice preserved across weekly briefings, client reports, and content pipelines for 4-7 clients simultaneously.

Read Case Study

FAQ

Before you book.

What is the difference between a prompt library and a custom GPT?
A custom GPT lives inside ChatGPT's interface and is tied to OpenAI. A prompt library is portable: the prompts work in Claude, in Python scripts, in Make.com, in any tool that can call an LLM. You own every prompt and can version them in Git like code. Custom GPTs are fine for casual use; prompt libraries are for production.
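
A sketch of what that portability means in practice: the same versioned prompt file drives either provider's API unchanged. The file path and model names are illustrative, and the snippet assumes the official anthropic and openai Python SDKs.

    # One prompt file, tracked in Git, sent to two different providers.
    import anthropic
    from openai import OpenAI

    prompt = open("library/weekly_report.txt").read()  # versioned like code
    brief = "Summarise this week's campaign metrics for the client."

    # Claude via the Anthropic SDK
    claude_reply = anthropic.Anthropic().messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=1024,
        system=prompt,
        messages=[{"role": "user", "content": brief}],
    ).content[0].text

    # GPT-4o via the OpenAI SDK, same prompt file, zero changes
    gpt_reply = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": brief},
        ],
    ).choices[0].message.content
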
Why Claude instead of ChatGPT for prompt engineering work?
Claude handles long-context instructions more reliably, follows format constraints more strictly, and produces fewer of the AI-writing tells that make ChatGPT output easy to spot. For brand-voice work this matters: Claude is better at reading a 2000-word voice spec and actually applying it. I still use GPT-4o for certain tasks, but Claude is the default.
Can you calibrate prompts to bypass AI detection tools?
Partially. AI detectors are unreliable and inconsistent, so "bypassing" them is not a meaningful goal. What you can do is write prompts that produce output indistinguishable from human writing in the ways that matter: voice, rhythm, specificity, avoidance of AI cliches. That is what the anti-pattern library targets.
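
As a sketch of how an anti-pattern library stays enforceable rather than aspirational: keep the banned phrases in a list and lint every output against it before anything ships. The phrases below are placeholders; the real list comes out of the voice audit.

    # Lint model output against the anti-pattern library.
    BANNED = [
        "in today's fast-paced world",
        "game-changer",
        "delve",
        "seamlessly",
    ]

    def lint(output: str, banned: list[str] = BANNED) -> list[str]:
        lowered = output.lower()
        return [phrase for phrase in banned if phrase in lowered]

    draft = "This game-changer will seamlessly transform your workflow."
    hits = lint(draft)
    if hits:
        print("Send back for a rewrite pass:", hits)  # ['game-changer', 'seamlessly']
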
Do you write prompts I can paste into ChatGPT?
Yes. Every prompt in the library is plain text and portable. You can paste it into Claude.ai, ChatGPT, the Anthropic Console, or any API client. The library is built to be tool-agnostic at the top layer, even if I recommend Claude as the runtime.
How many prompts will I end up with?
Depends on your workflow, typically between 5 and 20 prompts. Most teams have fewer actual repeating tasks than they think. A good library has one prompt per task, not twenty variants of the same thing.

Free Consultation

Ready to build
your prompt library?

Book a free 30-minute call. We map your use case, scope the build, and agree on a fixed quote before anything starts.

Book Free 30-min Call