When AI Inputs Become a Compliance Liability
When organisations evaluate AI tools, the conversation usually centres on model capability: accuracy, speed, hallucination rates. These matter. But that framing overlooks a more operationally critical question: who controls the prompt?
In generic AI platforms, prompt creation is left entirely to individual users. Everyone asks the AI in their own words, with their own framing and assumptions. The result is a spectrum of outputs that vary significantly, not because the model changed, but because the instructions it received did.
In regulated environments, that variability is not a minor inconvenience. It is a systemic risk.
Prompts Are Policy
A prompt is not just a question – it is a set of instructions that determines the model’s tone, scope, regulatory language, and output structure. Treating prompt creation as an informal, individual activity is the equivalent of letting every user write their own queries against a live regulatory database.
The prompt is, in effect, a policy decision. And in life sciences, policy decisions require governance.
How CARA Addresses This
CARA enables organisations to define, approve, and deploy pre-optimised prompts at the platform level – surfaced to users through structured interfaces such as dropdowns or action buttons. Instead of improvising, users select a defined AI function aligned with best practice (sketched in code after the list below).
This ensures:
- Consistent regulatory language across documents and authors
- Alignment with internal SOPs without requiring users to memorise them
- Reduced variability between team members
- Auditable AI behaviour – a clear link between prompt and output
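
To make this concrete, here is a minimal sketch of what a platform-owned prompt definition might look like. All of the names here (GovernedPrompt, promptRegistry, resolvePrompt) and the example data are hypothetical illustrations of the pattern, not CARA's actual API:

```typescript
// Hypothetical sketch of a governed prompt definition. These names are
// illustrative only, not CARA's actual API.

interface GovernedPrompt {
  id: string;         // stable identifier surfaced in the UI (dropdown, button)
  version: string;    // version-controlled, like any other policy artefact
  approvedBy: string; // which function signed off on this prompt
  approvedOn: string; // date of approval
  template: string;   // the instructions actually sent to the model
}

// The platform, not the end user, owns the registry of approved prompts.
const promptRegistry: Record<string, GovernedPrompt> = {
  "summarise-clinical-section": {
    id: "summarise-clinical-section",
    version: "2.1.0",
    approvedBy: "regulatory-affairs",
    approvedOn: "2024-05-01",
    template:
      "Summarise the following section using approved regulatory terminology. " +
      "Preserve all dosage figures verbatim. Output as a bulleted list.",
  },
};

// Users select a function; they never author the instructions themselves.
function resolvePrompt(functionId: string, userInput: string): string {
  const prompt = promptRegistry[functionId];
  if (!prompt) {
    throw new Error(`No approved prompt for function: ${functionId}`);
  }
  return `${prompt.template}\n\n---\n\n${userInput}`;
}
```

The design point is that the template lives in platform configuration: changing it is a reviewed, versioned event, not something an individual user can do ad hoc.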
By embedding prompt governance into the platform, CARA transforms AI from an experimental tool into a controlled enterprise capability. AI behaviour becomes repeatable, predictable, and aligned with how a regulated business needs to operate.
Why This Matters for Compliance
Regulators are increasingly scrutinising how AI contributes to submissions, labelling, and clinical documentation. The ability to demonstrate that AI was guided by approved, version-controlled prompts is the foundation of a defensible AI governance position. When outputs trace back to a specific, approved prompt, you have an audit trail. When they are the product of ad hoc user improvisation, you do not.
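As an illustration of what such an audit trail could record – again with hypothetical field names, not CARA's actual schema – each AI call might log the exact prompt version alongside fingerprints of the input and output:

```typescript
// Hypothetical audit record linking an output to the approved prompt
// that produced it. Field names are illustrative, not CARA's schema.
interface AiAuditRecord {
  timestamp: string;     // when the AI function was invoked
  userId: string;        // who invoked it
  promptId: string;      // which approved prompt was used...
  promptVersion: string; // ...and exactly which version of it
  inputHash: string;     // fingerprint of the user-supplied content
  outputHash: string;    // fingerprint of the generated output
}

// Appending one record per call yields a trail in which every output
// is attributable to a specific, approved, version-controlled prompt.
const auditTrail: AiAuditRecord[] = [];

function logAiCall(record: AiAuditRecord): void {
  auditTrail.push(record);
}
```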
Generic tools were not built with this in mind. CARA was.

