Hallucination
When an AI model generates output that is factually incorrect or fabricated — stated with confidence, with no indication that it is wrong. In operational contexts, hallucinations are not a theoretical risk: an AI that invents a product code, a supplier name, or a contract term causes real downstream errors.
What is Hallucination?
Hallucination describes an AI language model producing output that is incorrect, fabricated, or unsupported by the input it was given — and presenting that output as if it were accurate. The model does not know it is wrong. It generates the most plausible-sounding text given its training, and sometimes that text does not correspond to reality.
The word "hallucination" originally described rare edge cases. In production operational systems, it describes a failure mode that must be designed around from the start. The risk is not that the AI occasionally gets a grammar question wrong. The risk is that it invents a product code that does not exist in your ERP, calculates a unit price that does not match the contract, or references a delivery date that was never confirmed.
What Hallucination Looks Like in Operations
Concrete examples of hallucination risk in manufacturing and wholesale operations:
Product code fabrication: An AI agent extracting line items from a supplier invoice generates an ERP article code that is structurally plausible but does not exist in the item master. The entry fails at posting — or worse, matches a similar-looking code for a different product.
Contract term invention: Asked to summarise a supplier contract, the AI states a payment term of Net 30 when the contract specifies Net 45. The downstream payment run uses the wrong term.
Quantity transposition: The AI reads a PDF invoice and transposes a quantity — 1,200 units becomes 2,100. The 3-way match fails with no obvious cause.
Non-existent supplier reference: In a supplier lookup, the AI returns a confident answer referencing a contact name or email that does not exist in the CRM.
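The quantity-transposition case above is exactly what a 3-way match is designed to catch. A minimal sketch in Python (quantities and field names are illustrative, not tied to any specific ERP):

```python
# Minimal 3-way match check: purchase order vs. goods receipt vs. invoice.
# Quantities and parameter names are illustrative only.

def three_way_match(po_qty: int, receipt_qty: int, invoice_qty: int) -> bool:
    """Return True only if all three documents agree on quantity."""
    return po_qty == receipt_qty == invoice_qty

# The PO and goods receipt both say 1,200 units; the AI-extracted invoice
# quantity was transposed to 2,100, so the match fails.
assert three_way_match(1200, 1200, 1200) is True
assert three_way_match(1200, 1200, 2100) is False
```

The check fails "with no obvious cause" precisely because the transposed value looks like a legitimate quantity — which is why the match must compare against source documents rather than trust the extraction.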
Mitigating Hallucination in Production Systems
Hallucination cannot be eliminated from language models, but it can be controlled with the right architecture:
RAG (Retrieval Augmented Generation): Ground the AI's outputs in actual documents and data. Instead of asking the model what a contract says, retrieve the relevant clause and ask the model to interpret it. The model operates on facts, not memory.
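A sketch of the retrieval step, using deliberately naive keyword-overlap scoring (production systems typically use vector embeddings); the clause texts are invented for illustration, and the LLM call itself is represented only by the grounded prompt:

```python
# Retrieval step of a RAG pipeline (sketch). Clauses and scoring are
# illustrative; real systems retrieve via embedding similarity.

CONTRACT_CLAUSES = [
    "Clause 4.1: Payment is due within 45 days of invoice date (Net 45).",
    "Clause 7.2: Deliveries must be confirmed in writing by the buyer.",
    "Clause 9.3: Unit prices are fixed for the contract period.",
]

def retrieve(question: str, clauses: list[str], top_k: int = 1) -> list[str]:
    """Rank clauses by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(clauses,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Put the retrieved clause in front of the model, not its memory."""
    context = "\n".join(retrieve(question, CONTRACT_CLAUSES))
    return (f"Answer using ONLY the contract text below.\n"
            f"Contract text:\n{context}\n\n"
            f"Question: {question}")

prompt = build_grounded_prompt("What is the payment term?")
```

The point of the design is that the model interprets retrieved text instead of recalling it — the Net 45 clause is in the prompt, so the model has no reason to invent Net 30.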
Validation against source systems: Any output that contains a code, ID, name, or quantity should be validated against the authoritative system before use. If the AI generates an ERP article code, check it exists before writing it.
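A minimal sketch of that check, assuming the item master can be loaded as a set of valid codes (the codes here are made up):

```python
# Validate an AI-extracted article code against the authoritative item
# master before posting. ITEM_MASTER and the codes are illustrative.

ITEM_MASTER = {"ART-10023", "ART-10024", "ART-20551"}

def validate_article_code(code: str) -> str:
    """Accept only codes that exist in the item master; reject the rest."""
    if code not in ITEM_MASTER:
        raise ValueError(f"Article code {code!r} not found in item master")
    return code

validate_article_code("ART-10023")    # passes: code exists
# validate_article_code("ART-10099")  # would raise: plausible but fabricated
```

The rejected code never reaches the ERP; the failure surfaces at the validation boundary, where it is cheap to handle, rather than at posting time.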
Confidence thresholds and human review triggers: Design the system to flag outputs where the AI's confidence is low or where the extracted value falls outside expected parameters — and route those for human review rather than proceeding automatically.
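A sketch of such a trigger, assuming the extraction pipeline reports a confidence score and an expected value range per field (the thresholds and numbers are illustrative):

```python
# Route an extraction for human review when reported confidence is low
# or the value falls outside an expected band. All thresholds illustrative.

def needs_review(value: float, confidence: float,
                 expected_min: float, expected_max: float,
                 min_confidence: float = 0.9) -> bool:
    """True if the extraction should go to a human instead of proceeding."""
    return (confidence < min_confidence
            or not expected_min <= value <= expected_max)

# A unit price of 480 at 0.95 confidence inside a 100-1000 expected band
# proceeds automatically; 4800 is flagged even at high confidence.
assert needs_review(480.0, 0.95, 100.0, 1000.0) is False
assert needs_review(4800.0, 0.99, 100.0, 1000.0) is True
assert needs_review(480.0, 0.60, 100.0, 1000.0) is True
```

Note that the range check catches cases the confidence score misses: a hallucinated value is often stated with high confidence, so out-of-band detection is the stronger of the two signals.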
Structured output enforcement: Constrain the AI to output structured data (JSON with defined fields and types) rather than free text wherever possible. Structured outputs are easier to validate and reduce the surface area for fabrication.
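A sketch of validating such structured output against a fixed schema, here hand-rolled with the standard library (the field names are illustrative; real pipelines often use a schema library or the model provider's JSON-mode features):

```python
# Enforce structured output: the model must return JSON matching a fixed
# schema, and anything that fails validation is rejected, not trusted.
import json

SCHEMA = {"article_code": str, "quantity": int, "unit_price": float}

def parse_line_item(raw: str) -> dict:
    """Parse and validate one extracted invoice line item."""
    data = json.loads(raw)                # must at least be valid JSON
    if set(data) != set(SCHEMA):          # exactly the defined fields
        raise ValueError(f"Unexpected fields: {set(data) ^ set(SCHEMA)}")
    for field, expected_type in SCHEMA.items():
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
    return data

item = parse_line_item(
    '{"article_code": "ART-10023", "quantity": 1200, "unit_price": 4.75}'
)
```

Free text gives the model room to embellish; a closed schema means every field is a known name with a known type, so each one can be validated independently before use.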
The goal is not a zero-hallucination system — that does not exist. The goal is a system where hallucinations are caught before they cause downstream errors, and where the catch rate is high enough for the output to be trusted in production.