AI Hallucination: Everything HR Leaders Need to Know

Written by Tim | Apr 2, 2026 4:40:39 PM

A few months ago, I was standing in front of a room of HR professionals, and I made a statement that visibly unsettled a few people:

“Hallucination is never going to be fixed.”

Hallucination is structural, and it’s baked into the very way large language models work. Once you understand that, everything about how you use AI safely changes.

At Innovation Visual, we've been building AI systems and agents for businesses for several years. One of the most important parts of our work is not the technology itself, but educating the leaders who will be responsible for overseeing it. And in HR, the stakes around this particular issue are especially high.

What Hallucination Actually Is

Most people who have used ChatGPT or similar tools have encountered hallucination: the AI confidently stating something that is entirely false, such as a case study that doesn't exist, a legal precedent that was never decided, or a salary benchmark plucked from thin air. The instinctive reaction is to assume this is a teething problem, a flaw that will eventually be engineered out.

It won't.

Here’s Why

At their core, large language models (LLMs) are prediction engines. They have consumed vast quantities of text, including books, websites, academic papers, forums, and code, and from that data, they have learned the statistical relationships between words, phrases, and ideas. When you give an LLM a prompt, it is not searching a database or consulting a reference library. It is predicting, token by token, what the most plausible next piece of text should be.

“The model doesn't look things up. It predicts what a plausible answer would look like, and that is a fundamentally different thing.”

When the model has been trained on data that covers the topic you're asking about, that prediction is often excellent. But when it hasn't, or when you're asking about something very recent, very niche, or very specific, it will still produce a confident, well-structured answer. It simply can't signal uncertainty the way a human expert would by saying 'I don't know, let me check'. Its entire architecture is built around generating plausible-sounding text.
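To make that concrete, here is a toy sketch of the next-token loop at the heart of every LLM. Everything in it is invented for illustration (the three-word contexts, the probabilities, the generate function); a real model computes its probabilities with billions of learned parameters. The point to notice is what's missing: there is no lookup step, and no 'I don't know' branch anywhere.

```python
import random

# A toy stand-in for a language model's next-token step. A real LLM
# computes these probabilities with billions of learned parameters,
# but the mechanism is the same: score candidate tokens given the
# context, then emit a plausible one. Note what is absent: there is
# no lookup step and no "I don't know" branch anywhere in this loop.
toy_next_token_probs = {
    ("The", "notice", "period"): {"is": 0.6, "was": 0.25, "shall": 0.15},
    ("notice", "period", "is"): {"12": 0.4, "4": 0.35, "one": 0.25},
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        dist = toy_next_token_probs.get(tuple(tokens[-3:]))
        if dist is None:
            break  # our toy table runs out; a real model never does
        candidates, weights = zip(*dist.items())
        tokens.append(random.choices(candidates, weights=weights)[0])
    return " ".join(tokens)

print(generate(["The", "notice", "period"], steps=2))
# e.g. "The notice period is 12" -- fluent and confident,
# whether or not 12 weeks is actually correct for your case.
```

The output reads exactly as fluently when the figure it lands on is wrong as when it is right, which is the whole problem.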

A Real-World AI Warning

I shared a vivid example of this during a recent seminar. The former Chief Constable of the West Midlands Police lost his position after referencing, in formal parliamentary testimony, a football match that had never taken place. The information had been generated by an AI system and had passed through without adequate verification. It sounded completely credible. The match simply didn't exist.

In that instance, the consequences were career-ending and reputationally severe. In an HR context, the consequences could be just as serious, or worse, because HR decisions directly affect people's livelihoods, their careers, and their legal rights.

The HR-Specific AI Risk

Consider for a moment the kinds of outputs HR professionals might ask an AI to produce:

  • Salary benchmarking data for a specific role in a specific geography

  • Policy documents drawing on current employment legislation

  • Interview scoring rationales or candidate assessments

  • Redundancy process documentation referencing statutory requirements

  • References to relevant case law or tribunal precedents

In every single one of these cases, a hallucinated detail (a fabricated salary figure, a misquoted statutory notice period, an invented tribunal ruling) could result in financial loss, legal challenge, or discriminatory outcomes.

And from a governance perspective, under UK and EU law, you remain responsible for decisions made using AI, even if the AI generated the faulty information. The EU AI Act is explicit on this. Ignorance of how the technology works is not a defence.

Why Prompting Is Your Primary Mitigation

The good news is that hallucination is largely manageable through the quality of your prompting and the constraints you build into your AI systems. This is something we work on extensively with clients.

At the most basic level, a well-structured prompt dramatically reduces the probability of hallucination. We use a framework we call CICO as the foundation of good prompting:

  • Context

  • Instructions

  • Constraints

  • Output

When applied to tasks where factual accuracy matters, the Constraints element is where you do the most important work, as the example prompt after this list shows:

  • Instruct the model to only use information from sources you provide, not its general training data

  • Ask it to flag where it is uncertain or where it cannot find a specific answer in the provided sources

  • Tell it explicitly to say 'I don't have this information' rather than estimate or infer

  • Require it to cite the source within the response for any factual claim (salary data, legislation, dates, statistics)
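Here is what that looks like in practice: an illustrative CICO-structured prompt for a salary-benchmarking query, written as a Python template so it can be dropped into a script. The wording and the {documents} placeholder are examples I've sketched for this article, not our production template.

```python
# An illustrative CICO-structured prompt for a salary-benchmarking
# query. The CICO structure is the framework described above; this
# particular wording and placeholder are examples only.
CICO_PROMPT = """\
Context:
You are assisting a UK HR team. The documents below are the ONLY
source material you may use.

Instructions:
Summarise the salary range for the role described in the attached
benchmarking report.

Constraints:
- Use only the provided documents, never your general training data.
- If the documents do not contain the answer, reply exactly:
  "I don't have this information."
- Flag any point where the documents are ambiguous or conflicting.
- Cite the source document and section for every figure you quote.

Output:
A summary of no more than 150 words, followed by a bulleted list of
every cited figure with its source document and section.

Documents:
{documents}
"""

prompt = CICO_PROMPT.format(documents="[paste verified source text here]")
```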

For more sophisticated deployments, the AI agents we build for clients go further. We connect the model to verified, current data sources via retrieval-augmented generation (RAG), so that when it needs to draw on specific information, it is genuinely looking it up rather than predicting from training data. This is the approach we take when building anything mission-critical.
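In miniature, the RAG pattern looks like the sketch below. It is deliberately simplified: real deployments use embedding search over a vector database rather than the keyword overlap used here, and both documents are invented examples. What matters is the shape of it: retrieve from a verified store first, then hand the model only what was found.

```python
# A minimal sketch of the RAG pattern: look facts up in a verified
# store first, then hand only those facts to the model. Real systems
# use embedding search over a vector database; the keyword overlap
# below is a stand-in, and both documents are invented examples.

VERIFIED_DOCS = [
    {"id": "notice-periods-2025",
     "text": "Statutory minimum notice is one week per full year of "
             "service, capped at 12 weeks."},
    {"id": "salary-survey-2025",
     "text": "Median salary, mid-level HR adviser, Birmingham: "
             "GBP 38,500."},
]

def retrieve(query, docs, top_k=1):
    """Rank verified documents by crude keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q & set(d["text"].lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(query):
    sources = retrieve(query, VERIFIED_DOCS)
    cited = "\n".join(f"[{d['id']}] {d['text']}" for d in sources)
    return ("Answer using ONLY the sources below, citing the source id.\n"
            "If they don't cover the question, say "
            "'I don't have this information.'\n\n"
            f"Sources:\n{cited}\n\nQuestion: {query}")

# The model now receives verified text to quote, not a blank cheque:
print(build_grounded_prompt("What is the statutory minimum notice period?"))
```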

Building Human Oversight into Your Processes

There is a broader organisational point here too. One of the principles I return to repeatedly when working with HR and senior leadership teams is 'human in the loop': building checkpoints into AI-assisted processes where a person reviews and approves before any decision or output with consequences is acted upon.

This doesn't mean that every AI-assisted piece of work needs a committee sign-off. But for anything with legal, financial, or people consequences — and in HR, that is a lot of things — there should be a clear protocol. Who checks the output? What do they check for? How is that review recorded?
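Those three questions are worth making concrete. The sketch below shows the kind of review record such a checkpoint might produce; the field names and example values are illustrative, not a prescribed schema.

```python
# An illustrative review record for a human-in-the-loop checkpoint,
# answering the three questions above: who checks, what they check
# for, and how the review is recorded. Field names and values are
# examples, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    output_id: str               # which AI output was reviewed
    reviewer: str                # who checked it
    checks_performed: list[str]  # what they checked for
    approved: bool               # the decision taken
    notes: str = ""
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = ReviewRecord(
    output_id="redundancy-letter-draft-07",
    reviewer="hr.business.partner@example.com",
    checks_performed=[
        "statutory notice period verified against gov.uk guidance",
        "salary figures cross-checked against payroll",
        "all cited case law confirmed to exist",
    ],
    approved=False,
    notes="Notice period misstated; returned to drafter.",
)
```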

“Understanding how AI hallucination works doesn't make AI less useful. It makes you a better and more responsible user of it, and that is exactly the kind of leadership HR functions need right now.”

The organisations I see getting this right are the ones that understand the technology well enough to design sensible guardrails around it. That requires investment in education, in policy, and in the kind of practical AI literacy that turns a potentially risky tool into a genuine strategic asset.


Where to Go from Here

If you’re already thinking about how AI fits into your HR function, you’re asking the right questions. The next step is clarity around where AI can genuinely add value, where the risks sit, and how to put the right guardrails in place from the start.

That’s exactly what we help teams work through. If you want a structured view of where AI could (and should) be used in your organisation, our AI Opportunities Audit is a good place to start. Discover more about our AI opportunities audits.

Or, if it would be more useful to explore this with your wider leadership team, we run small, practical AI workshops for leaders focused on real use cases, risk, and implementation. Find out more about our AI workshops for leaders.

And if this isn’t something you personally own but you’re seeing the risks or opportunities emerging, feel free to pass this on to the right person internally. In most businesses, HR is right at the centre of how AI gets adopted responsibly, so even starting that conversation can make a real difference.