Hallucinations are factual inaccuracies or fabricated information produced by an LLM. At Boosted.ai, we strive to mitigate Alfa’s hallucinations as much as possible.
Hallucination causes:
- Pattern-based generation: Rather than knowing facts, LLMs predict the next words in a sequence based on patterns in the data they were trained on
- Lack of real-world understanding: Models do not understand information the way humans do, and they have no access to real-time knowledge, which can lead them to produce inaccurate or invented information
- Training data limitations: The quality and scope of the information used to train the model can limit its accuracy
How we mitigate them:
- Using retrieval-augmented generation (RAG) grounds Alfa’s answers in retrieved source documents, which limits hallucinations (see the first sketch after this list)
- We verify and validate the information retrieved through RAG, reducing the hallucination rate from roughly 5% to about 1 in 10,000 (0.01%)
- We’ve created a framework where users can be very explicit with their asks: the less the LLM has to interpret, the less likely it is to make mistakes (see the second sketch after this list)
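To make the first two points concrete, here is a minimal sketch of a RAG pipeline with a verification pass. It is not Alfa’s actual implementation: the `retriever` object, its `search` method, and the model name are placeholders. The idea is simply that the answer is generated only from retrieved context and is then checked against that same context before it reaches the user.

```python
from openai import OpenAI  # any chat-completion client works; OpenAI is used only as an example

client = OpenAI()

def ask_llm(prompt: str, model: str = "gpt-4o") -> str:
    """Single chat-completion call."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def answer_with_rag(question: str, retriever) -> str:
    # 1. Retrieve supporting documents (retriever.search is a placeholder interface).
    documents = retriever.search(question, top_k=5)
    context = "\n\n".join(doc.text for doc in documents)

    # 2. Generate an answer grounded only in the retrieved context.
    answer = ask_llm(
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Verification pass: check that every claim in the answer
    #    is actually supported by the retrieved context.
    verdict = ask_llm(
        "Does the context fully support every factual claim in the answer? "
        "Reply SUPPORTED or UNSUPPORTED.\n\n"
        f"Context:\n{context}\n\nAnswer:\n{answer}"
    )
    if "UNSUPPORTED" in verdict.upper():
        return "I could not verify an answer from the available sources."
    return answer
```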
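The third point, giving the model less to interpret, can be illustrated with a structured request. The field names below are hypothetical and not Alfa’s actual schema; the point is that an explicit, constrained ask leaves fewer gaps for the model to fill in on its own.

```python
from dataclasses import dataclass

@dataclass
class ScreenRequest:
    """An explicit, structured ask; field names are illustrative only."""
    universe: str   # e.g. "S&P 500"
    metric: str     # e.g. "revenue growth, trailing 12 months"
    direction: str  # "top" or "bottom"
    count: int      # how many names to return

def to_prompt(req: ScreenRequest) -> str:
    # Every constraint is spelled out, so the model has nothing to guess.
    return (
        f"From the {req.universe} universe, list the {req.direction} {req.count} "
        f"companies by {req.metric}. Use only the retrieved data provided; "
        f"do not estimate missing values."
    )

print(to_prompt(ScreenRequest("S&P 500", "revenue growth, trailing 12 months", "top", 10)))
```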