DataStax CTO Discusses RAG’s Role in Reducing AI Hallucinations

Retrieval Augmented Generation (RAG) has become vital for IT leaders and enterprises looking to implement generative AI. By using a large language model (LLM) together with RAG, enterprises can ground an LLM in enterprise data, improving the accuracy of outputs.

But how does RAG work? What are the use cases for RAG? And are there any real alternatives?

roosho sat down with Davor Bonaci, chief technology officer and executive vice president at database and AI company DataStax, to learn how RAG was being leveraged in the market during the rollout of generative AI in 2024 and what he sees as the technology's next step in 2025.

What is Retrieval Augmented Generation?

RAG is a technique that improves the relevance and accuracy of generative AI LLM outputs by adding extended, or augmented, context from an enterprise. It is what allows IT leaders to use generative AI LLMs for enterprise use cases.

Bonaci explained that while LLMs have “basically been trained on all the information available on the internet,” up to a certain point in time depending on the model, their strengths in language and general knowledge are offset by significant and well-known problems, such as AI hallucinations.

SEE: Zetaris on why federated data lakes are the future for powering AI

“If you want to use it in an enterprise setting, you must ground it in enterprise data. Otherwise, you get a lot of hallucinations,” he said. “With RAG, instead of just asking the LLM to produce something, you say, ‘I want you to produce something, but please consider these things that I know to be accurate.’”

How does RAG work in an enterprise environment?

RAG gives an LLM a connection to an enterprise data set, such as a knowledge base, a database, or a document set. For example, DataStax’s main product is its vector database, Astra DB, which enterprises are using to support the development of AI applications.

In practice, a query input by a user goes through a retrieval step, a vector search, that identifies the most relevant documents or pieces of information from a pre-defined knowledge source. This could include enterprise documents, academic papers, or FAQs.

The retrieved information is then fed into the generative model as additional context alongside the original query, allowing the model to ground its response in real-world, up-to-date, or domain-specific knowledge. This grounding reduces the risk of hallucinations, which can be deal breakers for an enterprise.
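The retrieve-then-augment flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: the bag-of-words "embedding," the document list, and the `build_prompt` helper are all hypothetical stand-ins for a real embedding model, a vector database such as Astra DB, and an LLM call.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector. A real RAG system would use
    a learned embedding model and store vectors in a vector database."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Retrieval step: rank documents by vector similarity to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Augmentation step: prepend the retrieved context to the user's
    query before it is sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Consider the following facts:\n{context}\n\nQuestion: {query}"

docs = [
    "International roaming is included in the Premium plan only.",
    "The Basic plan includes unlimited national calls.",
]
prompt = build_prompt("Does my plan include international roaming?", docs)
```

The generative model then answers from the grounded prompt rather than from its training data alone, which is the mechanism Bonaci describes for reducing hallucinations.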

How much does RAG improve the output of generative AI models?

The difference between using generative AI with and without RAG is “night and day,” Bonaci said. For an enterprise, an LLM’s propensity to hallucinate essentially makes it “unusable,” or suitable only for very limited use cases. The RAG approach is what opens the door to generative AI for enterprises.

“At the end of the day, they [LLMs] have knowledge from seeing things on the internet,” Bonaci explained. “But if you ask a question that is kind of out of the left field, they’re going to give you a very confident answer that may … be completely wrong.”

SEE: Generative AI has become a source of costly mistakes for enterprises

Bonaci noted that RAG techniques can boost the accuracy of LLM outputs to over 90% for non-reasoning tasks, depending on the models and the benchmarks used. For complex reasoning tasks, they are more likely to deliver between 70% and 80% accuracy using RAG techniques.

What are some RAG use cases?

RAG is used across a number of popular generative AI use cases for organisations, including:

Automation

Using LLMs augmented with RAG, enterprises can automate repeatable tasks. A common use case for automation is customer support, where the system can be empowered to search documentation, provide answers, and take actions like cancelling a ticket or making a purchase.

Personalisation

RAG can be leveraged to synthesise and summarise large amounts of information. Bonaci gave the example of customer reviews, which can be summarised in a personalised way that is relevant to the user’s context, such as their location, past purchases, or travel preferences.

Search

RAG can be applied to improve search results in an enterprise, making them more relevant and context-specific. Bonaci noted how RAG helps streaming service users find movies or content relevant to their location or interests, even when the search terms don’t exactly match the available content.

How can knowledge graphs be used with RAG?

Using knowledge graphs with RAG is an “advanced version” of basic RAG. Bonaci explained that while a vector search in basic RAG identifies similarities in a vector database, making it well-suited for general knowledge and natural human language, it has limitations for certain enterprise use cases.

In a scenario where a mobile phone company offers multiple tiered plans with varying inclusions, a customer inquiry, such as whether international roaming is included, requires the AI to make a decision. A knowledge graph can help organise the information so the system can work out which terms apply.

SEE: Digital maturity key to success in AI for cybersecurity

“The problem is the content in those plan documents are conflicting with each other,” Bonaci said. “So the system doesn’t know which one is true. So you could use a knowledge graph to help you organise and relate information correctly, to help you resolve these conflicts.”
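The phone-plan scenario can be sketched with a tiny knowledge graph of (subject, relation, object) triples. The triples, customer names, and helper functions below are all hypothetical; the point is that explicit relations let the system resolve which plan’s terms apply, rather than retrieving conflicting plan documents by similarity alone.

```python
# Hypothetical mini knowledge graph: facts as (subject, relation, object).
triples = [
    ("Basic plan", "includes", "national calls"),
    ("Premium plan", "includes", "national calls"),
    ("Premium plan", "includes", "international roaming"),
    ("Alice", "subscribes_to", "Premium plan"),
    ("Bob", "subscribes_to", "Basic plan"),
]

def objects(subject, relation):
    """All objects related to `subject` by `relation`."""
    return {o for s, r, o in triples if s == subject and r == relation}

def plan_includes(customer, feature):
    """Answer a customer question by walking the graph:
    customer -> subscribed plan -> that plan's inclusions."""
    return any(
        feature in objects(plan, "includes")
        for plan in objects(customer, "subscribes_to")
    )

print(plan_includes("Alice", "international roaming"))  # True: Premium plan
print(plan_includes("Bob", "international roaming"))    # False: Basic plan
```

In a graph-augmented RAG system, facts resolved this way would be added to the LLM’s context alongside (or instead of) vector-retrieved documents.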

Are there any alternatives to RAG for enterprises?

The main alternative to RAG is fine-tuning a generative AI model. With fine-tuning, instead of supplying enterprise data in the prompt, the data is fed into the model itself, priming the model for use in a way that leverages that enterprise data.

Bonaci said that, so far, RAG has been the method widely agreed upon in the industry as the most effective way to make generative AI relevant for an enterprise.

“We do see people fine-tuning models, but it just solves a small niche of problems, and so it has not been widely accepted as a solution,” he said.
