Hello. This is Narita, Fukui, and Peng from the Artificial Intelligence Laboratory.
To promote the use of generative AI in enterprises, Fujitsu has developed an "Enterprise-wide Generative AI Framework Technology" that can flexibly respond to diverse and changing corporate needs, handle the vast amounts of data held by a company, and easily comply with laws and regulations. The framework has been launched in stages since July 2024 as part of the Fujitsu Kozuchi (R&D) AI service lineup.
Some of the challenges that enterprise customers face when leveraging specialized generative AI models include:
- Difficulty handling the large amounts of data the enterprise requires
- Difficulty meeting cost, response-speed, and various other requirements with generative AI
- The need to comply with corporate rules and regulations
To address these challenges, the framework consists of the following technologies:
- Fujitsu Knowledge Graph Enhanced RAG (*1)
- Generative AI Amalgamation Technology
- Generative AI Audit Technology
In this series, we will introduce Fujitsu Knowledge Graph Enhanced RAG in weekly installments. We hope it helps you solve your problems. At the end of the article, we also explain how to try out the technology.
Fujitsu Knowledge Graph Enhanced RAG Technology Overcomes the Weakness of Generative AI that Cannot Accurately Reference Large-Scale Data
Existing RAG techniques, which make generative AI refer to related documents such as internal documents, have the problem of not accurately referencing large-scale data. To solve this, we have developed Fujitsu Knowledge Graph Enhanced RAG (hereinafter, Fujitsu KG Enhanced RAG), which extends existing RAG technology by automatically creating a knowledge graph that structures the huge amounts of data owned by companies, such as corporate regulations, laws, manuals, and videos. This expands the amount of data an LLM can refer to from the hundreds of thousands to millions of tokens of existing RAG to more than 10 million tokens. In this way, knowledge based on relationships in the knowledge graph can be accurately fed to the generative AI, enabling logical reasoning and allowing the rationale for outputs to be shown.
This technology consists of four component technologies, depending on the target data and the application scenario.
(1) Root Cause Analysis (already available)
This technology creates a report on the occurrence of a failure based on system logs and failure case data, and suggests countermeasures based on similar failure cases.
(2) Question & Answer (published today)
This technology makes it possible to conduct advanced Q&A based on a comprehensive view of a large amount of document data such as product manuals.
(3) Software Engineering (to be published around 25th Oct.)
This technology not only understands source code, but also generates high-level functional design documents, summaries, and enables modernization.
(4) Vision Analytics (to be published around 1st Nov.)
This technology can detect specific events and dangerous actions from video data, and even propose countermeasures.
In this article, I will introduce (2) Question & Answer (hereinafter, Q&A) in detail.
What is the Fujitsu KG Enhanced RAG for Q&A?
In order to utilize generative AI in a company, it is important to handle the large amounts of diverse data the company holds. Generative AI typically has not learned company-specific data, so it relies on RAG techniques that augment its capabilities by combining it with external data sources. While this approach improves answer accuracy, it has the drawback that it is difficult to extract information correctly and to answer questions that require comparison and inference.
To answer such complex questions, a technique has been proposed in which the important knowledge in many text documents is structured as a knowledge graph and passed to the LLM.
Example question.
Which release date is earlier, the AI chatbot developed by OpenAI or the AI chatbot developed by Google?
A human can answer this question by reviewing reference documents such as Wikipedia. However, such documents are often large and complex, especially in business situations.
For example, if you search Wikipedia to answer the above question, the related texts would look something like this:
ChatGPT is a generative artificial intelligence chatbot developed by OpenAI. Launched in 2022 based on the GPT-3.5 large language model (LLM), it was later updated to use the GPT-4 architecture. ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language.
Source: Wikipedia (ChatGPT)
Gemini, formerly known as Bard, is a generative artificial intelligence chatbot developed by Google. Based on the large language model (LLM) of the same name, it was launched in 2023 after being developed as a direct response to the rise of OpenAI's ChatGPT. It was previously based on PaLM, and initially the LaMDA family of large language models.
Source: Wikipedia (Gemini (chatbot))
If, from these large documents, you can create a knowledge graph of just the information needed to answer the question, for example:
{subject: "Google," relation: "developed," object: "Gemini"}
{subject: "OpenAI," relation: "developed," object: "ChatGPT"}
{subject: "Gemini," relation: "Launched," object: "2023"}
{subject: "ChatGPT," relation: "Launched," object: "2022"}
The answer will be obvious.
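To see why this helps, here is a minimal, illustrative Python sketch (not the product implementation) showing how the question can be answered directly from those triples:

```python
# Illustrative only: once the relevant facts are reduced to triples,
# the comparison becomes a simple lookup instead of reading long articles.

triples = [
    {"subject": "Google", "relation": "developed", "object": "Gemini"},
    {"subject": "OpenAI", "relation": "developed", "object": "ChatGPT"},
    {"subject": "Gemini", "relation": "Launched", "object": "2023"},
    {"subject": "ChatGPT", "relation": "Launched", "object": "2022"},
]

def launch_year(developer: str) -> tuple[str, int]:
    """Follow the 'developed' edge, then the 'Launched' edge, for a developer."""
    chatbot = next(t["object"] for t in triples
                   if t["subject"] == developer and t["relation"] == "developed")
    year = next(int(t["object"]) for t in triples
                if t["subject"] == chatbot and t["relation"] == "Launched")
    return chatbot, year

earlier = min(["OpenAI", "Google"], key=lambda d: launch_year(d)[1])
print(f"{launch_year(earlier)[0]} (by {earlier}) was released earlier.")
# -> ChatGPT (by OpenAI) was released earlier.
```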
Fujitsu KG Enhanced RAG for Q&A is a technology that automatically structures (knowledge graphs) important knowledge from vast and complex text documents, and lists business-specific information in a unified format for comparison and reasoning, enabling answers to complex questions.
Learn more!
This chapter introduces the process of Fujitsu KG Enhanced RAG for Q&A.
Step 1. Creating graph schemas
Step 2. Knowledge extraction from business documents
Step 3. Creating a question-specific knowledge graph
Step 4. Q&A
Step 1. Creating graph schemas
The first step is to create a graph schema from the questions the Q&A is expected to handle. A graph schema is a knowledge graph that captures the characteristic patterns of question sentences. We think the questions asked about a given business document will follow patterns to some extent. For example, if the questions are about a music band, they will tend to ask about the members and the members' nationalities. Once the target data is determined, the tendency of the questions can be patterned to some extent.
To realize this idea, we extract the "types" of questions as a graph schema using hypothetical questions that are likely to be used in Q&A.
For example, suppose you have a record company's Q&A system, and users often ask the following questions:
- "Who are the members of group A?"
- "What songs did Group A compose?"
- "Where is Mr. B from?"
By abstracting information from such questions, you can characterize the questions.
Example.
group A → <Group Name>
Who → <Person>
compose → <Production>
songs → <Work>
Mr. B → <Person>
Where → <Location>
This is illustrated in the following graph schema:
By utilizing a graph schema that summarizes the patterns of questions, you can effectively extract the necessary knowledge graphs from a vast amount of knowledge graphs for answering questions.
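As a rough illustration of this idea, here is a minimal Python sketch, under the assumption that each hypothetical question has already been annotated with entity and relation types (in practice this typing could be done by an LLM). The relation-type names <Membership> and <Origin> are our own illustrative labels, not part of the product.

```python
# A minimal sketch of Step 1 (illustrative, not the actual implementation).
# Each hypothetical question is annotated with the type asked for, the relation
# type, and the type of the entity it is about; the graph schema is then just
# the set of distinct type-level patterns observed across those questions.

annotated_questions = [
    # (question, type asked for, relation type, type the question is about)
    ("Who are the members of group A?", "<Person>",   "<Membership>", "<Group Name>"),
    ("What songs did Group A compose?", "<Work>",     "<Production>", "<Group Name>"),
    ("Where is Mr. B from?",            "<Location>", "<Origin>",     "<Person>"),
]

# Keep only the abstract patterns, dropping the concrete wording.
graph_schema = {(target, relation, source)
                for _question, target, relation, source in annotated_questions}

for pattern in sorted(graph_schema):
    print(pattern)
# ('<Location>', '<Origin>', '<Person>')
# ('<Person>', '<Membership>', '<Group Name>')
# ('<Work>', '<Production>', '<Group Name>')
```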
Step 2. Knowledge extraction from business documents
Next, create a Hyper-relational Knowledge Graph (hereinafter, HRKG) from your business documents. Knowledge graphs are typically represented by triples (a simple statement expressed as three elements: subject, predicate, and object). An HRKG, on the other hand, uses hypertriples (quadruples consisting of a subject, predicate, object, and a link to the source document of the sentence).
Point.
The hypertriple's link to the source document is used as a fallback when a question cannot be answered within KG Enhanced RAG.
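As a concrete picture of a hypertriple, here is a minimal sketch; the class and field names are our own assumptions for illustration, not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HyperTriple:
    """A triple plus a link back to the document it was extracted from."""
    subject: str
    predicate: str
    obj: str         # the object of the triple ("obj" avoids shadowing the builtin)
    source_url: str  # used as a fallback when the graph alone cannot answer

hrkg = [
    HyperTriple("ChatGPT", "developed by", "OpenAI",
                "https://en.wikipedia.org/wiki/ChatGPT"),
    HyperTriple("ChatGPT", "launched in", "2022",
                "https://en.wikipedia.org/wiki/ChatGPT"),
]
print(hrkg[0].source_url)
```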
Step 3. Creating a question-specific knowledge graph
In this step, we extract the parts of the knowledge graph that are useful for answering the question.
I will continue the explanation assuming the following question is entered:
Example question.
Where was the band formed with Andrew Wood as lead singer?
- Generate question-specific graph schema
Check the question pattern by matching the question against the graph schema. To do so, first generate a graph schema from the question sentence using the same procedure as in Step 1, compare it with the graph schemas extracted from the hypothetical question collection, and select the closest one.
This allows you to extract the patterns close to your question from the expected-question patterns and use them as a question-specific graph schema.
- Extract the question-specific HRKG
By matching the generated question-specific graph schema against the HRKG, the parts of the HRKG closely related to the question are extracted as a Query-aligned HRKG.
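The following is a minimal, illustrative sketch of this matching step (not the actual implementation). It assumes each HRKG edge carries type labels for its subject and object; relation-type names such as <Formation> and <Membership> are our own assumptions for illustration.

```python
# A question-specific graph schema: type-level patterns selected in the previous step.
question_schema = {
    ("<Person>", "<Membership>", "<Group Name>"),
    ("<Group Name>", "<Formation>", "<Location>"),
}

# HRKG edges annotated with types: (subject, subject type, relation type, object, object type)
typed_hrkg = [
    ("Andrew Wood", "<Person>", "<Membership>", "Mother Love Bone", "<Group Name>"),
    ("Mother Love Bone", "<Group Name>", "<Formation>", "Seattle", "<Location>"),
    ("Mother Love Bone", "<Group Name>", "<Production>", "Apple", "<Work>"),
]

# Keep only edges whose type-level pattern appears in the question-specific schema.
query_aligned_hrkg = [
    (s, r, o)
    for s, s_type, r, o, o_type in typed_hrkg
    if (s_type, r, o_type) in question_schema
]
print(query_aligned_hrkg)
# [('Andrew Wood', '<Membership>', 'Mother Love Bone'),
#  ('Mother Love Bone', '<Formation>', 'Seattle')]
```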
Step4. Q&A
In this step, the query text and the Query-aligned HRKG are passed to the LLM to retrieve the answer. You can improve the accuracy of your answers by matching the question-specific graph schema to filter out triples that are not necessary and provide the LLM with only the most relevant triples. In this example, the answer is "Seattle".
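As a rough sketch of this last step (illustrative only; the product's actual prompt and LLM integration are not shown here), the Query-aligned HRKG is simply serialized into the prompt so the LLM answers from those facts and can cite them as its rationale. `call_llm` below is a placeholder for any chat-completion API.

```python
def build_prompt(question: str, triples: list[tuple[str, str, str]]) -> str:
    """Serialize the Query-aligned HRKG into the prompt as a list of facts."""
    facts = "\n".join(f"- ({s}, {r}, {o})" for s, r, o in triples)
    return ("Answer the question using only the facts below, "
            "and cite the facts you used.\n"
            f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:")

query_aligned_hrkg = [
    ("Andrew Wood", "<Membership>", "Mother Love Bone"),
    ("Mother Love Bone", "<Formation>", "Seattle"),
]
prompt = build_prompt(
    "Where was the band formed with Andrew Wood as lead singer?",
    query_aligned_hrkg,
)
# answer = call_llm(prompt)  # placeholder; the expected answer here is "Seattle"
print(prompt)
```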
To Try out the Fujitsu KG Enhanced RAG for Q&A
From here, I'll actually use Fujitsu KG Enhanced RAG for Q&A.
The main steps are as follows:
Step 1. Register knowledge graphs from the business documents used for questions
Step 2. Register a graph schema from hypothetical questions
Step 3. Ask questions in chat
Step 1. Registering knowledge graphs from business documents used for questions
- Upload a business document
In addition to text, you can also specify URLs for business documents.
In this example, the Fujitsu Press Release Article is registered as a business document.
- Register a knowledge graph from the uploaded business document
Point. Customizing Prompts
The prompt for extracting the knowledge graph can be customized to your needs.
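For example, a customized extraction prompt might look something like the following. This is purely illustrative; the actual template and placeholders in the product may differ.

```python
# Illustrative only: steer extraction toward domain-specific entity and relation types.
extraction_prompt = """\
Extract knowledge from the document below as (subject, predicate, object) triples.
Focus on entities such as <Technology>, <Organization>, and <Release Date>,
and on relations such as "jointly developed with" and "released on".
Return one triple per line, followed by the sentence it was extracted from.

Document:
{document_text}
"""
```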
Step 2. Register graph schema from hypothetical questions
- Upload the hypothetical questions
※ In this example, I uploaded the set of hypothetical questions provided by default.
- Register a graph schema based on the uploaded set of hypothetical questions
Step 3. Start a chat
You have completed the preliminary preparation.
Now it's time to start chatting.
- Open Chat Thread
Specify the registered knowledge graph and graph schema, and open a chat thread.
- Chat
As a question about the business document, I'll ask: "Who are the joint research partners of the technology released at the same time as the hallucination detection technology?"
This question is intentionally complex, and the following points are important:
- We need to know what technology was released at the same time as the hallucination detection technology.
- We need to know the organizations collaborating on this technology.
The correct answer is "Ben-Gurion University".
Point.
The article in the press release shows that the answer is valid.
Fujitsu’s new technology not only detects phishing URLs, but also increases the AI’s resistance against existing attacks tricking AI models into making a deliberate misjudgment to ensure highly reliable responses by the AI. The newly developed technology leverages a technique jointly developed by Fujitsu and Ben-Gurion University of the Negev at the Fujitsu Small Research Lab established at Ben-Gurion University. The technology leverages the tendency that hostile entities often attack a single type of AI model, and detects malicious data by processing information with various different AI models and evaluating the difference in rationale for the judgment result.
Furthermore, Fujitsu KG Enhanced RAG for Q&A has a feature that allows you to reference the triples used in the answer.
This also shows that the system accurately recognizes the business documents.
To Try out the KG Enhanced RAG for Q&A
Fujitsu Knowledge Graph Enhanced RAG for Q&A makes good use of knowledge graphs to answer questions with high accuracy.
If you are interested in using this in your business, please contact us here.
We hope you will take advantage of its benefits.
*1: RAG (Retrieval-Augmented Generation) technology. A technology that extends the capabilities of generative AI by combining it with external data sources.