
fltech - The Fujitsu Research Technology Blog

A technology blog where researchers at Fujitsu Research write about a variety of topics

Introducing The Fujitsu Neuro-Symbolic Explainer

Hello, I'm Dr. Joe Townsend from Fujitsu Research of Europe Ltd. Today, I would like to introduce an exciting new AI core engine – the Fujitsu Neuro-Symbolic Explainer – which is now available from Fujitsu Kozuchi (code name) - Fujitsu AI Platform.

This blog is the second part of a five-week blog series. The previous post, "Introducing Synthetic Image Generation", is here!

The Fujitsu Neuro-Symbolic Explainer is a technology that enables the extraction of validated explanations for image classifications made by AI. It is one of the AI core engines of Fujitsu Kozuchi, which enables the fast testing of cutting-edge AI technologies developed by Fujitsu.

For example, one image classification task for AI is to classify a landscape image as either a city or countryside. However, the reasoning behind each image classification is not explicit, but is hidden in the AI’s black-box. To overcome this issue, we have developed the Fujitsu Neuro-Symbolic Explainer.

* “Neuro-Symbolic” is an approach to AI that combines the learning capabilities of neural networks with the interpretable properties of symbolic AI (rule-based AI)

Benefits of the Fujitsu Neuro-Symbolic Explainer and how to use it

The principal benefit of the Fujitsu Neuro-Symbolic Explainer is that it can extract validated explanations of why the AI made certain image classifications. First, AI developers assign human-understandable labels to the symbols in the classification rules that this technology extracts from the AI. Then, by applying these labelled rules, AI users can understand not only the result of an image classification but also the reason for it.

(1) Benefits for AI developers and how to use it

AI developers upload their image classification AI model and the data used to train the model into the Fujitsu Neuro-Symbolic Explainer. The Fujitsu Neuro-Symbolic Explainer then extracts features of the image that the AI picks up during the classification process, along with rules that determine how those features relate to each other and to the output class.

The extracted features are called “symbols”. The Fujitsu Neuro-Symbolic Explainer displays groups of images corresponding to each symbol. The AI developers look at the images and label them, such as "Crowd" for symbol A and "Grass" for symbol B. This makes the features (symbols) that the image classification AI focused on understandable to humans.

The Fujitsu Neuro-Symbolic Explainer also extracts the classification rules inside the AI. These classification rules are called symbolic rules. For example, they might be something like "Rule 1: A & NOT B → City", "Rule 2: NOT A & C → Countryside", and so on. With these classification rules and the symbols labeled as mentioned above, AI developers can obtain classification rules that humans can understand, such as "Crowd & NOT Grass → City" (“if there is a crowd and no grass, then the image is a city”).
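
As a rough sketch of how these pieces fit together, the plain-Python snippet below shows one way the extracted symbols, the developer-assigned labels, and the symbolic rules could be represented and rendered as human-readable rules. The data structures and names here (including the "Field" label for symbol C) are illustrative assumptions made for this sketch, not the actual Kozuchi interface.

    from dataclasses import dataclass

    # Labels the AI developer assigns to the extracted symbols (illustrative;
    # the label for symbol C is an assumption made for this sketch).
    symbol_labels = {"A": "Crowd", "B": "Grass", "C": "Field"}

    @dataclass
    class Literal:
        symbol: str          # extracted symbol, e.g. "A"
        negated: bool = False

    @dataclass
    class Rule:
        literals: list       # conjunction of literals
        conclusion: str      # output class, e.g. "City"

        def human_readable(self, labels):
            """Render the rule using the developer-assigned symbol labels."""
            parts = [("NOT " if lit.negated else "") + labels[lit.symbol]
                     for lit in self.literals]
            return " & ".join(parts) + " -> " + self.conclusion

    # Symbolic rules as they might be extracted from the trained classifier.
    rules = [
        Rule([Literal("A"), Literal("B", negated=True)], "City"),
        Rule([Literal("A", negated=True), Literal("C")], "Countryside"),
    ]

    for rule in rules:
        print(rule.human_readable(symbol_labels))
    # Crowd & NOT Grass -> City
    # NOT Crowd & Field -> Countryside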

(2) Benefits for AI users and how to use it

AI users input the image they want to classify, the image classification AI model, and the "human-understandable classification rules" created in (1) into the Fujitsu Neuro-Symbolic Explainer. The Fujitsu Neuro-Symbolic Explainer outputs the image classification result and the reason for the classification. For example, the AI users can obtain information that the classification result is a "City", and the reason for the classification is that the image depicts a "Crowd" and "No Grass".
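
Continuing the sketch above (and reusing its Rule, Literal, symbol_labels and rules definitions), the snippet below shows how rule evaluation could turn the symbols detected in an image into both a classification and its reason. Which symbols fire for a given image is something the underlying classifier and the Explainer determine; here it is simply assumed as input.

    def explain_classification(active_symbols, rules, symbol_labels):
        """Evaluate the symbolic rules against the symbols detected in an image
        and return the predicted class together with a human-readable reason."""
        for rule in rules:
            satisfied = all(
                (lit.symbol not in active_symbols) if lit.negated
                else (lit.symbol in active_symbols)
                for lit in rule.literals
            )
            if satisfied:
                reason = ", ".join(
                    ("No " if lit.negated else "") + symbol_labels[lit.symbol]
                    for lit in rule.literals
                )
                return rule.conclusion, reason
        return None, "no rule matched"

    # Hypothetical detections for one landscape image: the "Crowd" symbol (A)
    # fires, the "Grass" symbol (B) does not.
    label, reason = explain_classification({"A"}, rules, symbol_labels)
    print(label, "-", reason)   # City - Crowd, No Grass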

Features of the Fujitsu Neuro-Symbolic Explainer technology

The Fujitsu Neuro-Symbolic Explainer has two features.

The first is the ability to automatically analyze a trained image classification AI and extract symbolic rules.

Feature-detecting parts of the AI - inside the “black box” - are extracted and represented as symbols. These symbols are combined into rules, like words in a sentence, so that these rules describe the behaviour of the AI in terms of those symbols.

This process is customisable, including options for:

  • Maximum length of extracted rules – shorter for simpler explanations.
  • How deep inside the AI to look (see the configuration sketch below).
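
For illustration, such options could be expressed as a small configuration object; the parameter names below are assumptions made for this sketch, not documented settings of the Fujitsu Neuro-Symbolic Explainer.

    # Hypothetical extraction settings (names are illustrative only):
    extraction_config = {
        "max_rule_length": 3,   # cap on literals per rule; shorter rules give
                                # simpler, easier-to-read explanations
        "inspection_depth": 2,  # how many layers back from the output to search
                                # for feature-detecting parts of the network
    }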

The second is the automatic extraction of visualisation information necessary for interpreting the explainable, symbolic rules.

Rules become interpretable when the symbols that make them up are interpretable. Although some human interaction is required to assign labels to the symbols, our technology simplifies this task with an interactive visualisation tool that highlights only the information most important for identifying each symbol, based on:

  • Images which do and do not represent the corresponding feature
  • Visualisation of evidence for that feature in those images

With this information, the developer can assign a label based on what they see.
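
As a rough illustration of the kind of information that supports labelling, the sketch below ranks a set of images by how strongly a given symbol activates on them and returns the strongest and weakest examples; the symbol_activation scoring function stands in for whatever the Explainer computes internally, and is an assumption of this sketch.

    import numpy as np

    def example_images_for_symbol(images, symbol_activation, k=5):
        """Return the k images that most strongly activate a symbol (positive
        examples) and the k that activate it least (negative examples)."""
        scores = np.array([symbol_activation(img) for img in images])
        order = np.argsort(scores)
        positives = [images[i] for i in order[-k:]]  # strong evidence for the feature
        negatives = [images[i] for i in order[:k]]   # little or no evidence
        return positives, negatives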

Interested in testing Fujitsu Kozuchi?

This is intended for software developers looking to:

  • Provide their own customers with trustworthy, explainable image-recognition AI solutions, especially in fields where decisions have ethical or safety-critical consequences such as medicine, ADAS (Advanced Driver-Assistance System) or autonomous vehicles.
  • Strengthen the robustness of their own AI development process: The Fujitsu Neuro-Symbolic Explainer can expose faults in the AI’s reasoning (e.g. bias) before deployment so that these faults can be addressed before public release.
  • Prepare their AI solutions for audit and regulation: In fields such as medicine, technology is heavily regulated and AI-based solutions have been known to fail due to a lack of explainability.

For a demonstration, or to test our new Fujitsu Neuro-Symbolic Explainer, please contact us here.