
fltech - Fujitsu Laboratories Tech Blog

A technical blog where researchers of Fujitsu Laboratories talk about various topics

Report on the Scikit-Learn Interpretability Workshop and Development Sprint


Hello, this is Norbert Preining and Hiro Kobashi from the Artificial Intelligence Laboratory of Fujitsu Laboratories Ltd. From February 8th to 10th, the workshop on interpretability and the DevSprint organized by the Scikit-learn Consortium were held online. Since we participated as representatives of Fujitsu, we would like to briefly report on the workshop and the dev sprint. Fujitsu is a member of the Scikit-learn Consortium (the only member from Asia), promoting the growth of scikit-learn and its adoption in Asia.

Advisory Committee (Kobashi)

Prior to the workshop and dev sprint, the Advisory Committee meeting was held. The Advisory Committee's mission is to advise the scikit-learn managing committee so that it operates soundly (in terms of direction and finances). Hiromichi Kobashi participated in the meeting as the representative of Fujitsu, together with the other participating organizations.


https://twitter.com/sklearn_inria/status/1361355579004977159

During the meeting, Fujitsu proposed holding a development sprint in Japan. Dev sprints were originally held on-site, and participants were primarily based in Europe. However, due to COVID-19, the dev sprints moved online (the last on-site sprint was held in Paris just a year ago, and Hiromichi Kobashi also participated). Now that they are held online, distance is no longer an obstacle to participation from Japan, but the time difference remains, so we proposed a dev sprint in Japan to solve this issue for participants from Asia.

By holding this event in Japan, we would like to aim for further popularization of scikit-learn and the acquisition of new contributors from Japan and Asia in general. Please look forward to it!

Workshop (Preining)

With the interpretability of machine learning models becoming ever more important, the Scikit-learn Consortium organized a workshop with an attached development sprint to discuss recent developments and research around interpretability.

The two-day workshop was attended by about 30 researchers and developers, including teams from the machine learning groups of large companies and governmental agencies. The talks spanned a wide range of topics, from the general state of interpretability in scikit-learn and the presentation of new libraries to relevance to and applications in real-world settings.

Workshop Program


(The workshop was held via the Discord platform)

After the welcome and introduction by Gaël Varoquaux from Inria and the scikit-learn Consortium, Guillaume Lemaitre (Inria, scikit-learn Consortium) reported on the current state and limitations of model inspection in scikit-learn. He showed that scikit-learn already provides several ways to inspect fitted models for interpretability, and laid out a path to overcome the current limitations.
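To give a concrete flavour of the inspection tools mentioned in the talk, here is a minimal sketch (our own illustration, not code from the presentation) using permutation_importance and partial_dependence from scikit-learn's sklearn.inspection module; the dataset and parameters are arbitrary choices:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, partial_dependence
from sklearn.model_selection import train_test_split

# Toy data and model; the dataset and hyperparameters are our own choices.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic feature importance, computed on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)

# Partial dependence of the prediction on the first feature.
pd_result = partial_dependence(model, X_test, features=[0])
print(pd_result["average"])
```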

In the next talk, "FACET: a new open-source library for global model explanations", Jan Ittner from BCG introduced us to FACET, a new open-source library for global model explanations. This library introduces a new algorithm to quantify dependencies and interactions between features in ML models. It is closely integrated with scikit-learn and adds a new, global perspective to the observation-level explanations provided by the popular SHAP approach.
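For readers unfamiliar with the observation-level explanations that FACET builds upon, here is a minimal sketch of that SHAP side (using the separate shap package; this is our own illustrative example, not FACET's API, and the data and model choices are arbitrary):

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy regression model; the data and model choice are our own assumptions.
X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# One attribution value per sample and feature (the observation-level view
# that global approaches such as FACET aggregate and extend).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values.shape)  # (300, 5)
```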

The last talk of the first day, "eXplainable AI Interfaces: a user perspective", was given by Clara Bove from AXA. Clara is both a machine learning researcher and a user interface designer, and presented her work on explaining decisions (made by AI or statistics) to non-expert users, using examples from car insurance and explanations of pricing.

The second day started with the presentation "ML Interpretability vs. the 'Real World'" by Xavier Renard and Thibault Laugel, both from AXA. Their talk gave a tour across many areas of interpretability in machine learning, covering currently used algorithms and their shortcomings as well as diagnostic tools across different fields. They wrapped up their talk with the very clear statement that, while the field is still immature and there are disagreements, interpretability will be a key factor in the rise of AI governance and regulation.

The next talk, "Explainability of ML in the financial sector" by Laurent Dupont from the Banque de France, underlined the statement of the previous talk about the importance of interpretability in governance and regulation. Laurent presented several research activities of their lab, in particular how they use machine learning for their own evaluation of financial instruments and institutions, and the open questions they are facing, especially how to ensure compliance with sectoral regulations and auditing of internal models.

The last talk of the workshop, "A Causal Perspective On Interpretability Methods" by Léo Dreyfus-Schmidt and Samuel Ronsin from Dataiku, gave an introduction to the causal perspective and, in particular, how to address the misuse of machine learning models and methods using causal inference.

DevSprint (Preining)

During the following two-day development sprint, about 30 issues related to interpretability were under consideration, most of them resulting in pull requests that have been merged or are currently under review.

From Fujitsu, as a member of the Scikit-learn Consortium, four researchers and developers attended the workshop and the development sprint, and submitted 5 pull requests (1, 2, 3, 4, 5) that have been merged into the main repository.

Both the workshop and the development sprint were a great success, with respect to the necessary advances in interpretability, the practical development of fixes and new features, and the teamwork between the members in a virtual environment.

Conclusion

This was Fujitsu's third participation in a development sprint of the Scikit-learn Consortium, and we feel that Fujitsu is gradually becoming more and more present in scikit-learn.

If you are interested in these kinds of activities at Fujitsu Laboratories, Kobashi is always open to casual interviews. Please contact us!