<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Panigutti, Cecilia</style></author><author><style face="normal" font="default" size="100%">Guidotti, Riccardo</style></author><author><style face="normal" font="default" size="100%">Monreale, Anna</style></author><author><style face="normal" font="default" size="100%">Pedreschi, Dino</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Explaining multi-label black-box classifiers for health applications</style></title><secondary-title><style face="normal" font="default" size="100%">International Workshop on Health Intelligence</style></secondary-title></titles><keywords><keyword><style face="normal" font="default" size="100%">Explainable Machine Learning</style></keyword><keyword><style face="normal" font="default" size="100%">Healthcare</style></keyword></keywords><dates><year><style face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007/978-3-030-24409-5_9</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Today the state-of-the-art performance in classification is achieved by so-called &#8220;black boxes&#8221;, i.e. decision-making systems whose internal logic is obscure. Such models could revolutionize the health-care system; however, their deployment in real-world diagnosis decision support systems is subject to several risks and limitations due to their lack of transparency.
The typical classification problem in health care requires a multi-label approach, since the possible labels, e.g. diagnoses, are not mutually exclusive. We propose MARLENA, a model-agnostic method that explains multi-label black-box decisions. MARLENA explains an individual decision in three steps. First, it generates a synthetic neighborhood around the instance to be explained, using a strategy suitable for multi-label decisions. It then learns a decision tree on this neighborhood and, finally, derives from it a decision rule that explains the black-box decision. Our experiments show that MARLENA performs well at mimicking the black-box behavior while at the same time gaining a notable degree of interpretability through compact decision rules, i.e. rules of limited length.</style></abstract></record></records></xml>