Explainable and trustworthy AI are closely connected, in part because of the EU's definition of trustworthy AI as AI that is lawful, ethical and robust. When the decisions an AI model suggests are explained, the reasons behind a decision can be examined to see whether they correspond to human values and to a human understanding of the problem.

Another issue is that machine learning is only as good as the data it is fed. Technologists are therefore continuously on the lookout for biases that may pop up in the behaviour of smart systems, or that are introduced with malicious intent. Examples are recognition or profiling based on ethnicity or gender, or treating as universal what are in fact only local customs or behaviours, or even just temporary commercial hypes. There is also the concern that people should always remain free in their choice to contribute or retract personal data, and to act upon the suggestions of AI systems or not.

A further aspect of trustworthiness is that AI models take an input and always produce an output, regardless of how sure they are about that output, and the degree of certitude is rarely provided. In healthcare applications, however, the point is for a computer to partially take over the work of doctors: the algorithm gives medical advice that a doctor may then act upon. For this to work, the AI has to be as trustworthy as a doctor, which means it needs to be able to express doubt.
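To make this concrete, here is a minimal sketch of one common way to let a model express doubt: have it report a confidence score with every prediction and defer low-confidence cases to a human expert. The dataset, model and the 0.9 threshold are illustrative assumptions, not a description of any specific IDLab system.

```python
# Minimal sketch: a classifier that reports a degree of certitude and
# defers to a human expert when that certitude is too low.
# The synthetic data, model choice and threshold are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba gives a degree of certitude alongside each predicted label.
probs = model.predict_proba(X_test)
confidence = probs.max(axis=1)
prediction = probs.argmax(axis=1)

THRESHOLD = 0.9  # illustrative cut-off; in practice tuned per application
defer_to_expert = confidence < THRESHOLD

print(f"Automated decisions:        {np.sum(~defer_to_expert)}")
print(f"Deferred to a human expert: {np.sum(defer_to_expert)}")
```

In a medical setting this would mean the algorithm only advises autonomously when it is sufficiently sure, and otherwise flags the case for a doctor.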
Explainability is a hot topic within IDLab, leading to projects and applications in areas such as control processes in the chemical industry, monitoring and e-learning, and employing a variety of AI techniques, including computer vision, natural language processing, hybrid AI and surrogate modelling.
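As an illustration of what such an explanation can look like, the sketch below uses permutation feature importance, one widely used model-agnostic technique; it is not a description of IDLab's own methods, and the feature names are invented for the example. A disproportionately large importance for a sensitive attribute such as "gender" would be exactly the kind of signal that prompts a closer look for bias.

```python
# Hedged sketch of permutation feature importance: shuffle each feature in
# turn and measure how much the model's accuracy drops. Features the model
# relies on heavily cause the largest drop. Data and feature names are
# synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "income", "gender", "postcode", "tenure"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Rank features by how much the model depends on them.
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>10s}: {importance:.3f}")
```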