Trustworthy AI lab

A joint initiative of the Frankfurt Big Data Lab and the CVAI Lab

We advocate for the development and use of responsible and trustworthy AI from a holistic perspective that takes all stakeholders into account. Our core values are inspired by the EU Ethics Guidelines for Trustworthy AI, and we collaborate in the development of the Z-Inspection® process for assessing Trustworthy AI.
With this, we want to establish a "Mindful Use of AI" (#MUAI).

Z-Inspection® is now listed in the new OECD Catalogue of AI Tools & Metrics.

We are also interested in actively developing new AI methods that can help evaluate and quantify the principles of Trustworthy AI.

We are always eager to collaborate with new people, work on new use cases, or hear your feedback. Send us an email to get in touch!

Our team consists of computer scientists from Goethe University Frankfurt, and we are in constant dialogue with collaborators from many different domains.

Core team:
Prof. Gemma Roig
Dr. Karsten Tolle
Mr. Dennis Vetter

Affiliated scientist:
Prof. Dr. Roberto Zicari

We collaborate on a range of projects, including the following EU-funded ones:

eXplainable Artificial Intelligence in Healthcare Management
An Interdisciplinary Master’s Program at the Intersection of AI and Health Care.

Pan-European Response to the Impacts of COVID-19 and future Pandemics and Epidemics (PERISCOPE) 
Investigating the broad socio-economic and behavioural impacts of the COVID-19 pandemic, to make Europe more resilient and prepared for future large-scale risks.

We also offer student projects at the bachelor's and master's level at Goethe University as part of our mission to raise awareness of the importance of Trustworthy AI.

Zicari, R. V., Amann, J., Bruneault, F., Coffee, M., Düdder, B., Hickman, E., Gallucci, A., Gilbert, T. K., Hagendorff, T., van Halem, I., Hildt, E., Holm, S., Kararigas, G., Kringen, P., Madai, V. I., Mathez, E. W., Tithi, J. J., Vetter, D., Westerlund, M., & Wurth, R., on behalf of the Z-Inspection® Initiative. (2022). How to Assess Trustworthy AI in Practice (arXiv:2206.09887). arXiv. https://doi.org/10.48550/arXiv.2206.09887

Vetter, D., Tithi, J. J., Westerlund, M., Zicari, R. V., & Roig, G. (2022). Using Sentence Embeddings and Semantic Similarity for Seeking Consensus when Assessing Trustworthy AI. Proceedings of the Workshop on Imagining the AI Landscape after the AI Act (IAIL 2022) Co-Located with 1st International Conference on Hybrid Human-Artificial Intelligence (HHAI 2022). https://ceur-ws.org/Vol-3221/IAIL_paper1.pdf. (Also available on arXiv)

Allahabadi, H., Amann, J., Balot, I., Beretta, A., Binkley, C., Bozenhard, J., Bruneault, F., Brusseau, J., Candemir, S., Cappellini, L. A., Chakraborty, S., Cherciu, N., Cociancig, C., Coffee, M., Ek, I., Espinosa-Leal, L., Farina, D., Fieux-Castagnet, G., Frauenfelder, T., … Zicari, R. V. (2022). Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients. IEEE Transactions on Technology and Society, 3(4), 272–289. https://doi.org/10.1109/TTS.2022.3195114

Amann, J., Vetter, D., Blomberg, S. N., Christensen, H. C., Coffee, M., Gerke, S., Gilbert, T. K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Strümke, I., Zicari, R. V., & Madai, V. I., on behalf of the Z-Inspection® Initiative. (2022). To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health, 1(2), e0000016. https://doi.org/10.1371/journal.pdig.0000016

Zicari, R. V., Ahmed, S., Amann, J., Braun, S. A., Brodersen, J., Bruneault, F., Brusseau, J., Campano, E., Coffee, M., Dengel, A., Düdder, B., Gallucci, A., Gilbert, T. K., Gottfrois, P., Goffi, E., Haase, C. B., Hagendorff, T., Hickman, E., Hildt, E., … Wurth, R. (2021). Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier. Frontiers in Human Dynamics, 3, 40. https://doi.org/10.3389/fhumd.2021.688152

Zicari, R. V., Brusseau, J., Blomberg, S. N., Christensen, H. C., Coffee, M., Ganapini, M. B., Gerke, S., Gilbert, T. K., Hickman, E., Hildt, E., Holm, S., Kühne, U., Madai, V. I., Osika, W., Spezzatti, A., Schnebel, E., Tithi, J. J., Vetter, D., Westerlund, M., … Kararigas, G. (2021). On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls. Frontiers in Human Dynamics, 3, 30. https://doi.org/10.3389/fhumd.2021.673104

Zicari, R. V., Brodersen, J., Brusseau, J., Düdder, B., Eichhorn, T., Ivanov, T., Kararigas, G., Kringen, P., McCullough, M., Möslein, F., Mushtaq, N., Roig, G., Stürtz, N., Tolle, K., Tithi, J. J., van Halem, I., & Westerlund, M. (2021). Z-Inspection®: A Process to Assess Trustworthy AI. IEEE Transactions on Technology and Society, 2(2), 83–97. https://doi.org/10.1109/TTS.2021.3066209 

Ethical Implications of AI: Assessing Trustworthy AI in Practice
— Series of Lectures —
This course introduces students to the ethical foundations of Trustworthy AI and teaches them how to assess AI systems in practice using the Z-Inspection® process.
Course materials and recordings from past lectures are openly available:

Winter 2022/23 @ Seoul National University
Summer 2022 @ Seoul National University
Summer 2020 @ Goethe University Frankfurt


* Trustworthy AI for Healthcare Management
— Online Course —
This MOOC was created as part of the EU project PERISCOPE. It gives an introduction to trustworthy artificial intelligence and its application in healthcare, with modules on the basics of artificial intelligence and an introduction to trustworthy and ethical applications of artificial intelligence. A dedicated lesson presents the Z-Inspection® process for assessing trustworthy AI, and real-world case studies illustrate how to apply the knowledge.

Course available for free on coursera.org.

* Guest lecture on Trustworthy AI in the Data Science 1 course (Dr. Karsten Tolle)
Dr. Jean Enno Charton 
Director Bioethics & Digital Ethics
Merck (Darmstadt)

Title (preliminary): Responsible Handling of Data & AI in Companies

Room: H IV (Bockenheim)
Date: July 10 @ 10:15–11:45

* Certified Z-Inspection® Teaching Experts:
Karsten Tolle, Goethe University Frankfurt, Germany
Dennis Vetter, Goethe University Frankfurt, Germany
Gemma Roig, Goethe University Frankfurt, Germany

We frequently have projects available for students interested in working with Trustworthy AI. Send us an email to get in touch!

Past projects:
* Systematic Evaluation of Graph Sampling Methods
Bachelor Thesis in co-operation with the Research Group for Theoretical Computer Science
In this work, we comprehensively evaluate node2vec and CrossWalk, two random walk-based sampling methods, for generating graph embeddings of social networks. The resulting embeddings were systematically examined with regard to their fairness and accuracy. We show that the configuration of the hyperparameters of node2vec and CrossWalk significantly affects the resulting graph representations in both directions, and thus either increases or decreases the prediction accuracy or the fairness for selected features.
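A minimal sketch of the underlying technique, assuming networkx and gensim: plain uniform random walks fed into Word2Vec (DeepWalk-style). node2vec additionally biases the walk transitions with its return parameter p and in-out parameter q, and CrossWalk reweights edges toward group boundaries; the karate-club graph here is only a stand-in for a real social network.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def generate_walks(graph, num_walks=10, walk_length=20, seed=0):
    """Uniform random walks starting from every node."""
    rng = random.Random(seed)
    walks = []
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(node) for node in walk])
    return walks

graph = nx.karate_club_graph()  # stand-in for a real social network
walks = generate_walks(graph)
# Treat the walks as "sentences" of node IDs and learn 64-dim embeddings.
model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1, seed=0)
embedding = model.wv["0"]  # embedding vector for node 0
```

Downstream predictors trained on such embeddings can then be compared across walk hyperparameter settings for both accuracy and group fairness.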


* The influence of Deep Neural Network architectures on the classification fairness of face recognition tasks
Master Thesis
In this work, we look at deep learning tasks with a focus on face recognition and facial attribute analysis. Models trained on biased datasets inherit this bias, resulting in unfair performance. The goal of this work is to recognize unfair machine learning systems by defining different fairness measures and evaluating current architectures, and to find ways these architectures can be adapted for better fairness.
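Two group fairness measures of the kind such a work might define, sketched in numpy; the metric choices and the toy data are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true positive rates across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy usage: binary facial-attribute predictions for two groups.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))         # 0.0
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
```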


* Subjective evaluation of AI explainability methods and their applicability to chest x-rays
Research Project in co-operation with the Frankfurt Big Data Lab
The work presented in this report spans the implementation of a pre-trained deep neural network for image classification, the comparative evaluation of different explainability approaches, and the integration and evaluation of a metric for quantitatively assessing the quality of the provided explanations. Finally, the evaluation of explanation techniques is also performed on a real-world AI system.
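To give a flavor of such an evaluation, here is a minimal sketch of one explainability approach, assuming PyTorch, torchvision, and Captum; the DenseNet-121 backbone and the random input are illustrative stand-ins for the report's actual model and preprocessed chest x-rays.

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients

# Illustrative stand-in for a pre-trained image classifier.
model = models.densenet121(weights="IMAGENET1K_V1").eval()

# Stand-in for a preprocessed chest x-ray (batch of 1, 3 x 224 x 224).
x = torch.randn(1, 3, 224, 224)

# Integrated Gradients attributes the prediction for a target class to
# input pixels by integrating gradients along a baseline-to-input path.
ig = IntegratedGradients(model)
attributions = ig.attribute(x, target=0, n_steps=50)
print(attributions.shape)  # torch.Size([1, 3, 224, 224])
```

One common way to score such explanations quantitatively is a deletion-style metric: occlude the highest-attributed pixels and track how quickly the model's confidence drops.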


* The influence of dataset biases on classification fairness
Bachelor Thesis
This work explores the influence of dataset biases on machine learning outcomes by selectively subsampling the datasets, thereby artificially introducing biases into the models trained on them. The thesis thus looks for an algorithmic approach to classification fairness. The effects are then analyzed on different model types.
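A sketch of how such a bias could be introduced by subsampling, in numpy; the group and label encoding and the drop fraction are illustrative assumptions.

```python
import numpy as np

def biased_subsample(X, y, group, target_group=1, target_label=1,
                     drop_frac=0.5, seed=0):
    """Drop a fraction of examples with a given label from one group,
    artificially under-representing them in the training data."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(y))
    hit = idx[(group == target_group) & (y == target_label)]
    drop = rng.choice(hit, size=int(drop_frac * len(hit)), replace=False)
    keep = np.setdiff1d(idx, drop)
    return X[keep], y[keep], group[keep]

# Usage: remove half of the positive examples of group 1, then train
# models on (X_b, y_b) and compare their behavior across groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] > 0).astype(int)
group = rng.integers(0, 2, size=1000)
X_b, y_b, group_b = biased_subsample(X, y, group)
```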


* Do Machine Learning classifiers have an innate fairness?
Bachelor Thesis
In this work, I analyze different machine learning classifiers with regard to their fairness. I come to the conclusion that they indeed have an innate fairness, and give recommendations on which classifier to use to satisfy which fairness constraint.
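A sketch of this kind of comparison with scikit-learn, on synthetic data in which the label is correlated with a group attribute; the data generation and the demographic-parity gap used here are illustrative assumptions, not the thesis setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: group membership shifts both features and label.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + 0.5 * group[:, None]
y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.25, random_state=0)

for clf in (LogisticRegression(max_iter=1000),
            DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(random_state=0)):
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
    print(f"{type(clf).__name__}: demographic parity gap = {gap:.3f}")
```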


* Ethical assessment of AI systems in healthcare: A use case
Master Thesis
The aim of this work is to examine an AI system in the healthcare sector that supports processes in the detection of OHCA (Out-of-Hospital Cardiac Arrest) with regard to ethical issues. Since the use case was developed and tested in Europe, and European values and norms should thus be respected, the focus is placed on the ethical principles of European guidelines for an ethically compliant AI system, and specifically on the principles of fairness and explicability for a trustworthy AI system.

First World Z-Inspection® Conference. Ateneo Veneto, Venice, Italy, March 10–11, 2023

The interdisciplinary meeting welcomed over 60 international scientists and experts from AI, ethics, human rights, and domains such as healthcare, ecology, business, and law.

At the conference, the practical use of the Z-Inspection® process to assess trustworthy AI in real use cases was presented. Among them:

– The pilot project "Assessment for Responsible Artificial Intelligence", together with Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK) and the province of Fryslân (The Netherlands);

– The assessment of the use of AI in times of COVID-19 at the Brescia Public Hospital ("ASST Spedali Civili di Brescia").


Two panel discussions on "Human Rights and Trustworthy AI" and "How do we trust AI?" provided an interdisciplinary view on the relevance of data and AI ethics in the human rights and business context.

The main message of the conference was the need for a Mindful Use of AI (#MUAI).

This first World Z-Inspection® Conference was held in cooperation with the Global Campus of Human Rights and the Venice Urban Lab, and was supported by Arcada University of Applied Sciences, Merck, Roche, and Zurich Insurance Company.

DOWNLOAD CONFERENCE READER

Link to the Video: Conference Impressions
