Exhibit:
"Who's Deciding Here?!"
On loan from: Fraunhofer IAIS
Artificial Intelligence (AI) has become an integral part of our daily lives. It assists doctors in making more precise diagnoses, optimizes resources for climate protection, and enables innovative solutions across many fields. Alongside its many benefits, however, the technology also poses risks. A pivotal question arises: How can we ensure that AI operates in a trustworthy and fair manner?
Fraunhofer IAIS is conducting research on this very challenge as part of the “Human AI” competence pillar at the ADA Lovelace Center. At the core of its approach is a focus on humans: AI is not only analyzed on a technical level but is explicitly linked to the needs and perspectives of its potential users. The goal is to develop AI that is robust, transparent, and resource-efficient, while also safeguarding privacy and avoiding discrimination.
Interactive Learning – The Exhibit
With the interactive exhibit “Who’s Deciding Here?!”, visitors can explore the current limitations and possibilities of AI firsthand. It uses real-life scenarios to demonstrate the impact of AI-driven decisions. For example, one scenario presents two applicants for the same job. Both candidates are the same age and have comparable education, with the woman in fact slightly better qualified than the man. Yet, in this case, the AI chooses the male applicant. Another example examines whether AI can reliably diagnose pneumonia. Here, the answer is: yes.
Through questions and answers, the exhibit highlights an essential takeaway: AI is not inherently fair or unbiased. Particularly for sensitive topics like job applications or medical diagnoses, the need for trustworthy and impartial systems is paramount.
The exhibit “Who’s Deciding Here?!” underscores the importance of developing trustworthy AI through an engaging and playful approach. Whether in health, education, or the economy, AI must be responsibly designed and implemented, as it increasingly influences critical areas of life.
Why Trustworthy AI Matters
Trustworthy AI is set to become a cornerstone for societal acceptance and innovation, says Dr. Stefan Kamin, Coordinator of the “Human AI” competence pillar. Real-world cases, like Amazon’s biased recruitment AI that discriminated against female applicants, showcase the importance of carefully considering data sources and training methods. AI systems can only be as fair as the data on which they are built.
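The point that AI systems can only be as fair as their training data can be illustrated with a small simulation. The sketch below is purely hypothetical (the numbers and the naive "model" are invented for illustration): it generates fictional historical hiring records in which qualified women were hired less often than equally qualified men, then shows that a system which simply replays historical hire rates reproduces exactly that bias.

```python
import random

random.seed(0)

# Hypothetical historical hiring records: (gender, qualified, hired).
# The bias is baked into the data: qualified women were hired less
# often than equally qualified men.
records = []
for _ in range(1000):
    gender = random.choice(["male", "female"])
    qualified = random.random() < 0.5
    if qualified:
        hire_prob = 0.9 if gender == "male" else 0.5  # historical bias
    else:
        hire_prob = 0.1
    records.append((gender, qualified, random.random() < hire_prob))

def learned_hire_rate(gender, qualified):
    """A naive 'model' that just replays the historical hire rate
    observed for this group in the training data."""
    group = [hired for g, q, hired in records
             if g == gender and q == qualified]
    return sum(group) / len(group)

# Two equally qualified applicants - the learned rates differ anyway:
print(f"male, qualified:   {learned_hire_rate('male', True):.2f}")
print(f"female, qualified: {learned_hire_rate('female', True):.2f}")
```

The simulation makes no judgment of its own; it merely mirrors the patterns in its data, which is precisely why curating data sources and training methods matters so much.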
This is where Fraunhofer IAIS steps in: Before embarking on a research project or collaborating with industry partners to develop specific AI solutions, scientists gather extensive knowledge about the target audience and their requirements. They employ both representative surveys and smaller focus groups to collect insights. These approaches ensure data quality, enabling AI systems to make decisions that are as objective and diversity-conscious as possible.