Humanizing AI: A Human-centered Architecture to Developing Trustworthy Intelligent Systems

Authors

Muhammad Uzair Akmal, Selvine George Mathias, Saara Asif, Leonid Koval, Simon Knollmeyer and Daniel Grossmann, Technische Hochschule Ingolstadt, Germany

Abstract

The lack of trust and fairness in artificial intelligence (AI) systems, driven by bias, misclassified data, limited transparency, and limited interoperability, raises significant ethical concerns and has socioeconomic impacts. This study presents a reference architecture for an AI pipeline aligned with Industry 5.0 principles, focusing on human-centered design, sustainability, social responsibility, and resilience. It enhances human-AI collaboration by involving four user types (data scientists, domain experts, organizations, and end users) who share decision-making responsibilities throughout AI system development. The architecture incorporates Active Learning (AL) to address data bias and misclassification and Transfer Learning (TL) to ensure model reusability in resource-constrained environments. Post-modeling explainability gives stakeholders insight into model behavior and outcomes, fostering transparency and trust. Additionally, two user-ranked custom validation metrics evaluate the architecture, with Mean Average Precision (MAP) computed over the rankings. These metrics ensure that the architecture's design and outcomes adhere to ethical AI principles while promoting collaborative, responsible, and sustainable AI development.
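The abstract does not spell out how MAP is computed over the user rankings; a minimal sketch of the standard Mean Average Precision calculation for binary-relevance ranked lists is shown below (function names and the binary-label input format are illustrative assumptions, not taken from the paper):

```python
def average_precision(ranking):
    """AP for one ranked list of binary relevance labels (1 = relevant).

    Precision is taken at each rank where a relevant item appears,
    then averaged over all relevant items.
    """
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranking, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0


def mean_average_precision(rankings):
    """MAP: mean of AP over all ranked lists (e.g., one list per user)."""
    return sum(average_precision(r) for r in rankings) / len(rankings)


# Example: two hypothetical user rankings
print(mean_average_precision([[1, 0, 1], [0, 1]]))
```

For the first list, AP = (1/1 + 2/3) / 2 ≈ 0.833; for the second, AP = 1/2; so MAP ≈ 0.667 for this toy input.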

Keywords

Artificial intelligence, Human-centric AI, Active learning, Transfer learning, Explainable AI, Intelligent systems, Industry 5.0

Full Text: Volume 15, Number 8