Artificial Intelligence (AI) delivers remarkable benefits in prediction accuracy, automation, new products and services, and cost reduction. But to increase adoption, enterprises need to build trust and transparency in the data and algorithms used in AI systems. By default, AI systems such as machine learning or deep learning models produce outputs with no explanation or context. As predicted outcomes turn into recommendations, decisions, or direct actions, humans tend to look for justification. Most experts in the field agree that AI systems should be less opaque to end-users and to the subjects of algorithmic decision-making. In this seminar, four speakers from the University of Tartu and Wise discuss the explainability and transparency of AI, as well as the requirements and challenges of building a robust machine learning system.
📍03:54 Meelis Kull, Associate Professor in Machine Learning at UniTartuCS: Understanding the decisions of an AI system
📍29:20 Radwa El Shawi, Associate Professor of Big Data at UniTartuCS: Towards automatic concept-based explanations
📍53:40 Luca Traverso, Data Science Lead at Wise: Building robust and defensible machine learning systems in Fincrime
📍01:22:20 Rain Vagel, Data Scientist at Wise: Why did a model decide A? – Explaining machine learning models in regulated environments
📍01:43:08 Panel discussion
Data Science Seminars are supported by the European Social Fund and University of Tartu ASTRA project PER ASPERA Doctoral School of Information and Communication Technologies.
More information about our seminars: https://cs.ut.ee/en/content/data-science-seminars