Why Responsible AI is Needed in Explainable AI Systems with Christoph Lütge of TUM

By David Yakobovitch

Bias in AI is a growing concern as algorithms produce unfair outcomes in areas such as hiring, loan applications, and autonomous vehicles. The public expects AI to be accountable and is calling for standards and governance systems to keep it in check. Black-box models expose a core flaw: when a system cannot be scrutinized, it is hard to hold anyone responsible for how its algorithms behave. Opaque, automated decisions can have serious consequences for people's lives, which is why responsible AI systems are needed. By integrating explainable AI into their models, businesses can make better-informed decisions, surface patterns, and optimize operations.

Listen in as I discuss why responsible AI is needed in explainable AI systems.

In this episode: Prof. Christoph Lütge, Director of the TUM Institute for Ethics in Artificial Intelligence (Germany).

This episode is brought to you by For The People. You can grab your copy of For the People on Amazon today, or visit SIMONCHADWICK.US to learn more about Simon.

🚀 You could sponsor today's episode. Learn about your ad choices ( http://www.humainpodcast.com/advertise/ ).
💙 Show your support for HumAIn with a monthly membership ( http://www.humainpodcast.com/membership ).
📰 Receive subscriber-only content with our newsletter ( http://humainpodcast.com/listen ).
🧪 Visit us online to learn about our trend reports ( https://www.humainpodcast.com/reports/ ) on technology trends and how to bounce back from COVID-19 unemployment.

*About HumAIn Podcast:* The HumAIn Podcast is a leading artificial intelligence podcast that explores AI, data science, the future of work, and developer education for technologists. Whether you are an executive, data scientist, software engineer, product manager, or student in training, HumAIn connects you with industry thought leaders on technology trends that are relevant and practical. Frequently discussed topics include AI trends, AI for all, computer vision, natural language processing, machine learning, data science, and reskilling and upskilling for developers. Episodes focus on new technology, startups, and human-centered AI in the Fourth Industrial Revolution. HumAIn is the channel to release new AI products, discuss technology trends, and augment human performance.

Listen to Why Responsible AI is Needed in Explainable AI Systems with Christoph Lütge of TUM now.
