This course explores the ethical challenges posed by AI systems and ways to build them responsibly using the Delft Design for Values methodology. It shows how to translate ethical standards into AI design, ensuring that stakeholder values are reflected in programming and development. The course focuses heavily on practical scenarios, such as healthcare, under the guidance of TU Delft experts.
While a basic understanding of technical AI development is beneficial, it is not a strict requirement for this intermediate-level course.
This program is tailored to professionals who develop AI systems, as well as managers overseeing such projects, who aim to embed ethical considerations into their workflows.
The skills acquired in this course can enhance decision-making in technological design, ensuring AI systems are both effective and ethically sound, reducing risk and increasing trust in automated processes. These practices are particularly valuable in sectors where AI's impact is critical, such as healthcare, government, and industry.
Introduction to the ethical challenges in AI, stakeholder values, and application of Design for Values in healthcare AI.
Focus on AI system trustworthiness, accuracy, reliability, and tools for enhancing system explainability.
Detailed discussion on data bias, algorithm fairness, and methods to monitor and mitigate biases.
Exploration of accountability in AI mishaps and the design of organisational structures for responsible AI use.
Strategies for managing conflicts between competing ethical values, followed by the final assignment on translating ethical values into design.