Data science and artificial intelligence (AI) are creating new opportunities to improve businesses' decision making, productivity, and competitiveness. However, data science and AI also raise ethical and privacy concerns. For example, a classification algorithm can harm a subgroup of the population because of bias in the data used to develop and train the model. Data scientists and AI engineers often learn the concepts, tools, and techniques and then begin collecting data and developing machine learning algorithms without considering the unintended consequences of their data products. What obligation do data scientists and AI engineers have to be guardians of the data they collect and analyze? How do we ensure the fairness, interpretability, privacy, and security of data and AI products?

This course focuses on ethics, governance, and laws specific to data science and AI. It aims to provide a framework that helps students understand the value tradeoffs at stake as they collect data, develop algorithms, and deal with the consequences of deploying them. We use case studies, examples, and simulations to facilitate learning, critical thinking, debate, decision making, and problem solving in the context of data science and AI ethics and governance.
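As a minimal illustration of the kind of simulation used in the course, the hedged sketch below shows how measurement bias in historical data can lead a simple classifier to approve one group at a lower rate. The "loan approval" scenario, the variable names, and all numbers are invented for teaching purposes and are not drawn from the course materials.

```python
import numpy as np

# Hypothetical toy simulation: historical records understate the scores of one
# group (measurement bias), so a naive threshold "model" trained on those
# records approves that group less often. All values here are illustrative.
rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = majority, 1 = minority
true_ability = rng.normal(0.0, 1.0, size=n)     # actual repayment ability
# Assumed bias: recorded scores are shifted down for the minority group
# for reasons unrelated to true ability.
recorded_score = true_ability - 0.5 * group + rng.normal(0.0, 0.3, size=n)

# Naive decision rule learned from the biased records: approve if score > 0.
approved = recorded_score > 0.0

# Demographic parity gap: difference in approval rates between groups.
rate_majority = approved[group == 0].mean()
rate_minority = approved[group == 1].mean()
print(f"approval rate (majority): {rate_majority:.2f}")
print(f"approval rate (minority): {rate_minority:.2f}")
print(f"demographic parity gap:   {rate_majority - rate_minority:.2f}")
```

Running the sketch shows a sizable gap in approval rates even though the two groups have identical true ability, which is the kind of unintended consequence the course asks students to anticipate and govern.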