Engineering an Ethical AI System
To improve people’s well-being, we must improve the decisions made about them. Consequential decisions are increasingly being made by AI, such as selecting who to recruit, who receives a home loan or credit card, and how much someone pays for goods or services. AI systems have the potential to make these decisions more accurately and at a far greater scale than humans. However, if AI decision-making is improperly designed, it runs the risk of doing unintentional harm, especially to already disadvantaged members of society. Only by building AI systems that accurately estimate the real impact of possible outcomes on a variety of ethically relevant measures, rather than just accuracy or profit, can we ensure this powerful technology improves the lives of everyone.
This talk focuses on the anatomy of these ethically-aware decision-making systems, and some design principles to help the data scientists, engineers and decision-makers collaborating to build them. We motivate the discussion with a high-level simulation of the "selection" problem where individuals are targeted, based on relevant features, for an opportunity or an intervention. We detail the necessary considerations and the potential pitfalls when engineering an ethically-aware automated solution, from initial conception through to causal analysis, deployment and on-going monitoring.
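As a taste of the kind of simulation the talk opens with, here is a minimal sketch of the selection problem: individuals are scored on observed features and the top scorers are targeted for the intervention. All names and data here are hypothetical illustrations, not material from the talk itself.

```python
import random

random.seed(0)

# Hypothetical population: each individual has an observed score (what the
# model sees) and a latent benefit (what the intervention would actually do).
population = [
    {"id": i, "score": random.gauss(0, 1), "benefit": random.gauss(0, 1)}
    for i in range(1000)
]

def select_top_k(people, k):
    """Target the k individuals with the highest observed score."""
    return sorted(people, key=lambda p: p["score"], reverse=True)[:k]

selected = select_top_k(population, 100)

# The ethical question is whether selecting on `score` actually tracks
# `benefit` -- here the two are independent, so top-score selection
# delivers no more benefit than random selection would.
avg_benefit = sum(p["benefit"] for p in selected) / len(selected)
```

The gap between what is easy to score and what actually matters is exactly the theme the rest of the talk develops.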
Outline/Structure of the Talk
The talk will open with a few examples of how more and more decisions that directly impact humans are being made by algorithms. We briefly highlight the potential consequences of these automated systems and the need for a principled approach before we get to our motivating question:
“What is an ethical decision?”
It is a decision that is likely to cause good.
The talk will focus on exploring the anatomy of this answer and its implications for data scientists, software engineers, managers, and senior leadership, using a running example of an autonomous decision-making system that must determine the fate of individuals.
This answer can be broken into three key components:
- Likely - requires a probabilistic model of the system.
- Cause - requires a scientific understanding of how components within the system interact.
- Good - requires a comprehension of what outcomes are morally right or just.
The remainder of the talk will utilise these three components to identify what skills are required and to structure a framework for systematically developing ethically-aware AI.
- Identify what matters
- Measure what matters
- Build levers into the system that affect what matters
- Estimate the impact of the lever settings on what matters
- Expose the trade-offs being made by levers on what matters
Each step will be reinforced by referring back to our running example - a scenario in which an automated decision-maker must decide whether or not someone is eligible for a loan based on the outputs of a data-driven algorithm. We will discuss potential sources of bias in the model and their ramifications for the behaviour of the system as a whole. We will demonstrate how seemingly innocuous discrepancies in the accuracy of the algorithm across different cohorts can lead to unethical selection policies. We will outline a selection of fairness measures and discuss why most are irreconcilable with one another before presenting some current state-of-the-art approaches to detecting and correcting unethical behaviour in an automated system.
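To illustrate why fairness measures can conflict, consider two of the most common ones on a toy loan example: demographic parity compares approval rates across groups, while equal opportunity compares true-positive rates among applicants who would genuinely repay. The numbers below are invented to make the point and are not results from the talk.

```python
# Each record is (y, yhat): y = 1 if the applicant truly repays,
# yhat = 1 if the algorithm approved them. Data is illustrative only.
outcomes = {
    "A": [(1, 1), (1, 1), (1, 1), (1, 0), (0, 0), (0, 0)],
    "B": [(1, 1), (1, 0), (1, 0), (0, 1), (0, 1), (0, 0)],
}

def approval_rate(records):
    """Demographic parity compares this quantity across groups."""
    return sum(yhat for _, yhat in records) / len(records)

def true_positive_rate(records):
    """Equal opportunity compares this quantity across groups."""
    positives = [(y, yhat) for y, yhat in records if y == 1]
    return sum(yhat for _, yhat in positives) / len(positives)

for group, records in outcomes.items():
    print(group, approval_rate(records), true_positive_rate(records))
```

Here both groups are approved at the same rate (demographic parity holds), yet creditworthy applicants in group B are approved far less often than in group A (equal opportunity fails). Outside of degenerate cases, satisfying one measure generally forces a violation of another, which is the irreconcilability the talk explores.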
Attendees will gain an appreciation of how discriminatory actions by decision-making algorithms can arise from common biases in data, inaccurate assumptions, or historical favouritism.
The audience will also gain insight into how to detect unethical behaviour in an AI system, see examples of steps that can be taken to mitigate or remove its effects, and come away with the understanding that a well-implemented, carefully governed system may make better, fairer, more consistent decisions than a human.
The talk will be aimed primarily at machine learning practitioners, but will be relevant to anybody involved in automated decision-making. We will describe the process of incorporating ethics into decision-making algorithms, from conception, through to data collection, development and continued monitoring. Consequently, the talk may be of interest to anyone designing, building or overseeing an automated decision-making system that impacts people, including data scientists, software engineers, managers, and senior leadership.
Prerequisites for Attendees
The prerequisite knowledge for the talk will be minimal. A basic understanding of what machine learning is might be helpful. One or two slides may delve into more technical content but understanding them would not be essential to grasping the key ideas of the talk.