May 7th, 02:40 - 03:10 PM, Green Room

To improve people’s well-being, we must improve the decisions made about them. Consequential decisions are increasingly being made by AI: who to recruit, who receives a home loan or credit card, and how much someone pays for goods or services. AI systems have the potential to make these decisions more accurately and at a far greater scale than humans. However, if AI decision-making is improperly designed, it risks doing unintentional harm, especially to already disadvantaged members of society. Only by building AI systems that accurately estimate the real impact of possible outcomes on a variety of ethically relevant measures, rather than just accuracy or profit, can we ensure this powerful technology improves the lives of everyone.

This talk focuses on the anatomy of these ethically-aware decision-making systems and some design principles to help the data scientists, engineers and decision-makers collaborating to build them. We motivate the discussion with a high-level simulation of the "selection" problem, in which individuals are targeted, based on relevant features, for an opportunity or an intervention. We detail the necessary considerations and the potential pitfalls when engineering an ethically-aware automated solution, from initial conception through to causal analysis, deployment and ongoing monitoring.
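
To make the setup concrete, a minimal sketch of such a selection simulation might look like the following. The two cohorts, the noise levels and the top-20% selection rule are invented for illustration; they are not the exact simulation used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: two cohorts whose "relevant feature" is observed
# with different amounts of noise (an invented assumption for illustration).
n = 10_000
cohort = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
true_merit = rng.normal(0.0, 1.0, size=n)                 # what we would like to select on
noise_sd = np.where(cohort == "A", 0.3, 0.8)              # noisier observations for cohort B
observed_score = true_merit + rng.normal(0.0, noise_sd)   # what the system actually sees

# Selection policy: target the top 20% by observed score for the opportunity/intervention.
threshold = np.quantile(observed_score, 0.8)
selected = observed_score >= threshold

# Inspect how the policy behaves in each cohort.
for c in ("A", "B"):
    mask = cohort == c
    print(f"cohort {c}: selection rate = {selected[mask].mean():.3f}, "
          f"mean true merit of selected = {true_merit[mask & selected].mean():.3f}")
```

Even in this toy version, a single threshold applied to a score that is noisier for one cohort produces different selection rates and different quality of selection across cohorts.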

 

Outline/Structure of the Talk

The talk will open with a few examples of how more and more decisions that directly impact humans are being made by algorithms, and will briefly highlight the potential consequences of these automated systems.

The talk will then focus on the considerations involved in creating an ethical AI system and their implications for data scientists, software engineers, managers, and senior leadership, using a running example of an autonomous decision-making system that must determine the fate of individuals.

Proposed approach (a brief illustrative code sketch follows the list):

  1. Identify what matters
  2. Measure what matters
  3. Build levers into the system that affect what matters
  4. Estimate the impact of the lever settings on what matters
  5. Expose the trade-offs being made by levers on what matters
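
As a rough, hedged illustration of how these steps might fit together in code: the loan-style outcome, the per-cohort accuracy gap and the per-cohort threshold "lever" below are invented assumptions, not the specific design presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: a cohort label and the outcome we ultimately care about (e.g. repayment).
n = 20_000
cohort = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
would_repay = rng.random(n) < np.where(cohort == "A", 0.75, 0.70)
# The model's score tracks the outcome, but less accurately for cohort B (invented assumption).
score_sd = np.where(cohort == "A", 0.15, 0.30)
score = np.clip(would_repay + rng.normal(0.0, score_sd), 0.0, 1.0)

# Steps 1-2: identify and measure what matters. Here that is the approval-rate gap
# between cohorts and the precision of approvals (a crude stand-in for profit).
def measure(approved):
    rate_gap = abs(approved[cohort == "A"].mean() - approved[cohort == "B"].mean())
    precision = would_repay[approved].mean() if approved.any() else float("nan")
    return rate_gap, precision

# Step 3: the lever is a pair of per-cohort approval thresholds the operator can adjust.
# Steps 4-5: estimate the impact of each setting and expose the trade-off it implies.
for thr_a in (0.5, 0.6, 0.7):
    for thr_b in (0.5, 0.6, 0.7):
        approved = np.where(cohort == "A", score >= thr_a, score >= thr_b)
        rate_gap, precision = measure(approved)
        print(f"thresholds A={thr_a:.1f}, B={thr_b:.1f} -> "
              f"approval-rate gap = {rate_gap:.3f}, precision of approvals = {precision:.3f}")
```

Printing the whole grid of lever settings, rather than silently picking one, is what lets decision-makers see and own the trade-offs being made.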

Each step will be reinforced by referring back to our running example: a scenario in which an automated decision-maker must decide whether someone is eligible for a loan based on the outputs of a data-driven algorithm. We will discuss potential sources of bias in the model and their ramifications for the behaviour of the system as a whole. We will demonstrate how seemingly innocuous discrepancies in the accuracy of the algorithm across different cohorts can lead to unethical selection policies. We will outline a selection of fairness measures and discuss why most are irreconcilable with one another, before presenting some current state-of-the-art approaches to detecting and correcting unethical behaviour in an automated system.
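
To give a feel for the kind of fairness measures involved, the following self-contained sketch uses invented confusion counts for two cohorts in which precision (the quantity compared by predictive parity) is identical, yet selection rate (demographic parity) and true positive rate (equal opportunity) are not:

```python
# Invented confusion counts per cohort, purely for illustration:
# tp/fp/fn/tn = true/false positives and negatives of the selection decision.
counts = {
    "A": dict(tp=400, fp=100, fn=100, tn=400),
    "B": dict(tp=200, fp=50, fn=200, tn=550),
}

def group_metrics(c):
    n = c["tp"] + c["fp"] + c["fn"] + c["tn"]
    selected = c["tp"] + c["fp"]
    return {
        "selection rate": selected / n,                        # compared by demographic parity
        "true positive rate": c["tp"] / (c["tp"] + c["fn"]),   # compared by equal opportunity
        "precision": c["tp"] / selected,                        # compared by predictive parity
    }

for name, c in counts.items():
    print(name, {k: round(v, 3) for k, v in group_metrics(c).items()})
```

Here precision is equal across the two cohorts while the other two criteria are not; because the cohorts have different base rates, well-known impossibility results imply that these criteria cannot, in general, all be satisfied simultaneously, which is the sense in which most fairness measures are irreconcilable.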

Learning Outcome

Attendees will gain an appreciation of the potentially discriminatory actions taken by decision-making algorithms, which can arise from common biases in data, inaccurate assumptions or historical favouritism.

The audience will also gain insight into how to detect unethical behaviour in an AI system, see examples of steps that can be taken to mitigate or remove its effects, and come away with the understanding that a well-implemented, carefully governed system may make better, fairer and more consistent decisions than a human.

Target Audience

The talk will be aimed primarily at machine learning practitioners, but will be relevant to anybody involved in automated decision-making. We will describe the process of incorporating ethics into decision-making algorithms, from conception through to data collection, development and continued monitoring. Consequently, the talk may be of interest to anyone designing, building or overseeing an automated decision-making system that impacts people, including data scientists, software engineers, managers, and senior leadership.

Prerequisites for Attendees

The prerequisite knowledge for the talk will be minimal; a basic understanding of what machine learning is may be helpful. One or two slides may delve into more technical content, but understanding them is not essential to grasping the key ideas of the talk.
