Explainable Artificial Intelligence (XAI): Why, When, and How?
Machine learning models are rapidly conquering uncharted ground, proving themselves better than existing manual or software solutions. This has given rise to a demand for Explainable Artificial Intelligence (XAI), which a human can use to understand the decisions made by a machine learning model. The need for XAI may stem from legal or social reasons, or from the desire to improve the acceptance and adoption of the model. The extent of explainability desired varies with these reasons and with the application domain, such as finance, defense, legal, or medical. XAI is achieved either by choosing a machine learning technique, such as decision trees, that lends itself well to explainability but may compromise accuracy, or by putting in additional effort to develop a secondary machine learning model that explains the decisions of the primary model. Essentially, this leads to a trade-off among the desired levels of explainability, accuracy, and development cost. In this talk, we present current thinking, challenges, and a framework that can be used to analyze and communicate the choices related to XAI, and to make the decisions that provide the best XAI solution for the problem at hand.
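The "secondary model" approach mentioned above can be illustrated with a minimal, hypothetical sketch. Everything here is illustrative, not from the talk: the `black_box` scoring rule stands in for any opaque trained model, and the surrogate is deliberately the simplest possible one, a single-threshold rule whose fidelity to the black box we can measure.

```python
# Sketch: explaining an opaque model with a simple global surrogate.
# The "black box" is a stand-in for any trained model; all names are illustrative.

def black_box(income, debt):
    """Opaque scoring rule: approve when income comfortably exceeds debt."""
    return 1 if income - 2 * debt > 50 else 0

# Query the black box on a grid of sample applicants.
samples = [(income, debt) for income in range(0, 201, 10) for debt in range(0, 101, 10)]
labels = [black_box(i, d) for i, d in samples]

# Fit the simplest possible surrogate: a threshold on a single derived feature.
# Keep the threshold whose predictions agree most often with the black box.
feature = [i - 2 * d for i, d in samples]
best_threshold, best_fidelity = None, 0.0
for t in sorted(set(feature)):
    preds = [1 if f > t else 0 for f in feature]
    fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    if fidelity > best_fidelity:
        best_threshold, best_fidelity = t, fidelity

print(f"Surrogate rule: approve if income - 2*debt > {best_threshold}")
print(f"Fidelity to black box: {best_fidelity:.0%}")
```

In practice the surrogate would usually be a shallow decision tree fit to the black box's predictions on real data, and its fidelity score (well below 100% for a genuinely complex model) tells you how far the explanation can be trusted.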
Outline/Structure of the Case Study
- What is Explainable AI (XAI)?
- Why is XAI required?
- How to approach XAI?
- A framework to think about the impact of XAI
- How would XAI impact ML projects?
Attendees will be able to understand the need for XAI and take a structured approach to assessing the impact of explainability on ML projects and recommendations.
Practitioners, decision makers, and executives
People who liked this proposal also liked:
Dr. Dakshinamurthy V Kolluru (Founder and President, INSOFE) - ML and DL in Production: Differences and Similarities
While architecting a data-based solution, one needs to approach the problem differently depending on the specific strategy being adopted. In traditional machine learning, the focus is mostly on feature engineering. In DL, the emphasis shifts to tagging larger volumes of data, with less focus on feature development. Similarly, synthetic data is far more useful in DL than in ML, so the data strategies can be significantly different. Both approaches call for very similar methods of error analysis, but in most development processes those methods are not followed, leading to substantial delays in reaching production. Hyperparameter tuning for performance improvement requires different strategies for ML and DL solutions because of the longer training times of DL systems. Transfer learning is a very important aspect to evaluate in building any state-of-the-art system, whether ML or DL. Last but not least is understanding the biases that the system is learning: deeply non-linear models require special attention here, as they can learn highly undesirable features.
In our presentation, we will focus on all the above aspects with suitable examples and provide a framework for practitioners for building ML/DL applications.
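The point about hyperparameter tuning budgets can be made concrete with a small sketch. The objective function, parameter grids, and budget numbers below are illustrative assumptions, not from the talk: when each training run is cheap (classic ML), an exhaustive grid is affordable; when each run takes hours (DL), a small random-search budget over the same space is the more practical strategy.

```python
import itertools
import random

# Illustrative tuning problem: minimize a toy validation loss over two hyperparameters.
def validation_loss(lr, reg):
    return (lr - 0.01) ** 2 * 1e4 + (reg - 0.1) ** 2 * 1e2

lrs = [0.001, 0.003, 0.01, 0.03, 0.1]
regs = [0.0, 0.01, 0.1, 0.3, 1.0]

# Cheap-training regime (classic ML): full grid search, 25 training runs.
grid_best = min(itertools.product(lrs, regs), key=lambda p: validation_loss(*p))

# Expensive-training regime (DL): random search with a budget of only 8 runs.
random.seed(0)
trials = [(random.choice(lrs), random.choice(regs)) for _ in range(8)]
random_best = min(trials, key=lambda p: validation_loss(*p))

print("Grid search best:", grid_best)
print("Random search best (budget 8):", random_best)
```

The grid is guaranteed to find the best combination on the grid, while the random search trades some optimality for a threefold reduction in training runs, the trade-off that longer DL training times force in practice.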
Dr. Rohit M. Lotlikar (Professor of Data Sciences, INSOFE) - The Impact of Behavioral Biases on Real-World Data Science Projects: Pitfalls and Guidance
Data science projects, unlike their software counterparts, tend to be uncertain and rarely fit a standardized approach. Each organization has its unique processes, tools, culture, data, and inefficiencies; a templatized approach, more common for software implementation projects, rarely fits.
In a typical data science project, a data science team is attempting to build a decision support system that will either automate human decision making or assist a human in decision making. The dramatic rise in interest in data sciences means the typical data science project has a large proportion of relatively inexperienced members whose learnings draw heavily from academics, data science competitions and general IT/software projects.
These data scientists learn over time that the real world is very different from the world of data science competitions. In the real world, problems are ill-defined and the data may not exist to start with. It is not just model accuracy, complexity, and performance that matter, but also the ease of infusing domain knowledge, interpretability and the ability to provide explanations, the level of skill needed to build and maintain the system, the stability and robustness of the learning, the ease of integration with enterprise systems, and ROI.
Human factors play a key role in the success of such projects. Managers making the transition from IT/software delivery to data science frequently do not allow for sufficient uncertainty in outcomes when planning projects. Senior leaders and sponsors are under pressure to deliver outcomes but are unable to make a realistic assessment of payoffs and risks, and to set investment and expectations accordingly. This makes both the journey and the outcome sensitive to the behavioural biases of project stakeholders. Knowing the typical behavioural biases and pitfalls makes it easier to identify them upfront and take corrective action.
The speaker brings nearly two decades of experience at startups, in R&D, and in consulting to lay out these recurring behavioural biases and pitfalls. Many of the biases covered are grounded in the speaker's first-hand experience. The talk will provide examples of these biases and suggestions on how to identify and overcome or correct for them.