Integrating Digital Twin and AI for Smarter Engineering Decisions

With the increasing popularity of AI, new frontiers are emerging in predictive maintenance and manufacturing decision science. However, there are many complexities associated with modeling plant assets, training predictive models for them, and deploying these models at scale for near real-time decision support. This talk will discuss these complexities in the context of building an example system.

First, you must have failure data to train a good model, but equipment failures can be expensive to introduce for the sake of building a data set! Instead, physical simulations can be used to create large, synthetic data sets to train a model with a variety of failure conditions.
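As a toy illustration of this idea, the sketch below simulates a vibration-like signal and injects a growing defect component to produce labeled failure examples; the signal model and all names (`simulate_bearing_signal`, the frequencies, the noise level) are hypothetical, not taken from the talk:

```python
import math
import random

def simulate_bearing_signal(n_samples, fault=False, seed=0):
    """Simulate a vibration-like sensor trace. When `fault` is True, a
    defect harmonic with amplitude growing over time is injected, so
    failure examples can be generated and labeled without damaging
    real equipment. (Toy signal model, not a real asset.)"""
    rng = random.Random(seed)
    signal = []
    for t in range(n_samples):
        base = math.sin(2 * math.pi * 0.05 * t)   # healthy rotation component
        noise = rng.gauss(0.0, 0.1)               # sensor noise
        defect = 0.0
        if fault:
            # defect amplitude grows with time, mimicking progressive wear
            defect = (t / n_samples) * math.sin(2 * math.pi * 0.2 * t)
        signal.append(base + noise + defect)
    return signal

# A labeled synthetic data set: (signal, label) pairs for ten assets,
# each simulated once healthy (label 0) and once with a fault (label 1).
dataset = [(simulate_bearing_signal(500, fault=f, seed=s), int(f))
           for s in range(10) for f in (False, True)]
```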

These systems also involve high-frequency data from many sensors, reporting at different times. The data must be time-aligned to apply calculations, which makes it difficult to design a streaming architecture. These challenges can be addressed through a stream processing framework that incorporates time-windowing and manages out-of-order data with Apache Kafka. The sensor data must then be synchronized for further signal processing before being passed to a machine learning model.
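A minimal sketch of the time-alignment step, assuming fixed event-time windows with per-window averaging; the tuple layout and the `align_readings` helper are illustrative only — in a production system this logic would live inside the Kafka-based stream processor:

```python
from collections import defaultdict

def align_readings(readings, window_ms=100):
    """Bucket out-of-order sensor readings into fixed event-time windows
    and average each sensor within a window, producing rows where all
    sensors share a common time base. `readings` are
    (timestamp_ms, sensor_id, value) tuples in arbitrary arrival order."""
    windows = defaultdict(lambda: defaultdict(list))
    for ts, sensor, value in readings:
        windows[ts // window_ms][sensor].append(value)
    aligned = []
    for w in sorted(windows):                    # emit windows in time order
        row = {s: sum(v) / len(v) for s, v in windows[w].items()}
        aligned.append((w * window_ms, row))
    return aligned

# Two sensors reporting at different times, arriving out of order
events = [(130, "temp", 21.0), (40, "vib", 0.9),
          (10, "temp", 20.0), (160, "vib", 1.1), (90, "temp", 20.5)]
aligned = align_readings(events, window_ms=100)
# aligned[0] covers t=0..99 with both sensors averaged into one row
```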

As these architectures and software stacks mature in areas like manufacturing, it is increasingly important to enable engineers and domain experts in this workflow to build and deploy the machine learning models and work with system architects on the system integration. This talk also highlights the benefit of using apps and exposing the functionality through API layers to help make these systems more accessible and extensible across the workflow.

This session will focus on building a system to address these challenges using MATLAB and Simulink. We will start with a physical model of an engineering asset and walk through the process of developing and deploying a machine learning model for that asset as a scalable and reliable cloud service.

 
 

Outline/Structure of the Talk

What is a digital twin? Guidelines on building a digital twin of the plant to generate sensor data and simulate fault scenarios

Identifying key condition indicators
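For example, two widely used condition indicators for vibration data are RMS and crest factor; a minimal sketch, with made-up sample values:

```python
import math

def condition_indicators(window):
    """Two common condition indicators over one window of vibration
    samples: RMS (overall energy) and crest factor (peak-to-RMS ratio,
    which often rises as impact-type bearing faults develop)."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    crest = max(abs(x) for x in window) / rms
    return {"rms": rms, "crest_factor": crest}

healthy = [0.1, -0.1, 0.12, -0.09, 0.11, -0.1]
faulty = [0.1, -0.1, 0.9, -0.09, 0.11, -0.1]   # same window, one impact spike
hi = condition_indicators(healthy)
fi = condition_indicators(faulty)
```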

Applying machine learning to develop predictive and remaining useful life (RUL) models
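As a simple stand-in for what such a model does, the sketch below fits a linear degradation trend by least squares and extrapolates it to a failure threshold to estimate RUL; a real RUL model would be trained on many run-to-failure histories, and all numbers here are illustrative:

```python
def estimate_rul(times, health_index, failure_threshold):
    """Fit a straight line to a degrading health index by least squares,
    extrapolate to the failure threshold, and return the time remaining
    from the latest observation (the remaining useful life)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_h = sum(health_index) / n
    slope = (sum((t - mean_t) * (h - mean_h)
                 for t, h in zip(times, health_index))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_h - slope * mean_t
    t_fail = (failure_threshold - intercept) / slope
    return t_fail - times[-1]

# Health index degrading by ~0.1 per hour; the asset fails at 2.0
rul_hours = estimate_rul([0, 1, 2, 3, 4], [1.0, 1.1, 1.2, 1.3, 1.4], 2.0)
```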

Guidance on integrating prototyped algorithms into cloud, business, and embedded systems
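One common integration pattern behind the last item is packaging the fitted model as a serializable artifact, so the same object can be loaded and scored in a cloud or embedded serving environment; a minimal Python sketch, with a plain dict of parameters standing in for a real trained model:

```python
import pickle

# A plain dict of fitted parameters stands in for a trained model here.
model = {"slope": 0.1, "intercept": 1.0, "failure_threshold": 2.0}

def predict_rul(model, t_now):
    """Score one observation using the packaged model parameters."""
    t_fail = (model["failure_threshold"] - model["intercept"]) / model["slope"]
    return t_fail - t_now

# Serialize the artifact for handoff to a serving environment,
# then reload it there and score exactly as in the prototype.
blob = pickle.dumps(model)
restored = pickle.loads(blob)
rul_hours = predict_rul(restored, t_now=4)
```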

Learning Outcome

  • The digital twin concept and its applications
  • Building ML models
  • Generating synthetic failure data
  • Deploying models to the edge and to the cloud

Target Audience

Team leads, controls engineers, digital innovation teams, and data scientists

Prerequisites for Attendees

Familiarity with data science and physics-based modeling

Submitted 1 month ago

Public Feedback


  • Viral B. Shah - Growing a compiler - Getting to ML from the general-purpose Julia compiler

    45 Mins
    Keynote
    Intermediate

    Since we originally proposed the need for a first-class language, compiler, and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms, often called differentiable programming, has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

  • Anuj Gupta - Continuous Learning Systems: Building ML systems that keep learning from their mistakes

    Scientist, Intuit
    45 Mins
    Talk
    Beginner

    Won't it be great to have ML models that can update their "learning" as and when they make a mistake and a correction is provided in real time? In this talk we look at a concrete business use case that warrants such a system. We will take a deep dive to understand the use case and how we went about building a continuously learning system for text classification, the approaches we took, and the results we got.

    For most machine learning systems, “train once, just predict thereafter” paradigm works well. However, there are scenarios when this paradigm does not suffice. The model needs to be updated often enough. Two of the most common cases are:

    1. When the distribution is non-stationary i.e. the distribution of the data changes. This implies that with time the test data will have very different distribution from the training data.
    2. The model needs to learn from its mistakes.

    While (1) is often addressed by retraining the model, (2) is often addressed with batch updates. Batch updating requires collecting a sizeable number of feedback points. What if you have far fewer feedback points? You need a model that can learn continuously, as and when the model makes a mistake and feedback is provided. To the best of our knowledge, there is very limited literature on this.
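A minimal sketch of such per-mistake updating, using a simple perceptron over bag-of-words features (illustrative only, not the system described in the talk):

```python
def train_continuously(stream, lr=1.0):
    """A perceptron over bag-of-words features that updates its weights
    the moment feedback says a prediction was wrong, instead of waiting
    to accumulate a batch of corrections."""
    weights = {}
    mistakes = 0
    for tokens, label in stream:                 # label is +1 / -1 feedback
        score = sum(weights.get(t, 0.0) for t in tokens)
        pred = 1 if score >= 0 else -1
        if pred != label:                        # learn from the mistake now
            mistakes += 1
            for t in tokens:
                weights[t] = weights.get(t, 0.0) + lr * label
    return weights, mistakes

stream = [(["refund", "late"], -1), (["great", "service"], 1),
          (["late", "delivery"], -1), (["great", "support"], 1)]
weights, mistakes = train_continuously(stream)
```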

  • Paolo Tamagnini / Kathrin Melcher - Guided Analytics - Building Applications for Automated Machine Learning

    90 Mins
    Tutorial
    Beginner

    In recent years, a wealth of tools has appeared that automate the machine learning cycle inside a black box. We take a different stance. Automation should not result in black boxes, hiding the interesting pieces from everyone. Modern data science should allow automation and interaction to be combined flexibly into a more transparent solution.

    In some specific cases, if the analysis scenario is well defined, then full automation might make sense. However, more often than not, these scenarios are not that well defined and not that easy to control. In these cases, a certain amount of interaction with the user is highly desirable.

    By mixing and matching interaction with automation, we can use Guided Analytics to develop predictive models on the fly. More interestingly, by leveraging automated machine learning and interactive dashboard components, custom Guided Analytics Applications, tailored to your business needs, can be created in a few minutes.

    We'll build an application for automated machine learning using KNIME Software. It will have an input user interface to control the settings for data preparation, model training (e.g. using deep learning, random forest, etc.), hyperparameter optimization, and feature engineering. We'll also create an interactive dashboard to visualize the results with model interpretability techniques. At the conclusion of the workshop, the application will be deployed and run from a web browser.

  • Govind Chada - Using 3D Convolutional Neural Networks with Visual Insights for Classification of Lung Nodules and Early Detection of Lung Cancer

    Student, Cy Woods
    45 Mins
    Case Study
    Intermediate

    Lung cancer is the leading cause of cancer death among both men and women in the U.S., with more than a hundred thousand deaths every year. The five-year survival rate is only 17%; however, early detection of malignant lung nodules significantly improves the chances of survival and prognosis.

    This study aims to show that 3D Convolutional Neural Networks (CNNs), which use the full 3D nature of the input data, perform better in classifying lung nodules than the previously used 2D CNNs. It also demonstrates an approach to developing an optimized 3D CNN that performs with state-of-the-art classification accuracies. CNNs, like other deep neural networks, have been black boxes, giving users no understanding of why they predict what they predict. This study, for the first time, demonstrates that Gradient-weighted Class Activation Mapping (Grad-CAM) techniques can provide visual explanations for model decisions in lung nodule classification by highlighting discriminative regions. Several CNN architectures using Keras and TensorFlow were implemented as part of this study. The publicly available LUNA16 dataset, comprising 888 CT scans with candidate nodules manually annotated by radiologists, was used to train and test the models. The models were optimized by varying the hyperparameters to reach accuracies exceeding 90%. Grad-CAM techniques were applied to the optimized 3D CNN to generate images that provide quality visual insights into the model's decision making. The results demonstrate the promise of 3D CNNs as highly accurate and trustworthy classifiers for early lung cancer detection, leading to improved chances of survival and prognosis.

  • Aditya Singh Tomar - Building Your Own Data Visualization Platform

    Data Consultant, ACT Insights
    45 Mins
    Demonstration
    Beginner

    Ever thought about having a mini interactive visualization tool that caters to your specific requirements? That is the product I created when I started independent consulting. Two years on, I have decided to make it public, even the source code.

    This session will give you an overview about creating a custom, personalized version of a visualization platform built on R and Shiny. We will focus on a mix of structure and flexibility to address the varying requirements. We will look at the code itself and the various components involved while exploring the customization options available to ensure that the outcome is truly a personal product.

  • Deepak Mukunthu - Democratizing & Accelerating AI through Automated Machine Learning

    90 Mins
    Workshop
    Beginner

    Intelligent experiences powered by AI can seem like magic to users. Developing them, however, is cumbersome, involving a series of sequential and interconnected decisions along the way that are quite time-consuming. What if there was an automated service that identifies the best machine learning pipelines for a given problem and data set? Automated Machine Learning does exactly that!

    Automated ML is based on a breakthrough from our Microsoft Research division. The approach combines ideas from collaborative filtering and Bayesian optimization to search an enormous space of possible machine learning pipelines intelligently and efficiently. It's essentially a recommender system for machine learning pipelines. Similar to how streaming services recommend movies for users, Automated ML recommends machine learning pipelines for data sets.

    Just as important, Automated ML accomplishes all this without having to see the customer’s data, preserving privacy. Automated ML is designed to not look at the customer’s data. Customer data and execution of the machine learning pipeline both live in the customer’s cloud subscription (or their local machine), which they have complete control of. Only the results of each pipeline run are sent back to the Automated ML service, which then makes an intelligent, probabilistic choice of which pipelines should be tried next.

    By making Automated ML available through the Azure Machine Learning service (Python-based SDK), we're empowering data scientists with a powerful productivity tool. We also have Automated ML available through PowerBI so that business analysts and BI professionals can also take advantage of machine learning. For developers familiar with Visual Studio and C#, we now have Automated ML available in C#.Net. If you are a SQL data engineer, we have a solution for you as well. And stay tuned as we continue to incorporate it into other product channels to bring the power of Automated ML to everyone!

    This session will provide an overview of Automated machine learning, how it works and how you can get started! We will walk through real-world use cases, build ML models using Automated ML and go through the E2E ML process of training, deployment, inferencing and operationalization of models.

  • Anuj Gupta - NLP Bootcamp

    Scientist, Intuit
    480 Mins
    Workshop
    Beginner

    Recent advances in machine learning have rekindled the quest to build machines that can interact with the outside environment as we humans do, using visual clues, voice, and text. An important piece of this trilogy is systems that can process and understand text in order to automate various workflows such as chat bots, named entity recognition, machine translation, information extraction, summarization, FAQ systems, etc.

    A key step towards achieving any of the above tasks is using the right set of techniques to represent text in a form that a machine can understand easily. Unlike images, where directly using the intensity of pixels is a natural way to represent the image, in the case of text there is no such natural representation. No matter how good your ML algorithm is, it can do only so much unless there is a richer way to represent the underlying text data. Thus, whatever NLP application you are building, it's imperative to find a good representation for your text data.

    In this bootcamp, we will understand the key concepts, maths, and code behind state-of-the-art techniques for text representation. We will cover mathematical explanations as well as implementation details of these techniques. This bootcamp aims to demystify both the theory (key concepts, maths) and the practice (code) that go into building these techniques. By the end of this bootcamp, participants will have gained a fundamental understanding of these schemes, with the ability to implement them on datasets of their interest.

    This would be a 1-day instructor-led hands-on training session to learn and implement an end-to-end deep learning model for natural language processing.

  • 480 Mins
    Workshop
    Intermediate

    In this session, data scientists from CellStrat AI Lab will present demos and presentations on cutting-edge AI solutions in :-

    • Computer Vision - Image Segmentation with FCN/UNets/DeepLab/ESPNet, Image Processing, Pose Estimation with DensePose
    • Natural Language Processing (NLP) - Latest NLP and Text Analytics with BERT, NER, Neural Language Translation etc to solve problems such as text summarization, QnA systems, video captioning etc.
    • Reinforcement Learning (RL) - Train Atari Video Games with RL, Augmented Random Search, Deep Q Learning etc. Apply RL techniques for gaming, financial portfolios, driverless cars etc. Train Robots with MuJoCo simulator.
    • Driverless Cars - Demo on multi-class roads datasets, path planning and navigation control for cars etc.
    • Neural Network Architectures - Faster and Smaller Neural Networks with MorphNet
  • 45 Mins
    Talk
    Intermediate

    This session will discuss Reinforcement Learning (RL) algorithms such as Policy Gradients, TD Learning and Deep-Q Learning. We will discuss how emerging RL algorithms can be used to train games, driverless cars, financial decision models and home automation systems.

  • Sriram P - Application Blueprint for Creating and Using Predictive Insights to Drive Business Outcomes

    45 Mins
    Case Study
    Intermediate

    Predictive analytics is now the #1 feature on application roadmaps. Companies are responding to customer demands and to competitive and market threats, and defending their price points and market segments, by adding innovative predictive use cases to their applications.

    Every predictive analytics project comes with a set of challenges that one has to overcome to incorporate insights into an existing application. This session will share some of the best practices and lessons learned from working with many customers to incorporate predictive analytics into their existing applications. It will highlight the challenges one encounters before, during, and after model creation, and ways to address them to build a successful business application.

  • Kathrin Melcher / Paolo Tamagnini - The Magic of Many-To-Many LSTMs: Codeless Product Name Generation and Neural Machine Translation

    45 Mins
    Case Study
    Intermediate

    What do product name generation and neural machine translation have in common?

    Both involve sequence analysis which can be implemented via recurrent neural networks (RNN) with LSTM layers.

    LSTM neural networks are the state-of-the-art technique for sequence analysis. In this presentation, we find out what LSTM layers are, learn about bidirectional and stacked LSTMs and the difference between many-to-one, many-to-many, and one-to-many structures, and train many-to-many LSTM networks for both use cases.

  • Deepak Mukunthu - Automated Machine Learning

    45 Mins
    Talk
    Beginner

    Intelligent experiences powered by AI can seem like magic to users. Developing them, however, is cumbersome, involving a series of sequential and interconnected decisions along the way that are quite time-consuming. What if there was an automated service that identifies the best machine learning pipelines for a given problem and data set? Automated Machine Learning does exactly that!

    With the goal of accelerating AI for data scientists by improving their productivity and democratizing AI for other data personas who want to get into machine learning, Automated ML comes in many different flavors and experiences. Automated ML is one of the top 5 AI trends this year. This session will cover concepts of Automated ML, how it works, different variations of it and how you can use it for your scenarios.

  • Rishu Gupta - Simplifying AI Workflows: From Development to Deployment

    45 Mins
    Talk
    Intermediate

    As Deep Learning becomes more prevalent across industries, there is a growing need to make it broadly available, accessible, and applicable – not just for data scientists but to engineers and scientists with varying specializations. MATLAB being an integrated framework allows you to accelerate building consumer and industrial applications, while utilizing the capabilities of open-source frameworks like TensorFlow to train the deep learning networks. Some key aspects/challenges in building artificial intelligence (AI) applications include:

    • Curating labeled datasets for supervised learning, including data augmentation and generation
    • Applying traditional signal and image processing techniques to assist deep learning, and
    • Integrating models with embedded or enterprise systems.

    MATLAB is well-known for its strength in traditional engineering and scientific applications like image and signal processing, controls, and wireless system design. This talk demonstrates how you can work with various deep learning frameworks to make it easier to develop, deploy, and maintain AI-powered applications for many industrial use cases. Learn how you can:

    • Automate ground truth labeling for image, video, Lidar and sensor data
    • Apply physical models and simulations to augment training data, develop control algorithms and test the integrated system, potentially with hardware in the loop
    • Generate high-performance C++ and CUDA engines for embedded system and cloud deployment

    You can also accomplish this by interoperating with other deep learning frameworks at different points in your workflow:

    • Data interoperability (like Parquet format) lets you preprocess signals or automatically label data in one framework while training models in another
    • Model exchange formats (like ONNX) or importers let you evaluate and optimize a model trained in a different framework
    • Compilation and automated code generation make it easy to integrate models in cloud environments and with embedded or enterprise systems.