Person Identification via Multi-Modal Interface with Combination of Speech and Image Data

Bengaluru | Aug 10th, 11:40 AM - 01:10 PM IST | Jupiter 1 | 22 Interested

Multi-Modalities

Having multiple modalities in a system gives users more affordances and can contribute to a more robust system. It also improves accessibility for users who work more effectively with certain modalities. Multiple modalities can serve as a backup when certain forms of communication are not possible; this is especially true for redundant modalities, where two or more modalities communicate the same information. Certain combinations of modalities can also enrich computer-human or human-computer interaction, because each modality may be more effective at expressing one form or aspect of information than the others. For example, MUST researchers are building a personalized humanoid equipped with various interchangeable input devices and sensors (keyboard, pointing device, touchscreen, computer vision, speech recognition, motion and orientation sensing, etc.) that receive information from humans through a standardized method of communication with the computer, affording practical adjustments to the user, providing a richer interaction depending on the context, and making the system more robust.

There are six types of cooperation between modalities; they help define how a combination or fusion of modalities works together to convey information more effectively.

  • Equivalence: information is presented in multiple ways and can be interpreted as the same information
  • Specialization: when a specific kind of information is always processed through the same modality
  • Redundancy: multiple modalities process the same information
  • Complementarity: multiple modalities take separate information and merge it
  • Transfer: a modality produces information that another modality consumes
  • Concurrency: multiple modalities take in separate information that is not merged

Computer - Human Modalities

Computers utilize a wide range of technologies to communicate and send information to humans:

  • Vision - computer graphics typically through a screen
  • Audition - various audio outputs

Project Features

Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.

Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, services, as well as with people.

Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

Project Demos

Multi-Modal Interaction: https://www.youtube.com/watch?v=jQ8Gq2HWxiA

Gesture Detection: https://www.youtube.com/watch?v=rDSuCnC8Ei0

Speech Recognition: https://www.youtube.com/watch?v=AewM3TsjoBk

Assignment (Hands-on Challenge for Attendees)

Real-time multi-modal access control system for authorized access to a work environment. All the key concepts and individual steps will be demonstrated and explained in this workshop; attendees then need to customize the generic code and approach to complete this hands-on challenge.

 
 

Outline/Structure of the Workshop

Use-Case:

Person identification and verification based on only one modality (either face recognition or voice recognition) has many limitations. For example, in insufficient light conditions, or if there is any problem with the camera lens, person identification or verification will fail. Similarly, when there is too much background noise, speaker identification becomes difficult. In such cases a multi-modal system is required: it is more robust and provides better performance.

This workshop targets person identification and verification using multi-modal analysis, combining both computer vision and audio processing.

Topic Breakdown:

  • Introduction to Multi-Modal Learning (MML) using Deep Learning (10 mins)
  • Demonstrating model performance for Person Identification using multi-modality and comparing the performance with that of individual modalities. (10 mins)
  • Hands-on assignment to carry out person identification with a late-fusion multi-modal technique on image and speech data; see the sketch below (70 mins)
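
The late fusion step amounts to combining the per-identity scores produced independently by the face model and the speaker model. A minimal sketch of score-level late fusion, assuming each modality already outputs per-identity probabilities; the variable names and weights here are illustrative, not the workshop notebook's code:

```python
import numpy as np

def late_fusion(face_probs, voice_probs, w_face=0.5, w_voice=0.5):
    """Score-level late fusion: combine per-identity probabilities
    produced independently by a face model and a speaker model."""
    fused = w_face * np.asarray(face_probs) + w_voice * np.asarray(voice_probs)
    return fused, int(np.argmax(fused))

# Illustrative scores for three enrolled identities
face_probs = [0.55, 0.30, 0.15]   # camera partly occluded -> less confident
voice_probs = [0.80, 0.10, 0.10]  # clean audio -> confident speaker model
fused, person_id = late_fusion(face_probs, voice_probs)
print(fused, person_id)           # identity 0 wins despite the uncertain face score
```

Equal weights are only a starting point; weighting one modality higher when the other is degraded (poor lighting, heavy background noise) is exactly where the fused system gains its robustness.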

Requirements for attendees:

Access to Google Colab using their Google Accounts

1. Open Google Colab: https://colab.research.google.com/
2. Go to File -> Open Notebook
3. Select Github option
4. Paste the github notebook link: https://github.com/adib0073/ODSC_2019-Multi-Modal-Learning/blob/master/odsc_workshop_main.ipynb
5. Once the notebook is open, connect to the runtime environment after signing in with your Google account
6. While running the notebook, uncomment the Colab-specific lines of code (see the sketch below)
7. Change the file paths/directories wherever needed to match the mounted Google Colab path.
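
Steps 6 and 7 refer to Colab-specific lines in the notebook. A minimal sketch of what such lines typically look like, assuming the data is mounted from Google Drive; the actual notebook's lines and paths may differ:

```python
# Uncomment only when running inside Google Colab (step 6).
# from google.colab import drive
# drive.mount('/content/drive')

# Adjust to wherever the workshop data sits after mounting (step 7).
# DATA_DIR = '/content/drive/My Drive/ODSC_2019-Multi-Modal-Learning/data'
```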

GitHub link: https://github.com/adib0073/ODSC_2019-Multi-Modal-Learning

Learning Outcome

  • Introductory concepts about Multi-Modal Learning
  • Performance analysis of the individual face and speech recognition models
  • Combining modalities with a late fusion technique for person identification from image and audio data, and analyzing the combined performance

Target Audience

Anyone, anywhere, who is interested in building AI systems.

Prerequisites for Attendees

  • Basic Mathematics, Statistics and Programming with Python!
  • Basic knowledge of neural networks
  • Basic knowledge of computer vision and audio processing

Prerequisites for preparing the assignment dataset:

  • Image data: JPG image of size 250 px x 250 px; the image can be edited in any paint tool, e.g. Paintbrush
  • Audio data: key phrase to record: 'i am going to make him an offer he cannot refuse'; sampling rate 16000 Hz, mono channel, 16-bit PCM (suggested tool: Audacity). See the preparation sketch below.
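
A minimal sketch of how the dataset specifications above could be produced or checked programmatically, using Pillow for the image and the soundfile library for the audio; the file names are placeholders:

```python
from PIL import Image
import soundfile as sf

# Image: resize to 250 x 250 and save as JPG
img = Image.open('my_face.png').convert('RGB')
img.resize((250, 250)).save('my_face.jpg', 'JPEG')

# Audio: check that the recording is 16000 Hz, mono, 16-bit PCM
data, rate = sf.read('key_phrase.wav')
assert rate == 16000, f'expected 16000 Hz, got {rate}'
assert data.ndim == 1, 'expected a mono (single-channel) recording'
print(sf.info('key_phrase.wav').subtype)  # should report 'PCM_16'
```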

  • Dipanjan Sarkar - Explainable Artificial Intelligence - Demystifying the Hype

    45 Mins
    Tutorial
    Intermediate

    The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of them usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more ‘applied’ than theoretical, and the effective application of these models on the right data to solve complex real-world problems is of paramount importance.

    A machine learning or deep learning model by itself consists of an algorithm which tries to learn latent patterns and relationships from data without hard-coding fixed rules. Hence, explaining how a model works to the business always poses its own set of challenges. There are some domains in the industry, especially in the world of finance like insurance or banking, where data scientists often end up having to use more traditional machine learning models (linear or tree-based), because model interpretability is very important for the business to explain each and every decision being taken by the model. However, this often leads to a sacrifice in performance. This is where complex models like ensembles and neural networks typically give us better and more accurate performance (since true relationships are rarely linear in nature); we, however, end up being unable to have proper interpretations for model decisions.

    To address these gaps, I will take a conceptual yet hands-on approach in which we explore some of these challenges in depth in the context of explainable artificial intelligence (XAI) and human-interpretable machine learning, and even showcase examples using state-of-the-art model interpretation frameworks in Python!
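
    For illustration only (the abstract does not name specific frameworks), a minimal sketch of model interpretation with SHAP, one such Python framework, on a tree-based model might look like this; the dataset and model choice are assumptions, not the speaker's material:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) per prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features push predictions up or down, and by how much
shap.summary_plot(shap_values, X)
```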

  • Dr. Saptarsi Goswami - Mastering feature selection: basics for developing your own algorithm

    45 Mins
    Tutorial
    Beginner

    Feature selection is one of the most important processes for pattern recognition, machine learning and data mining problems. A successful feature selection method improves the learning model's performance and interpretability as well as reducing the computational cost of the classifier by dimensionality reduction of the data. Feature selection is computationally expensive and becomes intractable even for a few hundred features. This is a relevant problem because text, image and next-generation sequence data are all inherently high dimensional. In this talk, I will discuss a few algorithms we have developed over the last 5-6 years. First, we will set the context of feature selection, with some open issues, followed by its definition and taxonomy; this will take about 20 minutes. Then, in the next 20 minutes, we will discuss a couple of research efforts in which we improved feature selection for textual data and proposed a graph-based mechanism to view feature interaction. After the talk, participants will appreciate the need for feature selection, the basic principles of feature selection algorithms, and finally how they can start developing their own models.
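
    As a point of reference for the basics, a minimal filter-style feature selection baseline with scikit-learn (an illustrative choice, not the speaker's own algorithms) might look like this:

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_digits(return_X_y=True)            # 64 pixel-intensity features
selector = SelectKBest(score_func=mutual_info_classif, k=16)
X_reduced = selector.fit_transform(X, y)

print(X.shape, '->', X_reduced.shape)          # (1797, 64) -> (1797, 16)
print(selector.get_support(indices=True))      # indices of the retained features
```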

  • Ishita Mathur (Data Scientist, Gojek Tech) - How GO-FOOD built a Query Semantics Engine to help you find food faster

    45 Mins
    Case Study
    Beginner

    Context: The Search problem

    GOJEK is a SuperApp: 19+ apps within an umbrella app. One of these is GO-FOOD, the first food delivery service in Indonesia and the largest food delivery service in Southeast Asia. There are over 300 thousand restaurants on the platform with a total of over 16 million dishes between them.

    Over two-thirds of those who order food online using GO-FOOD do so by utilising text search. Search engines are so essential to our everyday digital experience that we don’t think twice when using them anymore. Search engines involve two primary tasks: retrieval of documents and ranking them in order of relevance. While improving that ranking is an extremely important part of improving the search experience, actually understanding that query helps give the searcher exactly what they’re looking for. This talk will show you what we are doing to make it easy for users to find what they want.

    GO-FOOD uses the ElasticSearch stack with restaurant and dish indexes to search for what the user types. However, this results in only exact text matches and at most, fuzzy matches. We wanted to create a holistic search experience that not only personalised search results, but also retrieved restaurants and dishes that were more relevant to what the user was looking for. This is being done by not only taking advantage of ElasticSearch features, but also developing a Query semantics engine.

    Query Understanding: What & Why

    This is where Query Understanding comes into the picture: it’s about using NLP to correctly identify the search intent behind the query and return more relevant search results; it’s about the interpretation process before the results are even retrieved and ranked. The semantic neighbours of the query itself become the focus of the search process: after all, if I don’t understand what you’re trying to ask for, how will I give you what you want?

    During this talk, you will learn how we are taking advantage of word embeddings to build a Query Understanding Engine that is holistically designed to make the customer’s experience as smooth as possible. I will go over the techniques we used to build each component of the engine, the data and algorithmic challenges we faced, and how we solved each problem we came across.
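
    For illustration (not GO-FOOD's production code), a minimal sketch of finding the semantic neighbours of a query token with gensim word embeddings might look like this; the tiny corpus below is made up:

```python
from gensim.models import Word2Vec

# Tiny made-up corpus of tokenised food-search queries
queries = [
    ['fried', 'chicken', 'near', 'me'],
    ['ayam', 'goreng', 'crispy'],
    ['chicken', 'wings', 'spicy'],
    ['nasi', 'goreng', 'ayam'],
    ['iced', 'coffee', 'latte'],
]

model = Word2Vec(queries, vector_size=32, window=2, min_count=1, epochs=200, seed=7)

# Semantic neighbours of a query token become candidates for query expansion
print(model.wv.most_similar('chicken', topn=3))
```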

  • Suvro Shankar Ghosh - Learning Entity Embeddings from a Knowledge Graph

    45 Mins
    Case Study
    Intermediate
    • Over a period of time, a lot of knowledge bases have evolved. A knowledge base is a structured way of storing information, typically as (Subject, Predicate, Object) triples.
    • Such knowledge bases are an important resource for question answering and other tasks. But they are often incomplete, since they cannot capture all the data in the world, and therefore lack the ability to reason over their discrete entities and their unknown relationships. Here we introduce an expressive neural tensor network that is suitable for reasoning over known relationships between two entities.
    • With such a model in place, we can ask questions; the model will try to predict the missing data links within the trained model and answer questions related to finding similar entities, reasoning over them, and predicting various relationship types between two entities not connected in the Knowledge Graph.
    • Knowledge Graph infoboxes were added to Google's search engine in May 2012

    What is the knowledge graph?

    ▶ Knowledge in graph form!

    ▶ Captures entities, attributes, and relationships

    ▶ More specifically, the “knowledge graph” is a database that collects millions of pieces of data about keywords people frequently search for on the World Wide Web and the intent behind those keywords, based on the already available content

    ▶ In most cases, KGs are based on Semantic Web standards and have been generated by a mixture of automatic extraction from text or structured data, and manual curation work.

    ▶ Structured Search & Exploration
    e.g. Google Knowledge Graph, Amazon Product Graph

    ▶ Graph Mining & Network Analysis
    e.g. Facebook Entity Graph

    ▶ Big Data Integration
    e.g. IBM Watson

    ▶ Diffbot, GraphIQ, Maana, ParseHub, Reactor Labs, SpazioDati
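
    The abstract describes scoring relationships between entity pairs with a neural tensor network; as a much simpler illustration of the same idea of ranking candidate links with learned entity embeddings, here is a TransE-style scoring sketch (random vectors stand in for trained embeddings, and the entity names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# In a trained model these embeddings are learned from known triples;
# random vectors are used here only to show the scoring mechanics.
entities = {name: rng.normal(size=dim) for name in ['Paris', 'France', 'Berlin', 'Germany']}
relations = {'capital_of': rng.normal(size=dim)}

def score(head, relation, tail):
    """TransE plausibility: higher (less negative) means a more plausible triple."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate tails for the query (Paris, capital_of, ?)
candidates = ['France', 'Germany']
print(sorted(candidates, key=lambda t: score('Paris', 'capital_of', t), reverse=True))
```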

  • Suvro Shankar Ghosh - Real-Time Advertising Based On Web Browsing In Telecom Domain

    45 Mins
    Case Study
    Intermediate

    The following section describes the telco-domain use case of real-time advertising based on web browsing, in terms of:

    • Potential business benefits to earn.
    • Functional use case architecture depicted.
    • Data sources (attributes required).
    • Analytics to be performed.
    • Output to be provided and target systems to be integrated with.

    This use case is part of the monetization category. Its goal is to provide a kind of data mart that gives either Telecom business parties or external third parties sufficient, relevant and customized information to produce real-time advertising for Telecom end users. The targeted customers are all Telecom network end users.

    The customized information to be delivered to advertisers is based on several dimensions:

    • Customer characteristics: demographic, telco profile.
    • Customer usage: Telco products or any other interests.
    • Customer time/space identification: location, zoning areas, usage time windows.

    Use case requirements are detailed in the description below as “Targeting methods”.

    1. Search Engine Targeting:

    The telco will use users' web history to track what they are looking at and to gather information about them. When a user goes onto a website, their web browsing history shows information about the user, what he or she searched for, and where they are from (found via the IP address); a profile is then built around them, allowing the telco to target ads to the user more specifically.

    2. Content and Contextual Targeting:

    This is when advertisers can put ads in a specific place, based on the relative content present. This targeting method can be used across different mediums: for example, an online article about purchasing homes would have an advert associated with this context, such as an insurance ad. This is achieved through an ad-matching system which analyses the contents of a page or finds keywords and presents a relevant advert, sometimes through pop-ups.

    3. Technical Targeting:

    This form of targeting is associated with the user’s own software or hardware status. The advertisement is altered depending on the user’s available network bandwidth, for example if a user is on their mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate.

    4. Time Targeting:

    This type of targeting is centered around time and focuses on the idea of fitting in around people’s everyday lifestyles. For example, scheduling specific ads at a timeframe from 5-7pm, when the

    5. Sociodemographic Targeting:

    This form of targeting focuses on the characteristics of consumers, including their age, gender, and nationality. The idea is to target users specifically using the data collected about them, for example targeting a male in the age bracket of 18-24. The telco will use this form of targeting by showing advertisements relevant to the user's individual demographic profile. This can show up in the form of banner ads or commercial videos.

    6. Geographical and Location-Based Targeting:

    This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually transfer the location through different cells.

    7. Behavioral Targeting:

    This form of targeted advertising is centered around the activity/actions of users and is more easily achieved on web pages. Information from browsing websites can be collected, which reveals patterns in users' search histories.

    8. Retargeting:

    This is where advertising uses behavioral targeting to produce ads that follow you after you have looked at or purchased a particular item. Advertisers use this information to 'follow you' and try to grab your attention so you do not forget.

    9. Opinions, attitudes, interests, and hobbies:

    Psychographic segmentation also includes opinions on gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues.

  • Suvro Shankar Ghosh - Attempt of a classification of AI use cases by problem class

    20 Mins
    Talk
    Intermediate

    There are many attempts on the internet to classify and structure the various AI techniques, produced by a variety of sources with specific interests in this emerging market, and the fact that some new technologies make use of multiple techniques does not make it any easier to provide easy, top-down access to and guidance through AI for business decision makers. Most sources structure AI techniques by their core ability (e.g. supervised vs. unsupervised learning), but even this is sometimes controversial (e.g. genetic algorithms). The approach taken here is to find groups of use cases that represent similar problem-solving strategies (just like distinguishing "search" from "sort" without reference to a particular technique like "Huffman search" or "qsort"). Of course, most AI techniques are combinations, but with a different focus.

    There are many different sorting criteria for clustering use cases, and these criteria determine how well, and whether at all, the above objectives can be achieved. The target is to find “natural” classes of problems that, in an abstract way, apply to all the corresponding use cases. Since the clustering is used to determine which AI techniques are applicable, the classes should correspond to the typical characteristics of AI techniques.

    Each problem class is characterised by a core problem description, sample use cases, key measure(s), and typical AI techniques:

    Normalization: pre-process and convert unstructured data into structured data (patterns)
    • Sample use cases: big data pre-processing; sample normalization (sound, face images, …); triggered time sequences; feature extraction
    • Key measure: conversion quality

    Clustering: detect pattern accumulations in a data set
    • Sample use cases: customer segment analysis; optical skin cancer analysis; music popularity analysis
    • Key measure: inter- and intra-cluster resolution

    Feature Extraction: detect features within patterns and samples
    • Sample use cases: facial expression analysis (eyes and mouth); scene analysis & surveillance (people identification)
    • Key measures: accuracy; completeness

    Recognition: detect a pattern in a large set of samples
    • Sample use cases: image/face recognition; speaker recognition; natural language recognition; associative memory
    • Key measures: accuracy; recognition speed; learning or storage speed; capacity

    Generalization: interpolation and extrapolation of feature patterns in a pattern space
    • Sample use cases: adaptive linear feature interpolation; fuzzy robot control/navigation in unknown terrain
    • Key measures: accuracy; prediction pattern range
    • AI techniques: Kohonen maps (SOM, SOFM); any backpropagation NN; fuzzy logic systems

    Prediction: predict future patterns (e.g. based on past experience, i.e. observed sequences of patterns)
    • Sample use cases: stock quote analysis; heart attack prevention; next best action machines; weather/storm forecast; pre-fetching in CPUs
    • Key measures: accuracy; prediction time range

    Optimization: optimize a given structure (pattern) according to a fitness or energy function
    • Sample use cases: (bionic) plane or ship construction; agricultural fertilization optimization; genetic programming
    • Key measures: convergence; detection of local/global optimum; (heaviness) cost of optimization

    Conclusion: detect or apply a (correlative) rule in a data set
    • Sample use cases: QM correlation analysis; next best action machines
    • Key measures: consistency; completeness
    • AI techniques: rule-based systems; expert systems

  • Lakshya (Applied Researcher-2, Salesforce)

    45 Mins
    Talk
    Intermediate

    Deep learning has significantly improved state-of-the-art performance for natural language processing (NLP) tasks, but each one is typically studied in isolation. The Natural Language Decathlon (decaNLP) is a new benchmark for studying general NLP models that can perform a variety of complex natural language tasks. By requiring a single system to perform ten disparate natural language tasks, decaNLP offers a unique setting for multitask, transfer, and continual learning. decaNLP is maintained by Salesforce and is publicly available on GitHub for tasks like question answering, machine translation, summarization, and sentiment analysis.

  • Siboli Mukherjee (Data Analyst, Vodafone Idea Ltd) - Real time Anomaly Detection in Network KPI using Time Series

    20 Mins
    Experience Report
    Intermediate

    Abstract:

    How to accurately detect Key Performance Indicator (KPI) anomalies is a critical issue in cellular network management. In this talk I shall introduce CNR (Cellular Network Regression), a unified performance anomaly detection framework for KPI time-series data. CNR realizes simple statistical modelling and machine-learning-based regression for anomaly detection; in particular, it specifically takes into account seasonality and trend components and supports automated prediction-model retraining based on prior detection results. I demonstrate how CNR detects two types of anomalies of practical interest, namely sudden drops and correlation changes, based on a large-scale real-world KPI dataset collected from a metropolitan LTE network. I explore various prediction algorithms and feature selection strategies, and provide insights into how regression analysis can make automated and accurate KPI anomaly detection viable.

    Index Terms—anomaly detection, NPAR (Network Performance Analysis)

    1. INTRODUCTION

    The continuing advances of cellular network technologies make high-speed mobile Internet access a norm. However, cellular networks are large and complex by nature, and hence production cellular networks often suffer from performance degradations or failures due to various reasons, such as background interference, power outages, malfunctions of network elements, and cable disconnections. It is thus critical for network administrators to detect and respond to performance anomalies of cellular networks in real time, so as to maintain network dependability and improve subscriber service quality. To pinpoint performance issues in cellular networks, a common practice adopted by network administrators is to monitor a diverse set of Key Performance Indicators (KPIs), which provide time-series data measurements that quantify specific performance aspects of network elements and resource usage. The main task of network administrators is to identify any KPI anomalies, which refer to unexpected patterns that occur at a single time instant or over a prolonged time period.

    Today’s network diagnosis still mostly relies on domain experts manually configuring anomaly detection rules; such a practice is error-prone, labour-intensive, and inflexible. Recent studies propose to use (supervised) machine learning for anomaly detection in cellular networks.
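
    A minimal sketch of the regression-plus-residual idea described above, on synthetic hourly KPI data with a daily seasonal pattern (the actual CNR framework is considerably richer and also handles retraining and correlation changes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 14)                       # two weeks of hourly KPI samples
seasonal = 10 * np.sin(2 * np.pi * hours / 24)   # daily traffic cycle
kpi = 100 + 0.01 * hours + seasonal + rng.normal(0, 1, hours.size)
kpi[200] -= 15                                   # inject a sudden drop

# Regress the KPI on trend + daily-seasonality features
X = np.column_stack([hours,
                     np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
residuals = kpi - LinearRegression().fit(X, kpi).predict(X)

# Flag points whose residual is far outside the typical noise level
anomalies = np.where(np.abs(residuals) > 4 * residuals.std())[0]
print(anomalies)                                 # should include index 200
```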

  • Siboli Mukherjee (Data Analyst, Vodafone Idea Ltd) - AI in Telecommunication - An Obstacle or Opportunity

    45 Mins
    Talk
    Executive

    Introduction

    “Alexa, launch Netflix!”

    No longer limited to providing basic phone and Internet service, the telecom industry is at the epicentre of technological growth, led by its mobile and broadband services in the Internet of Things (IoT) era. This growth is expected to continue. The driver for this growth? Artificial intelligence (AI).

    Artificial intelligence applications are revolutionizing the way telecoms operate, optimize and provide services to their customers.

    Today’s communications service providers (CSPs) face increasing customer demands for higher quality services and better customer experiences (CX). Telecoms are addressing these opportunities by leveraging the vast amounts of data collected over the years from their massive customer base. This data is culled from devices, networks, mobile applications, geolocations, detailed customer profiles, services usage and billing data.

    Telecoms are harnessing the power of AI to process and analyse these huge volumes of Big Data in order to extract actionable insights to provide better customer experiences, improve operations, and increase revenue through new products and services.

    With Gartner forecasting that 20.4 billion connected devices will be in use worldwide by 2020, more and more CSPs are jumping on the bandwagon, recognizing the value of artificial intelligence applications in the telecommunications industry.

    Forward-thinking CSPs have focused their efforts on four main areas where AI has already made significant inroads in delivering tangible business results: network optimization, preventive maintenance, virtual assistants, and robotic process automation (RPA).

    Network optimisation

    AI is essential for helping CSPs build self-optimizing networks (SONs), where operators have the ability to automatically optimize network quality based on traffic information by region and time zone. Artificial intelligence applications in the telecommunications industry use advanced algorithms to look for patterns within the data, enabling telecoms to both detect and predict network anomalies, and allowing operators to proactively fix problems before customers are negatively impacted.

    Some popular AI solutions for telecoms are ZeroStack’s ZBrain Cloud Management, which analyses private cloud telemetry storage and use for improved capacity planning, upgrades and general management; Aria Networks, an AI-based network optimization solution that counts a growing number of Tier-1 telecom companies as customers, and Sedona Systems’ NetFusion, which optimizes the routing of traffic and speed delivery of 5G-enabled services like AR/VR. Nokia launched its own machine learning-based AVA platform, a cloud-based network management solution to better manage capacity planning, and to predict service degradations on cell sites up to seven days in advance.

    Predictive maintenance

    AI-driven predictive analytics are helping telecoms provide better services by utilizing data, sophisticated algorithms and machine learning techniques to predict future results based on historical data. This means telecoms can use data-driven insights to monitor the state of equipment, predict failures based on patterns, and proactively fix problems with communications hardware, such as cell towers, power lines, data centre servers, and even set-top boxes in customers’ homes.

    In the short term, network automation and intelligence will enable better root cause analysis and prediction of issues. Long term, these technologies will underpin more strategic goals, such as creating new customer experiences and dealing efficiently with business demands. An innovative solution by AT&T is using AI to support its maintenance procedures: the company is testing a drone to expand its LTE network coverage and to utilize the analysis of video data captured by drones for tech support and infrastructure maintenance of its cell towers. Preventive maintenance is not only effective on the network side, but on the customer’s side as well. Dutch telecom KPN analyses the notes generated by its call centre agents, and uses the insights generated to make changes to the interactive voice response (IVR) system.

    Virtual Assistants

    Conversational AI platforms, known as virtual assistants, have learned to automate and scale one-on-one conversations so efficiently that they are projected to cut business expenses by as much as $8 billion in the next five years. Telecoms have turned to virtual assistants to help contend with the massive number of support requests for installation, set-up, troubleshooting and maintenance, which often overwhelm customer support centres. Using AI, telecoms can implement self-service capabilities that instruct customers how to install and operate their own devices.

    Vodafone introduced its new chatbot, TOBi, to handle a range of customer service-type questions. The chatbot scales responses to simple customer queries, thereby delivering the speed that customers demand. Nokia’s virtual assistant MIKA suggests solutions for network issues, leading to a 20% to 40% improvement in first-time resolution.

    Robotic process automation (RPA)

    CSPs all have vast numbers of customers and an endless volume of daily transactions, each susceptible to human error. Robotic Process Automation (RPA) is a form of business process automation technology based on AI. RPA can bring greater efficiency to telecommunications functions by allowing telecoms to more easily manage their back office operations and the large volumes of repetitive and rules-based processes. By streamlining execution of once complex, labor-intensive and time-consuming processes such as billing, data entry, workforce management and order fulfillment, RPA frees CSP staff for higher value-add work.

    According to a survey by Deloitte, 40% of Telecom, Media and Tech executives say they have garnered “substantial” benefits from cognitive technologies, with 25% having invested $10 million or more. More than three-quarters expect cognitive computing to “substantially transform” their companies within the next three years.

    Summary

    Artificial intelligence applications in the telecommunications industry are increasingly helping CSPs manage, optimize and maintain not only their infrastructure, but their customer support operations as well. Network optimization, predictive maintenance, virtual assistants and RPA are examples of use cases where AI has impacted the telecom industry, delivering an enhanced CX and added value for the enterprise overall.

  • Kaushik Dey - Algorithms at Edge leveraging decentralized learning

    45 Mins
    Talk
    Advanced

    The problem of network behavior prediction has been an ongoing study by researchers for quite a while now. Network behavior typically exhibits a complex sequential pattern and is often difficult to predict. Nowadays there are several techniques to predict the degradation in Network KPIs like throughput, latency etc., using various machine learning techniques like Deep Neural Networks, where the initial layers have learnt to map the raw features like performance counter measurements, weather, system configuration details etc into a feature space where classification by the final layers can be performed.

    Given that the initial number of counters (which constitutes the dimensionality) is substantial (more than 2,000), the problem requires a huge amount of data to train the deep neural networks. This needs resources and time and, more importantly, requires provisioning a huge amount of data for every trial. Given that each node generates a huge amount of data (measurements on all 2,000 counters, generated at 15-minute intervals for each of the 6 cells in an eNodeB), and that this data needs to be transported from several hundred eNodeBs to one central data center, a very fat data pipe is required, and consequently a huge investment, to enable a fault-prediction apparatus across the network.

    The alternative is to have compute infrastructure at the node and take the intelligence to the edge. However, given the huge amount of data generated at a single node, having compute at each node was proving to be expensive. Nowadays this compute requirement at the node can be reduced through the use of transfer learning. The remaining challenge is sharing the intelligence and developing a system which is collectively intelligent across nodes.

    Network topology, climate features and user patterns vary across regions and service providers, and hence a unique model is often necessary to serve each node. However, in order to deal with unseen patterns, intelligence from other nodes can be useful, which leads us to building a global model; this again runs into the fat-data-pipeline requirement, which makes it commercially less attractive.

    To get around this challenge, federated learning is used in combination with transfer learning.

    This presentation details deep learning architectures that combine federated learning with transfer learning to enable the construction and updating of global models. These global models absorb intelligence from the nodes and are constructed by a consensus mechanism whereby the weights, and changes to the weights, of local models are shared with the global model. The local models are in turn periodically updated once a global model update iteration is complete. Furthermore, local model updates are made only in the final layers, while the initial layers are frozen; this also reduces the compute requirement at the node.

    The above principles have been implemented as a first-of-a-kind solution and have proved to be a success across multiple customers, delivering a compelling ML-enabled fault prediction and self-healing mechanism while keeping the investment in infrastructure lower than would have been required with traditional deep learning architectures.

    This talk will specifically detail how the above principles of federated and transfer learning are applied to LSTMs.
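
    A minimal sketch of the weight-sharing loop described above, with a frozen feature extractor (transfer learning) and federated averaging of the trainable head; plain NumPy stands in for the LSTM models the talk covers, and all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each node keeps a frozen feature extractor (transfer learning) plus a small
# trainable head; only head weights travel between the nodes and the global model.
frozen_backbone = rng.normal(size=(2000, 64)) / np.sqrt(2000)  # never updated locally
global_head = np.zeros((64, 2))                                # global final layer

def local_update(global_head, node_data, node_labels, lr=0.1, steps=50):
    """One round of local fine-tuning of the head on a node's own counter data."""
    head = global_head.copy()
    feats = node_data @ frozen_backbone            # frozen initial layers
    for _ in range(steps):
        logits = feats @ head
        logits -= logits.max(axis=1, keepdims=True)              # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        grad = feats.T @ (probs - np.eye(2)[node_labels]) / len(node_labels)
        head -= lr * grad
    return head

# One federated round: each node trains locally, the global head is the consensus average
node_heads = []
for _ in range(3):                                 # three eNodeB sites
    data = rng.normal(size=(32, 2000))             # counter measurements (synthetic)
    labels = rng.integers(0, 2, size=32)           # degradation / no degradation
    node_heads.append(local_update(global_head, data, labels))
global_head = np.mean(node_heads, axis=0)          # averaged update pushed back to nodes
print(global_head.shape)
```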
