The key to solving ML problems in the telecom industry lies in continuously collecting and evaluating data from different categories of customers and networks in order to track and dive into varying performance metrics. KPIs form the basis of network monitoring, helping telecom operators automatically add and scale network resources. Such smart automated systems are built to increase customer engagement through an enhanced customer experience, and to track anomalies in customer behavior with timely detection and correction. Further, the system is designed to scale and serve current 4G/LTE and upcoming 5G networks, minimizing non-effective cell site visits and enabling quick root cause analysis (RCA).

Network congestion has remained an ever-growing problem. Because deploying additional network capacity is expensive, operators have attempted a variety of strategies to match network demand with existing infrastructure. To keep costs under control, operators apply control measures that attempt to allocate bandwidth fairly among users and throttle the bandwidth of users who consume excessive amounts. This approach has had limited success. Alternatively, techniques that over-provision the network with extra bandwidth for quality-of-experience (QoE) headroom have proved ineffective and inefficient due to the lack of proper estimation.

The evolution of 5G networks will lead manufacturers and telecom operators to use high data-transfer rates, wide network coverage, and low latency to build smart factories with automation, artificial intelligence, and the Internet of Things (IoT). Applying advanced data science and AI can provide better predictive insights to improve network capacity-planning accuracy. Better network provisioning would yield better network utilization for both next-generation 5G networks and current 4G/LTE networks. Further, AI models can be designed to link application throughput with network performance, prompting users to plan their daily usage based on their current location and total monthly budget.

In this talk, we will cover the current challenges in the telecom industry, the need for an AIOps platform, and the mission held by telecom operators and communication service providers across the world for designing such AI frameworks, platforms, and best practices. We will see how increasing operator collaboration is helping to create, deploy, and productionize AI platforms for different AI use-cases. We will study one industrial use-case (with code), based on real-time field research, to predict network capacity. In this respect, we will investigate how deep learning networks can be trained on large volumes of data at scale (millions of network cells), and how this can help the upcoming 5G networks. We will also examine an end-to-end pipeline for hosting the scalable framework on Google Cloud. As the data volume is huge and the data must be stored in highly secured systems, we build a high-performing system with extra security features that can process millions of requests in a few milliseconds. The session highlights the parameters and metrics involved in creating an LSTM-based neural network, and discusses the challenges and key aspects involved in designing and scaling the system.
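As a flavor of the kind of model the demo builds, here is a minimal sketch of an LSTM that forecasts per-cell network load from a sliding window of past KPI measurements, using Keras as mentioned above. The window length, KPI count, layer sizes, and the synthetic data are illustrative assumptions, not the production pipeline:

```python
# Minimal sketch: LSTM forecasting next-interval capacity demand per cell.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 24   # hours of history per sample (assumed)
N_KPIS = 4    # e.g. throughput, active users, utilization, latency (assumed)

# Synthetic stand-in for real per-cell KPI time series.
X = np.random.rand(1000, WINDOW, N_KPIS).astype("float32")
y = np.random.rand(1000, 1).astype("float32")  # next-hour capacity demand

model = keras.Sequential([
    layers.Input(shape=(WINDOW, N_KPIS)),
    layers.LSTM(64),                       # encode the KPI window
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # predicted load for next interval
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(X[:1], verbose=0).shape)  # (1, 1)
```

In practice the same architecture would be trained with distributed strategies across millions of cells; this sketch only shows the per-model shape of the problem.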


Outline/Structure of the Demonstration

The presentation is structured as:

1. Current and Future Challenges in the Telecom industry - 2 mins

2. Machine Learning use-cases related to network capacity and outage - 2 mins

3. Scalable ML architecture with Distributed Training - 6 mins

4. Neural Network parameters and design to solve one industrial use-case - 5 mins

5. Demo - 4 mins

Learning Outcome

Understanding of :

1. How to overcome telecom industry challenges with different AI solutions and an AIOps framework

2. Basic understanding of the Google Cloud platform to build an end-to-end scalable ML pipeline

3. Modeling one industrial use-case with deep learning (with code, using Python Keras)

4. How to improve ML model training and accuracy

Target Audience

ML and Deep Learning enthusiasts, Cloud Engineers and Experts, Managers, and anyone interested in Data Science and the Telecom domain

Prerequisites for Attendees

1. Basic understanding of Cloud components

2. Basic understanding of Machine Learning/Data Science


  • Ravi Ranjan

    Ravi Ranjan - Deep Reinforcement Learning Based RecSys Using Distributed Q Table

    20 Mins

    Recommendation systems (RecSys) are the core engine for any personalized experience on eCommerce and online media websites. Most companies leverage RecSys to increase user interaction, to enrich shopping potential, and to generate upsell & cross-sell opportunities. Amazon uses recommendations as a targeted marketing tool throughout its website, contributing 35% of its total revenue generation [1]. Netflix users watch ~75% of the recommended content and artwork [2]. Spotify employs a recommendation system to update personal playlists every week so that users won't miss newly released music by artists they like; this helped Spotify grow its monthly users from 75 million to 100 million [3]. YouTube's personalized recommendations help users find relevant videos quickly and easily, accounting for around 60% of video clicks from the homepage [4].

    In general, RecSys generates recommendations based on users' browsing history and preferences, past purchases, and item metadata. Most existing recommendation systems are based on three paradigms: collaborative filtering (CF) and its variants, content-based recommendation engines, and hybrid engines that combine content-based and CF approaches or exploit additional user information. However, they suffer from limitations such as rapidly changing user data and preferences, static recommendations, grey sheep, cold start, and malicious users.

    Classical RecSys algorithms like content-based recommendation perform well on item-to-item similarity, but will only recommend items related to one category and may not recommend anything in other categories that the user has never viewed before. Collaborative filtering solves this problem by exploiting users' behavior and preferences over items when recommending to new users. However, collaborative filtering suffers from drawbacks such as cold start, popularity bias, and sparsity. Classical recommendation models also treat recommendation as a static process. Reinforcement learning (RL) can address static recommendation on rapidly changing user data: an RL-based RecSys captures the user's temporal intentions and responds promptly. However, as the user-action and item matrices grow, it becomes difficult to provide recommendations with tabular RL. Deep RL solutions like actor-critic and deep Q-networks overcome the aforementioned drawbacks.

    Present systems suffer from two limitations: first, they consider recommendation a static procedure and ignore the dynamic, interactive nature between users and the recommender system; second, most works focus on the immediate feedback of recommended items and neglect long-term rewards. We propose a recommendation system based on Q-learning. We use an ε-greedy policy combined with Q-learning, a powerful reinforcement learning method that handles those issues proficiently and gives the customer more chances to explore new pages or products that are not so popular. When applying reinforcement learning (RL) to real-world problems, both the state space and the action space are usually very large. Therefore, to address these challenges, we propose a multiple/distributed Q-table approach that can deal with a large state-action space and aids in applying the Q-learning algorithm to recommendation over a huge state-action space.
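The idea above can be sketched in a few lines: ε-greedy Q-learning with the Q table sharded ("distributed") across several dictionaries keyed by a hash of the state, so no single table must hold the full state-action space. The states, actions, rewards, and hyperparameters here are toy stand-ins, not the proposed system itself:

```python
# Illustrative sketch: distributed-Q-table epsilon-greedy Q-learning.
import random
from collections import defaultdict

N_SHARDS = 4
ACTIONS = ["item_a", "item_b", "item_c"]   # candidate recommendations (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# One Q table per shard; each maps state -> {action: value}.
shards = [defaultdict(lambda: {a: 0.0 for a in ACTIONS}) for _ in range(N_SHARDS)]

def table_for(state):
    # Route each state to one shard; shards could live on separate workers.
    return shards[hash(state) % N_SHARDS]

def choose_action(state):
    if random.random() < EPSILON:          # explore less popular items
        return random.choice(ACTIONS)
    q = table_for(state)[state]
    return max(q, key=q.get)               # exploit the current estimate

def update(state, action, reward, next_state):
    q = table_for(state)[state]
    next_q = table_for(next_state)[next_state]
    # Standard Q-learning update toward reward + discounted best next value.
    q[action] += ALPHA * (reward + GAMMA * max(next_q.values()) - q[action])

# Tiny simulated interaction loop: the user only "clicks" item_b.
random.seed(0)
for _ in range(200):
    a = choose_action("user_page_1")
    update("user_page_1", a, 1.0 if a == "item_b" else 0.0, "user_page_2")

q = table_for("user_page_1")["user_page_1"]
print(max(q, key=q.get))                   # greedy recommendation after training
```

Sharding by state hash keeps each table small; the trade-off is that an update touching two states may touch two shards, which a real distributed implementation would need to coordinate.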


    1. "":
    2. "":
    3. "":
    4. "":
    5. "Deep Reinforcement Learning based Recommendation with Explicit User-Item Interactions Modelling":
    6. "Deep Reinforcement Learning for Page-wise Recommendations":
    7. "Deep Reinforcement Learning for List-wise Recommendations":
    8. "Deep Reinforcement Learning Based RecSys Using Distributed Q Table":
  • Anupam Ranjan

    Anupam Ranjan / Yash Raj - SQUAD application through Knowledge Graph for COVID-19 Literature

    20 Mins

    There are numerous documents and research papers being published on COVID-19, and doctors are not able to absorb the content of all this literature. It has become a real challenge to extract relevant information in a short span of time.

    A Knowledge Graph along with a SQuAD-style question-answering application can help process multiple documents and extract precise information from a set of documents quickly. This will be a very handy application for healthcare professionals to extract relevant information without going into detail on each document.

    The session will demonstrate the following:

    a) Text Processing of COVID-19 literature

    b) Named Entity Extraction from the documents using BERT/spaCy

    c) Building a Knowledge Graph of the documents

    d) Building question-answer application
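Steps (b)-(d) above can be sketched end to end in miniature. In this toy version the entities that would normally come from a BERT/spaCy NER pass are hardcoded, the "knowledge graph" is a plain list of (subject, relation, object) triples, and the QA step is a simple lookup over those triples; all entity and relation names are illustrative:

```python
# Toy sketch of NER output -> knowledge graph -> question answering.
from collections import defaultdict

# (b) stand-in for entities/relations extracted from COVID-19 abstracts
triples = [
    ("remdesivir", "evaluated_for", "COVID-19"),
    ("COVID-19", "caused_by", "SARS-CoV-2"),
    ("SARS-CoV-2", "binds_to", "ACE2"),
]

# (c) index the graph by subject for fast traversal
by_subject = defaultdict(list)
for s, r, o in triples:
    by_subject[s].append((r, o))

# (d) answer "what does X <relation>?" questions by graph lookup
def answer(subject, relation):
    return [o for r, o in by_subject.get(subject, []) if r == relation]

print(answer("SARS-CoV-2", "binds_to"))  # ['ACE2']
```

The real session replaces each stage with a learned component (BERT/spaCy for extraction, a graph store for the triples, and a SQuAD-trained reader for answering), but the data flow is the same.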