Real-time Anomaly Detection in Network KPIs using Time Series
How to accurately detect Key Performance Indicator (KPI) anomalies is a critical issue in cellular network management. In this talk I shall introduce CNR (Cellular Network Regression), a unified performance anomaly detection framework for KPI time-series data. CNR realizes simple statistical modelling and machine-learning-based regression for anomaly detection; in particular, it takes into account seasonality and trend components and supports automated prediction-model retraining based on prior detection results. I demonstrate here how CNR detects two types of anomalies of practical interest, namely sudden drops and correlation changes, based on a large-scale real-world KPI dataset collected from a metropolitan LTE network. I explore various prediction algorithms and feature selection strategies, and provide insights into how regression analysis can make automated and accurate KPI anomaly detection viable.
Index Terms—anomaly detection, NPAR (Network Performance Analysis)
The continuing advances of cellular network technologies make high-speed mobile Internet access a norm. However, cellular networks are large and complex by nature, and hence production cellular networks often suffer from performance degradations or failures due to various reasons, such as background interference, power outages, malfunctions of network elements, and cable disconnections. It is thus critical for network administrators to detect and respond to performance anomalies of cellular networks in real time, so as to maintain network dependability and improve subscriber service quality. To pinpoint performance issues in cellular networks, a common practice adopted by network administrators is to monitor a diverse set of Key Performance Indicators (KPIs), which provide time-series data measurements that quantify specific performance aspects of network elements and resource usage. The main task of network administrators is to identify any KPI anomalies, which refer to unexpected patterns that occur at a single time instant or over a prolonged time period.
Today’s network diagnosis still mostly relies on domain experts to manually configure anomaly detection rules; such a practice is error-prone, labour-intensive, and inflexible. Recent studies propose to use (supervised) machine learning for anomaly detection in cellular networks.
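For illustration, the sketch below shows one simple way a regression-based detector with seasonality and trend features can flag sudden drops in a KPI series. CNR itself is not public, so the model choice, the hour/day features, and the 3-sigma threshold here are assumptions for illustration, not the framework's actual implementation.

```python
# Minimal sketch of regression-based KPI anomaly detection with seasonality,
# in the spirit of the CNR framework described above (the actual CNR code is
# not public; features, model, and threshold here are assumptions).
import pandas as pd
from sklearn.linear_model import Ridge

def detect_sudden_drops(kpi: pd.Series, threshold_sigma: float = 3.0) -> pd.Series:
    """Flag time instants where the KPI falls far below its predicted value.

    `kpi` is assumed to be a numeric series with a DatetimeIndex.
    """
    df = pd.DataFrame({"y": kpi.values}, index=kpi.index)
    # Seasonality features: hour-of-day and day-of-week (one-hot encoded).
    df["hour"] = df.index.hour
    df["dow"] = df.index.dayofweek
    X = pd.get_dummies(df[["hour", "dow"]].astype("category"))
    # Trend feature: elapsed time in hours since the start of the series.
    X["trend"] = (df.index - df.index[0]).total_seconds() / 3600.0

    model = Ridge(alpha=1.0).fit(X, df["y"])
    residuals = df["y"] - model.predict(X)
    sigma = residuals.std()
    # A "sudden drop" is a residual far below the prediction (one-sided test).
    return residuals < -threshold_sigma * sigma
```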
Outline/Structure of the Experience Report
Introduction: Discussion of the problem statement
Objective: Discussion of the targets that can be achieved
Sustainability and Future Scope
Real time Anomaly Detection
The audience is expected to receive an overview of the following topics:
2. What is a Network KPI and how it works
3. How anomalies can be detected using different Machine Learning models
Anybody who has an inclination towards Science and Technology
People who liked this proposal, also liked:
Subhasish Misra (Staff Data Scientist, Walmart Labs) - Causal data science: Answering the crucial ‘why’ in your analysis
Causal questions are ubiquitous in data science. For example, questions such as whether changing a feature on a website led to more traffic, or whether digital ad exposure led to incremental purchases, are deeply rooted in causality.
Randomized tests are considered to be the gold standard when it comes to getting to causal effects. However, experiments in many cases are unfeasible or unethical. In such cases one has to rely on observational (non-experimental) data to derive causal insights. The crucial difference between randomized experiments and observational data is that in the former, test subjects (e.g. customers) are randomly assigned a treatment (e.g. digital advertisement exposure). This helps curb the possibility that user response (e.g. clicking on a link in the ad and purchasing the product) across the two groups of treated and non-treated subjects is different owing to pre-existing differences in user characteristics (e.g. demographics, geo-location etc.). In essence, we can then attribute divergences observed post-treatment in key outcomes (e.g. purchase rate) as the causal impact of the treatment.
This treatment assignment mechanism that makes causal attribution possible via randomization is absent though when using observational data. Thankfully, there are scientific (statistical and beyond) techniques available to ensure that we are able to circumvent this shortcoming and get to causal reads.
The aim of this talk will be to offer a practical overview of the above aspects of causal inference, a discipline which lies at the fascinating confluence of statistics, philosophy, computer science, psychology, economics, and medicine, among others. Topics include:
- The fundamental tenets of causality and measuring causal effects.
- Challenges involved in measuring causal effects in real world situations.
- Distinguishing between randomized and observational approaches to measuring the same.
- Provide an introduction to measuring causal effects using observational data via matching and its extension, propensity score matching, with a focus on a) the intuition and statistics behind it, b) tips from the trenches, based on the speaker's experience with these techniques, and c) practical limitations of such approaches (see the sketch after this list)
- Walk through an example of how matching was applied to get to causal insights regarding effectiveness of a digital product for a major retailer.
- Finally, conclude with why having a nuanced understanding of causality is all the more important in the big data era we are in.
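To make the matching step concrete, here is a minimal sketch of propensity score matching in Python. The data frame, column names, and the plain 1:1 nearest-neighbour design are illustrative assumptions, not the speaker's implementation; a real analysis would add balance checks, calipers/trimming, and sensitivity analysis.

```python
# Minimal sketch of propensity score matching (1:1 nearest neighbour, no caliper).
# Column names and the data frame are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
import pandas as pd

def matched_effect(df: pd.DataFrame, treatment: str, outcome: str, covariates: list) -> float:
    """Estimate the average treatment effect on the treated via 1:1 matching."""
    # 1. Estimate propensity scores: P(treated | covariates).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment] == 1]
    control = df[df[treatment] == 0]

    # 2. For each treated unit, find the control unit with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]

    # 3. Compare outcomes between treated units and their matched controls.
    return float(treated[outcome].mean() - matched_controls[outcome].mean())
```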
Dr. Saptarsi Goswami (Assistant Professor, A. K. Choudhury School of IT, University of Calcutta) - Mastering feature selection: basics for developing your own algorithm
Feature selection is one of the most important processes for pattern recognition, machine learning and data mining problems. A successful feature selection method improves learning model performance and interpretability, and reduces the computational cost of the classifier through dimensionality reduction of the data. Feature selection is computationally expensive and becomes intractable even for a few hundred features. This is a relevant problem because text, image and next-generation sequencing data are all inherently high dimensional. In this talk, I will discuss a few algorithms we have developed over the last five or six years. First, we will set the context of feature selection, along with some open issues, followed by its definition and taxonomy; this will take about 20 minutes. In the next 20 minutes we will discuss a couple of research efforts where we have improved feature selection for textual data and proposed a graph-based mechanism to view feature interactions. After the talk, participants will appreciate the need for feature selection, the basic principles of feature selection algorithms, and how they can start developing their own models.
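As a point of reference for the basics covered in the talk, the sketch below shows a generic filter-style baseline (mutual information ranking with scikit-learn). It is not one of the speaker's algorithms, only the kind of rank-and-select step such methods build on; the dataset and k=10 are illustrative choices.

```python
# Generic filter-style feature selection baseline: rank features by mutual
# information with the label and keep the top 10 (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print("kept feature indices:", selector.get_support(indices=True))
print("reduced shape:", X_reduced.shape)
```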
Juan Manuel Contreras (Data Science Manager, Uber) - How to lead data science teams: The 3 D's of data science leadership
Despite the increasing number of data scientists who are asked to take on leadership roles as they grow in their careers, there are still few resources on how to lead data science teams successfully.
In this talk, I will argue that an effective data science leader has to wear three hats: Diplomat (understand the organization and their team and liaise between them), Diagnostician (figure out which organizational needs can be met by their team and how), and Developer (grow their and their team's skills as well as the organization's understanding of data science to maximize the value their team can drive).
Throughout, I draw on my experience as a data science leader both at a political party (the Democratic Party of the United States of America) and at a fintech startup (Even.com).
Talk attendees will learn a framework for how to manage data scientists and lead a data science practice. In turn, attendees will be better prepared to tackle new or existing roles as data science leaders or be better able to identify promising candidates for these roles.
Joy Mustafi (Founder and President, MUST Research) / Aditya Bhattacharya (Lead AI/ML Engineer, West Pharmaceuticals) - Person Identification via Multi-Modal Interface with Combination of Speech and Image Data
Having multiple modalities in a system gives more affordance to users and can contribute to a more robust system. Having more also allows for greater accessibility for users who work more effectively with certain modalities. Multiple modalities can be used as backup when certain forms of communication are not possible. This is especially true in the case of redundant modalities, in which two or more modalities are used to communicate the same information. Certain combinations of modalities can add to the expression of a computer-human or human-computer interaction because each modality may be more effective at expressing one form or aspect of information than others. For example, MUST researchers are working on a personalized humanoid equipped with various types of input devices and sensors that allow it to receive information from humans. These modalities are interchangeable, offer a standardized method of communication with the computer, afford practical adjustments to the user, provide a richer interaction depending on the context, and help implement a robust system with features such as keyboard, pointing device, touchscreen, computer vision, speech recognition, and motion and orientation sensing.
There are six types of cooperation between modalities, and they help define how a combination or fusion of modalities work together to convey information more effectively.
- Equivalence: information is presented in multiple ways and can be interpreted as the same information
- Specialization: when a specific kind of information is always processed through the same modality
- Redundancy: multiple modalities process the same information
- Complementarity: multiple modalities take separate information and merge it
- Transfer: a modality produces information that another modality consumes
- Concurrency: multiple modalities take in separate information that is not merged
Computer - Human Modalities
Computers utilize a wide range of technologies to communicate and send information to humans:
- Vision - computer graphics typically through a screen
- Audition - various audio outputs
Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.
Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, services, as well as with people.
Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.
Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
Multi-Modal Interaction: https://www.youtube.com/watch?v=jQ8Gq2HWxiA
Gesture Detection: https://www.youtube.com/watch?v=rDSuCnC8Ei0
Speech Recognition: https://www.youtube.com/watch?v=AewM3TsjoBk
Assignment (Hands-on Challenge for Attendees)
Real-time multi-modal access control system for authorized access to work environment - All the key concepts and individual steps will be demonstrated and explained in this workshop, and the attendees need to customize the generic code or approach for this assignment or hands-on challenge.
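As a rough illustration of how the hands-on challenge might fuse modalities, the sketch below combines face and voice verification scores with weighted late fusion. The verification functions, weights, and threshold are hypothetical placeholders, not the workshop code.

```python
# Toy late-fusion sketch for a multi-modal (face + voice) access check.
# verify_face / verify_voice are hypothetical placeholders for real verification
# models; the weights and the 0.6 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FusionConfig:
    face_weight: float = 0.6
    voice_weight: float = 0.4
    threshold: float = 0.6

def verify_face(image_path: str) -> float:
    """Placeholder: return a similarity score in [0, 1] against enrolled faces."""
    raise NotImplementedError

def verify_voice(audio_path: str) -> float:
    """Placeholder: return a similarity score in [0, 1] against enrolled voices."""
    raise NotImplementedError

def grant_access(image_path: str, audio_path: str, cfg: FusionConfig = FusionConfig()) -> bool:
    # Weighted late fusion: combine the two modality scores into one decision.
    score = cfg.face_weight * verify_face(image_path) + cfg.voice_weight * verify_voice(audio_path)
    return score >= cfg.threshold
```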
Suvro Shankar Ghosh (Data Scientist, Atos Global IT Solutions and Services Private Limited) - Learning Entity Embeddings from Knowledge Graphs
- Over a period of time, a lot of knowledge bases have evolved. A knowledge base is a structured way of storing information, typically as (Subject, Predicate, Object) triples.
- Such knowledge bases are an important resource for question answering and other tasks. But they are often incomplete with respect to all the data in the world, and thereby lack the ability to reason over their discrete entities and their unknown relationships. Here we can introduce an expressive neural tensor network that is suitable for reasoning over known relationships between two entities.
- With such a model in place, we can ask questions; the model will try to predict the missing data links within the trained model and answer questions related to finding similar entities, reasoning over them, and predicting various relationship types between two entities not connected in the knowledge graph (a small embedding sketch follows this list).
- Knowledge Graph infoboxes were added to Google's search engine in May 2012
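As a much simpler stand-in for the neural tensor network mentioned above, the sketch below trains toy TransE-style entity and relation embeddings; the triples, margin ranking loss, and hyper-parameters are illustrative assumptions, not the talk's model.

```python
# Tiny TransE-style embedding model (score = -||h + r - t||) trained on toy
# triples, as a simplified stand-in for the neural tensor network idea above.
import torch
import torch.nn as nn

triples = [("paris", "capital_of", "france"), ("berlin", "capital_of", "germany")]
entities = sorted({x for h, _, t in triples for x in (h, t)})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

dim = 16
ent_emb = nn.Embedding(len(entities), dim)
rel_emb = nn.Embedding(len(relations), dim)
opt = torch.optim.Adam(list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=0.01)

def score(h, r, t):
    # Higher score = more plausible triple.
    return -torch.norm(ent_emb(h) + rel_emb(r) - ent_emb(t), dim=-1)

for _ in range(200):
    h = torch.tensor([e_idx[h] for h, _, _ in triples])
    r = torch.tensor([r_idx[r] for _, r, _ in triples])
    t = torch.tensor([e_idx[t] for _, _, t in triples])
    t_neg = t[torch.randperm(len(t))]  # corrupted tails as negative examples
    # Margin ranking loss: positive triples should outscore corrupted ones by 1.0.
    loss = torch.clamp(1.0 + score(h, r, t_neg) - score(h, r, t), min=0).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```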
What is the knowledge graph?
▶Knowledge in graph form!
▶Captures entities, attributes, and relationships
▶More specifically, the “knowledge graph” is a database that collects millions of pieces of data about keywords people frequently search for on the World Wide Web and the intent behind those keywords, based on the already available content
▶In most cases, KGs are based on Semantic Web standards and have been generated by a mixture of automatic extraction from text or structured data, and manual curation work.
▶Structured Search & Exploration
e.g. Google Knowledge Graph, Amazon Product Graph
▶Graph Mining & Network Analysis
e.g. Facebook Entity Graph
▶Big Data Integration
e.g. IBM Watson
▶Diffbot, GraphIQ, Maana, ParseHub, Reactor Labs, SpazioDati
Suvro Shankar Ghosh (Data Scientist, Atos Global IT Solutions and Services Private Limited) - Real-Time Advertising Based On Web Browsing In Telecom Domain
The following section describes the Telco-domain real-time advertising based on web browsing use case in terms of:
- Potential business benefits to earn.
- Functional use case architecture depicted.
- Data sources (attributes required).
- Analytics to be performed,
- Output to be provided and target systems to be integrated with.
This use case is part of the monetization category. The goal of the use case is to provide a kind of data mart that gives either Telecom business parties or external third parties sufficient, relevant and customized information to produce real-time advertising for Telecom end users. The target customers are all Telecom network end users.
The customization information to be delivered to advertisers is based on several dimensions:
- Customer characteristics: demographic, telco profile.
- Customer usage: Telco products or any other interests.
- Customer time/space identification: location, zoning areas, usage time windows.
Use case requirements are detailed in the description below as “Targeting methods”.
- Search Engine Targeting:
The telco will use users’ web history to track what they are looking at and to gather information about them. When a user goes onto a website, their web browsing history will reveal information about the user: what he or she searched for and where they are from (found via the IP address). The telco can then build a profile around them, allowing it to target ads to the user more specifically.
- Content and Contextual Targeting:
This is when advertisers can put ads in a specific place, based on the relative content present. This targeting method can be used across different mediums; for example, an online article about purchasing homes would have an advert associated with this context, like an insurance ad. This is achieved through an ad matching system which analyses the contents of a page or finds keywords and presents a relevant advert, sometimes through pop-ups (a tiny keyword-matching sketch follows this list).
- Technical Targeting:
This form of targeting is associated with the user’s own software or hardware status. The advertisement is altered depending on the user’s available network bandwidth, for example if a user is on their mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate.
- Time Targeting:
This type of targeting is centered around time and focuses on the idea of fitting in around people’s everyday lifestyles. For example, scheduling specific ads at a timeframe from 5-7pm, when the
- Sociodemographic Targeting:
This form of targeting focuses on the characteristics of consumers, including their age, gender, and nationality. The idea is to target users specifically, using the data collected about them, for example, targeting a male in the age bracket of 18-24. The telco will use this form of targeting by showing advertisements relevant to the user’s individual demographic profile. This can show up in the form of banner ads or commercial videos.
- Geographical and Location-Based Targeting:
This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually transfer the location through different cells.
- Behavioral Targeting:
This form of targeted advertising is centered around the activity/actions of users and is more easily achieved on web pages. Information from browsing websites can be collected, which finds patterns in users search history.
- Retargeting: This is where advertising uses behavioral targeting to produce ads that follow you after you have looked at or purchased a particular item. Retargeting is where advertisers use this information to “follow you” and try to grab your attention so you do not forget.
- Opinions, attitudes, interests, and hobbies:
Psychographic segmentation also includes opinions on gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues.
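The sketch below, referenced from the Content and Contextual Targeting item above, shows a toy keyword-overlap ad matcher. The ad inventory and tokenizer are simplified assumptions, not a production ad-serving system.

```python
# Tiny keyword-overlap ad matcher illustrating the "ad matching system" idea.
# The ad inventory and keyword sets are hypothetical examples.
import re
from collections import Counter

ADS = {
    "home_insurance": {"home", "house", "mortgage", "insurance", "property"},
    "mobile_plan": {"phone", "mobile", "data", "roaming", "plan"},
}

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def best_ad(page_text: str) -> str:
    """Pick the ad whose keyword set overlaps most with the page content."""
    words = tokenize(page_text)
    scores = {ad: sum(words[w] for w in kws) for ad, kws in ADS.items()}
    return max(scores, key=scores.get)

print(best_ad("Thinking of purchasing a home? Compare mortgage rates first."))
```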
Debapriya Das (Lead Machine Learning @ Smartech, Netcore Solutions) - AI in Martech: Solving the riddle of the 4 R's
In this digital era, when the attention span of customers is reducing drastically, it is imperative for a marketer to understand the following 4 aspects, more popularly known as "The 4 R's of Marketing", if they want to increase their ROI:
- Right Person
- Right Time
- Right Content
- Right Channel
Only when we design and send our campaigns in such a way that they reach the right customers at the right time through the right channel, telling them about things they like or are interested in, can we expect higher conversions with lower investment. This is a problem that most organizations need to solve to stay relevant in this age of high market competition.
Among all these, we will put special focus on appropriate content generation for a targeted user base using Markov-based models, and do a quick hack session (a toy sketch follows).
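As a toy stand-in for the Markov-based content generation mentioned above, the sketch below builds an order-1 Markov chain over a tiny marketing corpus; the corpus and chain order are illustrative assumptions, not the production model.

```python
# Toy order-1 Markov-chain text generator (illustrative corpus and settings).
import random
from collections import defaultdict

corpus = "buy one get one free offer ends tonight buy now and save big tonight"
words = corpus.split()

# Build order-1 transition table: word -> list of possible next words.
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8, seed: int = 42) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: stop early
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("buy"))
```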
The time breakup can be:
5 mins: Difference between Martech and traditional marketing. The 4R's of marketing and why solving for them is crucial
5 mins: What is Smart Segments and how to solve for it, with a short demo
5 mins: How marketers use output from Smart Segments to execute targeted campaigns
5 mins: What is STO, how it can be solved and what is the performance uplift seen by clients when they use it
5 mins: What is Channel Optimization, how it can be solved and what is the performance uplift seen by clients when they use it
5 mins: Why sending the right message to customers is crucial, and introduction to appropriate content creation
15 mins: Covering different Text generation nuances, and a live demo with walk through of a toy code implementation
Pushker Ravindra (Data Analytics Lead, Monsanto/Bayer) - Data Science Best Practices for R and Python
How many times did you feel that you were not able to understand someone else’s code or sometimes not even your own? It’s mostly because of bad/no documentation and not following the best practices. Here I will be demonstrating some of the best practices in Data Science, for R and Python, the two most important programming languages in the world for Data Science, which would help in building sustainable data products.
- Integrated Development Environment (RStudio, PyCharm)
- Coding best practices (Google’s R Style Guide and Hadley’s Style Guide, PEP 8)
- Linter (lintR, Pylint)
- Documentation – Code (Roxygen2, reStructuredText), README/Instruction Manual (RMarkdown, Jupyter Notebook)
- Unit testing (testthat, unittest; a small unittest example follows this list)
- Version control (Git)
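For the unit-testing item above, a minimal unittest example might look like the following (testthat would be the R analogue); the function under test is hypothetical, chosen only to show the test structure.

```python
# Small unittest example in the spirit of the "Unit testing" best practice.
# The normalize() function under test is a hypothetical illustration.
import unittest

def normalize(values):
    """Scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class TestNormalize(unittest.TestCase):
    def test_range(self):
        result = normalize([2, 4, 6])
        self.assertEqual(min(result), 0.0)
        self.assertEqual(max(result), 1.0)

    def test_constant_input(self):
        self.assertEqual(normalize([5, 5]), [0.0, 0.0])

if __name__ == "__main__":
    unittest.main()
```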
These best practices significantly reduce technical debt in the long term, foster more collaboration, and promote the building of more sustainable data products in any organization.
Siboli Mukherjee (Data Analyst, Vodafone Idea Ltd) - AI in Telecommunication: An Obstacle or Opportunity?
“Alexa, launch Netflix!”
No longer limited to providing basic phone and Internet service, the telecom industry is at the epicentre of technological growth, led by its mobile and broadband services in the Internet of Things (IoT) era. This growth is expected to continue. The driver for this growth? Artificial intelligence (AI).
Artificial intelligence applications are revolutionizing the way telecoms operate, optimize and provide service to their customers.
Today’s communications service providers (CSPs) face increasing customer demands for higher quality services and better customer experiences (CX). Telecoms are addressing these opportunities by leveraging the vast amounts of data collected over the years from their massive customer base. This data is culled from devices, networks, mobile applications, geolocations, detailed customer profiles, services usage and billing data.
Telecoms are harnessing the power of AI to process and analyse these huge volumes of Big Data in order to extract actionable insights to provide better customer experiences, improve operations, and increase revenue through new products and services.
With Gartner forecasting that 20.4 billion connected devices will be in use worldwide by 2020, more and more CSPs are jumping on the bandwagon, recognizing the value of artificial intelligence applications in the telecommunications industry.
Forward-thinking CSPs have focused their efforts on four main areas where AI has already made significant inroads in delivering tangible business results: Network optimization, preventive maintenance, Virtual Assistants, and robotic process automation (RPA)
AI is essential for helping CSPs build self-optimizing networks (SONs), where operators have the ability to automatically optimize network quality based on traffic information by region and time zone. Artificial intelligence applications in the telecommunications industry use advanced algorithms to look for patterns within the data, enabling telecoms to both detect and predict network anomalies, and allowing operators to proactively fix problems before customers are negatively impacted.
Some popular AI solutions for telecoms are ZeroStack’s ZBrain Cloud Management, which analyses private cloud telemetry storage and use for improved capacity planning, upgrades and general management; Aria Networks, an AI-based network optimization solution that counts a growing number of Tier-1 telecom companies as customers, and Sedona Systems’ NetFusion, which optimizes the routing of traffic and speed delivery of 5G-enabled services like AR/VR. Nokia launched its own machine learning-based AVA platform, a cloud-based network management solution to better manage capacity planning, and to predict service degradations on cell sites up to seven days in advance.
AI-driven predictive analytics are helping telecoms provide better services by utilizing data, sophisticated algorithms and machine learning techniques to predict future results based on historical data. This means telecoms can use data-driven insights to monitor the state of equipment, predict failure based on patterns, and proactively fix problems with communications hardware, such as cell towers, power lines, data centre servers, and even set-top boxes in customers’ homes.
In the short term, network automation and intelligence will enable better root cause analysis and prediction of issues. Long term, these technologies will underpin more strategic goals, such as creating new customer experiences and dealing efficiently with business demands. An innovative solution by AT&T is using AI to support its maintenance procedures: the company is testing a drone to expand its LTE network coverage and to utilize the analysis of video data captured by drones for tech support and infrastructure maintenance of its cell towers. Preventive maintenance is not only effective on the network side, but on the customer’s side as well. Dutch telecom KPN analyses the notes generated by its call centre agents, and uses the insights generated to make changes to the interactive voice response (IVR) system.
Conversational AI platforms, known as virtual assistants, have learned to automate and scale one-on-one conversations so efficiently that they are projected to cut business expenses by as much as $8 billion in the next five years. Telecoms have turned to virtual assistants to help contend with the massive number of support requests for installation, setup, troubleshooting and maintenance, which often overwhelm customer support centres. Using AI, telecoms can implement self-service capabilities that instruct customers how to install and operate their own devices.
Vodafone introduced its new chatbot, TOBi, to handle a range of customer service-type questions. The chatbot scales responses to simple customer queries, thereby delivering the speed that customers demand. Nokia’s virtual assistant MIKA suggests solutions for network issues, leading to a 20% to 40% improvement in first-time resolution.
Robotic process automation (RPA)
CSPs all have vast numbers of customers and an endless volume of daily transactions, each susceptible to human error. Robotic Process Automation (RPA) is a form of business process automation technology based on AI. RPA can bring greater efficiency to telecommunications functions by allowing telecoms to more easily manage their back office operations and the large volumes of repetitive and rules-based processes. By streamlining execution of once complex, labor-intensive and time-consuming processes such as billing, data entry, workforce management and order fulfillment, RPA frees CSP staff for higher value-add work.
According to a survey by Deloitte, 40% of Telecom, Media and Tech executives say they have garnered “substantial” benefits from cognitive technologies, with 25% having invested $10 million or more. More than three-quarters expect cognitive computing to “substantially transform” their companies within the next three years.
Artificial intelligence applications in the telecommunications industry are increasingly helping CSPs manage, optimize and maintain not only their infrastructure, but their customer support operations as well. Network optimization, predictive maintenance, virtual assistants and RPA are examples of use cases where AI has impacted the telecom industry, delivering an enhanced CX and added value for the enterprise overall.
Kaushik Dey (Head of ML and Big Data Practice, Ericsson) - Algorithms at the Edge leveraging decentralized learning
The problem of network behavior prediction has been an ongoing study by researchers for quite a while now. Network behavior typically exhibits a complex sequential pattern and is often difficult to predict. Nowadays there are several techniques to predict the degradation in Network KPIs like throughput, latency, etc., using various machine learning techniques like Deep Neural Networks, where the initial layers have learnt to map raw features like performance counter measurements, weather, and system configuration details into a feature space where classification by the final layers can be performed.
Given that the initial number of counters (which constitute the dimensions) is substantial (more than 2,000), the problem requires a huge amount of data to train the Deep Neural Networks. This needs resources and time and, more importantly, requires provisioning huge amounts of data for every trial. Given that each node generates a huge amount of data (data on more than 2,000 counters generated at 15-minute intervals for each of the 6 cells in an eNodeB) and the data needs to be transported from several hundred eNodeBs to one central data center, it requires a very fat data pipe and consequently a huge investment to enable a predictive fault-prediction apparatus across the network.
The alternative is to have compute infrastructure at the node and move the intelligence to the edge. However, given the huge amount of data generated at a single node, having compute at each node was proving to be expensive. Nowadays this compute requirement at the node can be reduced through the use of transfer learning. The other challenge is sharing the intelligence and developing a system which is collectively intelligent across nodes.
Network topology, climate features and user patterns vary across regions and service providers, and hence a unique model is often necessary to serve each node. However, in order to deal with unseen patterns, intelligence from other nodes can be useful, which leads us to building a global model, which again leads to the challenge of the fat data-pipeline requirement that makes it commercially less attractive.
In order to get around this challenge, federated learning is used in combination with transfer learning.
This presentation details deep learning architectures that combine federated learning with transfer learning to enable construction and updating of global models. These global models absorb intelligence from the nodes but are constructed by a consensus mechanism whereby the weights (and changes to the weights) of local models are shared with the global model. The local models are in turn periodically refreshed once a global model update iteration is complete. Furthermore, updates to local models are only applied in the final layers, while the initial layers are frozen; this also reduces the compute requirement at the node (a schematic sketch follows).
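A schematic sketch of how such an architecture might look in PyTorch follows; the model sizes, number of nodes, and simple head-averaging consensus are assumptions for illustration, not Ericsson's implementation.

```python
# Schematic sketch of federated + transfer learning on an LSTM KPI predictor:
# each node fine-tunes only the final layer of a shared model, and the global
# model is updated by averaging those final-layer weights (FedAvg-style).
# Model sizes, node count, and the averaging scheme are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class KpiPredictor(nn.Module):
    def __init__(self, n_features: int = 32, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # frozen "initial layers"
        self.head = nn.Linear(hidden, 1)                           # locally fine-tuned final layer

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def local_update(global_model, x, y, epochs: int = 1):
    """Fine-tune only the head on a node's local data; LSTM layers stay frozen."""
    model = copy.deepcopy(global_model)
    for p in model.lstm.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.head.state_dict()

def federated_round(global_model, node_batches):
    """Average the final-layer weights returned by each node (simple consensus)."""
    head_states = [local_update(global_model, x, y) for x, y in node_batches]
    avg = {k: torch.stack([s[k] for s in head_states]).mean(dim=0) for k in head_states[0]}
    global_model.head.load_state_dict(avg)
    return global_model

# Toy usage: three nodes, each with a small batch of sequences (seq_len=10, 32 counters).
nodes = [(torch.randn(8, 10, 32), torch.randn(8, 1)) for _ in range(3)]
model = federated_round(KpiPredictor(), nodes)
```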
The above principles are being implemented as a first-of-a-kind implementation and have proved to be a success across multiple customers in delivering a compelling ML-enabled fault prediction and self-healing mechanism, while keeping the investment in infrastructure lower than would have been required with traditional Deep Learning architectures.
This talk will specifically detail how the above principles of federated and transfer learning are applied to LSTMs.