Hybrid Classification Model with Topic Modelling and LSTM Text Classifier to Identify Key Drivers behind Incident Volume

Incident volume reduction is one of the top priorities for any large-scale service organization, along with timely resolution of incidents within the specified SLA parameters. AI and machine learning solutions can help IT service desks manage the incident influx as well as resolution cost by

  • Identifying major topics from incident descriptions, and planning resource allocation and skill-sets accordingly
  • Producing knowledge articles and resolution summaries of similar incidents raised earlier
  • Analyzing root causes of incidents, and introducing processes and an automation framework to predict and resolve them proactively

We will look at the different approaches taken to combine standard document clustering algorithms such as Latent Dirichlet Allocation (LDA) and doc2vec with text classification to produce easily interpretable document clusters with semantically coherent text representations. This approach helped the IT operations of a large FMCG client identify the key drivers/topics contributing to incident volume and take the necessary action on them.
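To ground the topic-modelling step, here is a minimal sketch using gensim's LDA over a few toy incident descriptions; the sample texts, preprocessing, and num_topics=2 are illustrative assumptions, not the client's production configuration:

```python
# Minimal LDA sketch over toy incident descriptions (assumed data;
# preprocessing and hyperparameters are illustrative only).
from gensim import corpora
from gensim.models import LdaModel
from gensim.utils import simple_preprocess

incidents = [
    "User unable to login to SAP after password reset",
    "Printer queue stuck on floor 3, jobs not releasing",
    "VPN connection drops intermittently for remote users",
]

# Tokenize and drop very short tokens; in practice, domain-specific
# stop-words (ticket boilerplate, host names) would also be removed here.
docs = [simple_preprocess(text, min_len=3) for text in incidents]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=42)

# Inspect topic-to-word distributions and per-document topic probabilities.
for topic_id, words in lda.print_topics():
    print(topic_id, words)
for bow in corpus:
    print(lda.get_document_topics(bow))
```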

 
 

Outline/Structure of the Case Study

1. Baseline Text Classification solution for Incident Root Cause Identification (a baseline sketch follows this outline)

2. Challenges with manual labelling of large-scale text data - sampling error, ambiguity in manual labelling, and the labour-intensive nature of the task

3. Topic Modelling with Latent Dirichlet Allocation to cluster Incidents

4. Limitations of Topic Modelling techniques - domain-specific stop-words, semi-structured incident descriptions, and topic-to-word distributions that do not logically represent the topic

5. Hybrid Classifier with Distance-based Sampling (a sampling sketch follows this outline)

6. Auto label mapping for generating training data at volume

7. LSTM Model for Incident Classification (a model sketch follows this outline)
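The sketches below show plausible shapes for three of the steps above. For item 1, a hypothetical baseline - TF-IDF features with logistic regression over manually labelled root-cause categories - since the abstract does not fix the baseline's exact form, and the labels here are illustrative, not the client taxonomy:

```python
# Hypothetical baseline classifier: TF-IDF + logistic regression over
# manually labelled incidents (labels are illustrative examples).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "User unable to login to SAP after password reset",
    "Printer queue stuck on floor 3, jobs not releasing",
    "VPN connection drops intermittently for remote users",
    "SAP GUI crashes when opening transaction VA01",
]
labels = ["access", "hardware", "network", "application"]

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)
print(baseline.predict(["Cannot connect to VPN from home office"]))
```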
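For item 5, one plausible reading of distance-based sampling: per topic, keep the documents with the highest topic-match probability (i.e. closest to the topic "centre") as seed samples for labelling. Here doc_topic_probs is a stand-in for LDA output such as lda.get_document_topics, and TOP_K is a placeholder:

```python
# Hypothetical distance-based sampling: for each topic, keep the documents
# with the highest topic-match probability as seed examples for labelling.
# `doc_topic_probs` stands in for LDA output (doc_id -> [(topic_id, prob)]).
from collections import defaultdict

doc_topic_probs = {
    0: [(0, 0.91), (1, 0.09)],
    1: [(0, 0.15), (1, 0.85)],
    2: [(0, 0.62), (1, 0.38)],
    3: [(1, 0.97)],
}

TOP_K = 2  # illustrative number of seed samples per topic
by_topic = defaultdict(list)
for doc_id, topics in doc_topic_probs.items():
    for topic_id, prob in topics:
        by_topic[topic_id].append((prob, doc_id))

seed_samples = {
    topic_id: [doc_id for prob, doc_id in sorted(pairs, reverse=True)[:TOP_K]]
    for topic_id, pairs in by_topic.items()
}
print(seed_samples)  # topic_id -> ids of the most representative incidents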
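For item 7, a minimal Keras sketch of an LSTM incident classifier; the vocabulary size, sequence length, and label count are placeholder assumptions rather than the tuned production configuration:

```python
# Minimal LSTM text-classifier sketch (Keras); all sizes are placeholders.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 20000  # assumed vocabulary size after tokenization
MAX_LEN = 100       # assumed padded incident-description length
NUM_LABELS = 10     # assumed number of auto-mapped topic labels

model = Sequential([
    Embedding(VOCAB_SIZE, 128),
    LSTM(64),
    Dense(NUM_LABELS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch just to show the expected shapes: integer token ids in,
# one auto-mapped label id per incident out.
X = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_LABELS, size=(32,))
model.fit(X, y, epochs=1, verbose=0)
```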

Learning Outcome

1. NLP: Challenges of manually sampling large-scale text data and how to handle them

2. NLP: Pros and cons of LDA Topic Modelling, with data examples

3. NLP: Text-processing issues encountered while processing real-life enterprise data and how to handle them

4. Unsolved problems, such as parsing of structure in the text that creates unnecessary topics with high topic-match probability - open discussion

Target Audience

Data Scientists working on NLP, especially on Topic Modelling problems

Prerequisites for Attendees

Basic understanding of NLP, especially Topic Modelling

Supervised vs Unsupervised Learning

