An attempt to classify AI use cases by problem class

There are many attempts on the internet to classify and structure the various AI techniques, produced by a variety of sources with specific interests in this emerging market, and the fact that some new technologies make use of multiple techniques does not make it any easier to provide an easy, top-down access and guideline through AI for business decision makers. Most sources structure the AI techniques by their core ability (e.g. supervised vs. unsupervised learning), but even this is sometimes controversial (e.g. for genetic algorithms). The approach taken here is to find groups of use cases that represent similar problem-solving strategies (just like distinguishing "search" from "sort" without reference to a particular algorithm like binary search or qsort). Of course, most AI techniques are combinations, but each with a different focus.

There are many different criteria by which use cases can be clustered, and these criteria determine how well, and whether at all, the above objectives can be achieved. The target is to find “natural” classes of problems that, in an abstract way, apply to all the corresponding use cases. Since the clustering is used to determine which AI techniques are applicable, the classes should correspond to the typical characteristics of AI techniques.

| Problem class | Core problem description | Sample use cases | Key measures | AI techniques |
|---|---|---|---|---|
| Normalization | Pre-process and convert unstructured data into structured data (patterns) | big data pre-processing; sample normalization (sound, face images, …); triggered time sequences; feature extraction | conversion quality | — |
| Clustering | Detect pattern accumulations in a data set | customer segment analysis; optical skin cancer analysis; music popularity analysis | inter- and intra-cluster resolution | — |
| Feature extraction | Detect features within patterns and samples | facial expression analysis (eyes and mouth); scene analysis & surveillance (people identification) | accuracy; completeness | — |
| Recognition | Detect a pattern in a large set of samples | image/face recognition; speaker recognition; natural language recognition; associative memory | accuracy; recognition speed; learning or storage speed; capacity | — |
| Generalization | Interpolation and extrapolation of feature patterns in a pattern space | adaptive linear feature interpolation; fuzzy robot control/navigation in unknown terrain | accuracy; prediction pattern range | Kohonen maps (SOM, SOFM); any backpropagation NN; fuzzy logic systems |
| Prediction | Predict future patterns (e.g. based on past experience, i.e. observed sequences of patterns) | stock quote analysis; heart attack prevention; next-best-action machines; weather/storm forecast; pre-fetching in CPUs | accuracy; prediction time range | — |
| Optimization | Optimize a given structure (pattern) according to a fitness or energy function | (bionic) plane or ship construction; agricultural fertilization optimization; genetic programming | convergence; detection of local/global optimum; (heaviness) cost of optimization | — |
| Conclusion | Detect or apply a (correlative) rule in a data set | QM correlation analysis; next-best-action machines | consistency; completeness | rule-based systems; expert systems |
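
As an illustration of how a problem class ties a use case to suitable techniques and key measures, here is a minimal sketch of the Clustering class applied to customer segment analysis. The data and feature names are made up for illustration; the silhouette score stands in for the "inter- and intra-cluster resolution" measure above.

```python
# Minimal sketch of the "Clustering" problem class: customer segment
# analysis with k-means, evaluated by the silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# toy customer patterns: [monthly_spend, support_calls, years_as_customer]
low  = rng.normal([50, 1, 2], 5.0, size=(100, 3))
high = rng.normal([200, 5, 8], 5.0, size=(100, 3))

X = StandardScaler().fit_transform(np.vstack([low, high]))  # normalization step
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("silhouette:", silhouette_score(X, labels))  # close to 1 = well separated
```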

 
 

Outline/Structure of the Talk

  • AI technique
  • AI technology
  • AI use case
  • AI use case class

Learning Outcome

Benefits

Clustering AI use cases according to underlying (abstract) core problems has the following benefits:

  • development of class specific and reusable solutions that are applicable for each use case of that class
  • ability to apply the right AI techniques and solutions to a given use case by classifying it
  • determine the pre-requisites and limitations of AI techniques and solutions for a given use case
  • early understanding of realistic objectives (and risk to overrate AI capabilities)
  • apply the AI results from similar use cases (and even classes) using transfer learning
  • develop a deeper understanding of the use case per se and its differentiators from others
  • create greater economies of scale and more cost-efficient use of AI solutions
  • speed up the application of AI for new use cases

Target Audience

Data scientists and practitioners in the NLP, deep learning and machine learning domains

Prerequisites for Attendees

Participants are expected to know what AI, machine learning and deep learning are, plus some basics of the data science lifecycle, including data, features, modeling and evaluation.

Submitted 1 week ago

Public Feedback

  • Dr. Vikas Agrawal ~ 1 day ago:

    Dear Suvro: The learning objective of the talk is not clear to me. Also, could you please consider including a video of your talk or a recorded introduction to the topic?

    Warm Regards

    Vikas



    Suvro Shankar Ghosh - Real-Time Advertising Based On Web Browsing In Telecom Domain

    45 Mins
    Case Study
    Intermediate

    The following section describes the telco-domain real-time advertising based on web browsing use case in terms of:

    • potential business benefits to earn
    • functional use case architecture
    • data sources (attributes required)
    • analytics to be performed
    • output to be provided and target systems to be integrated with

    This use case is part of the monetization category. Its goal is to provide a kind of data mart that gives either telecom business parties or external third parties sufficient, relevant and customized information to produce real-time advertising for telecom end users. The customer targets are all telecom network end users.

    The customization information to be delivered to advertisers is based on several dimensions:

    • Customer characteristics: demographic, telco profile.
    • Customer usage: Telco products or any other interests.
    • Customer time/space identification: location, zoning areas, usage time windows.

    Use case requirements are detailed in the description below as “targeting methods”:

    1. Search Engine Targeting:

    The telco will use users' web history to track what they are looking at and to gather information about them. When a user goes onto a website, their web browsing history reveals information about the user: what he or she searched and where they are from (found via the IP address). The telco can then build a profile around them, allowing it to target ads to the user more specifically.

    2. Content and Contextual Targeting:

    This is when advertisers can put ads in a specific place, based on the relative content present. This targeting method can be used across different mediums: for example, an online article about purchasing homes would carry an advert associated with this context, such as an insurance ad. This is achieved through an ad-matching system which analyses the contents of a page or finds keywords and presents a relevant advert, sometimes through pop-ups.
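
    As a rough illustration of such an ad-matching system, here is a minimal sketch; the ad inventory, keywords and scoring are invented for illustration, and a production system would use proper NLP rather than bag-of-words overlap.

```python
# Minimal contextual ad matching: score ads by keyword overlap with the
# page content. Ad inventory and keywords are invented for illustration.
ADS = {
    "home_insurance": {"home", "house", "mortgage", "insurance", "purchase"},
    "travel_deals":   {"flight", "hotel", "vacation", "travel"},
}

def match_ad(page_text: str) -> str:
    words = set(page_text.lower().split())
    scores = {ad: len(words & keywords) for ad, keywords in ADS.items()}
    return max(scores, key=scores.get)        # ad with the most keyword hits

print(match_ad("Tips for your first home and getting a mortgage"))
# -> home_insurance
```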

    3. Technical Targeting:

    This form of targeting is associated with the user's own software or hardware status. The advertisement is altered depending on the user's available network bandwidth; for example, if a user is on a mobile phone with a limited connection, the ad delivery system will display a smaller version of the ad for a faster data transfer rate.

    4. Time Targeting:

    This type of targeting is centered around time and focuses on the idea of fitting in around people's everyday lifestyles, for example by scheduling specific ads in a 5–7 pm time frame.

    5. Sociodemographic Targeting:

    This form of targeting focuses on the characteristics of consumers, including their age, gender, and nationality. The idea is to target users specifically, using the data collected about them, for example targeting a male in the age bracket of 18–24. The telco will use this form of targeting by showing advertisements relevant to the user's individual demographic profile. This can show up in the form of banner ads or commercial videos.

    6. Geographical and Location-Based Targeting:

    This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user, and the location can usually be tracked as the user moves across different cells.

    7. Behavioral Targeting:

    This form of targeted advertising is centered around the activity/actions of users and is more easily achieved on web pages. Information from browsing websites can be collected and mined for patterns in users' search histories.

    8. Retargeting:

    Retargeting is where advertising uses behavioral targeting to produce ads that follow you after you have looked at or purchased a particular item. Advertisers use this information to 'follow you' and try to grab your attention so you do not forget.

    9. Opinions, Attitudes, Interests, and Hobbies:

    Psychographic segmentation also includes opinions on gender and politics, sporting and recreational activities, views on the environment and arts and cultural issues.


    Suvro Shankar Ghosh - Learning Entity Embeddings from Knowledge Graphs

    45 Mins
    Case Study
    Intermediate
    • Over a period of time, many knowledge bases have evolved. A knowledge base is a structured way of storing information, typically as (subject, predicate, object) triples.
    • Such knowledge bases are an important resource for question answering and other tasks, but they often suffer from incompleteness: they cannot represent all the data in the world and therefore lack the ability to reason over their discrete entities and their unknown relationships. Here we can introduce an expressive neural tensor network that is suitable for reasoning over known relationships between two entities.
    • With such a model in place, we can ask questions; the model will try to predict the missing data links within the trained model and answer questions related to finding similar entities, reasoning over them, and predicting various relationship types between two entities not connected in the knowledge graph (a minimal scoring sketch follows below).
    • Knowledge Graph infoboxes were added to Google's search engine in May 2012
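
    As a rough illustration of the scoring function behind such a neural tensor network (after Socher et al., 2013), here is a minimal numpy sketch. The parameters are randomly initialized and the entity vectors are placeholders; in practice everything is trained so that true triples score higher than corrupted ones.

```python
# Minimal sketch of a neural tensor network (NTN) score for one relation:
# score(e1, R, e2) = u_R . tanh(e1' W_R e2 + V_R [e1; e2] + b_R)
import numpy as np

d, k = 4, 2                      # embedding size, number of tensor slices
rng = np.random.default_rng(0)

W = rng.normal(size=(k, d, d))   # bilinear tensor: one d x d slice per output
V = rng.normal(size=(k, 2 * d))  # standard layer over concatenated entities
b = rng.normal(size=k)
u = rng.normal(size=k)           # relation-specific output weights

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    hidden = np.tanh(bilinear + V @ np.concatenate([e1, e2]) + b)
    return float(u @ hidden)

e_paris, e_france = rng.normal(size=d), rng.normal(size=d)  # placeholder entities
print(ntn_score(e_paris, e_france))  # higher score = more plausible triple
```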

    What is the knowledge graph?

    • Knowledge in graph form!
    • Captures entities, attributes, and relationships

    More specifically, the “knowledge graph” is a database that collects millions of pieces of data about keywords people frequently search for on the World Wide Web and the intent behind those keywords, based on the already available content.

    • In most cases, KGs are based on Semantic Web standards and have been generated by a mixture of automatic extraction from text or structured data and manual curation work.
    • Structured search & exploration, e.g. Google Knowledge Graph, Amazon Product Graph
    • Graph mining & network analysis, e.g. Facebook Entity Graph
    • Big data integration, e.g. IBM Watson
    • Related tools and vendors: Diffbot, GraphIQ, Maana, ParseHub, Reactor Labs, SpazioDati


    Joy Mustafi - Human-Machine Interaction through Multi-Modal Interface with Combination of Speech, Text, Image and Sensor Data

    45 Mins
    Talk
    Intermediate

    Introduction

    In the context of human–computer interaction, a modality is the classification of a single independent channel of sensory input/output between a computer and a human. A system is designated uni-modal if it has only one modality implemented, and multi-modal if it has more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; if multiple modalities are available for the same task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively. Modalities can be generally defined in two forms: human-computer and computer-human modalities.

    With the increasing popularity of smartphones, the general public are becoming more comfortable with the more complex modalities. Speech recognition was a major selling point of the iPhone and following Apple products, with the introduction of Siri. This technology gives users an alternative way to communicate with computers when typing is less desirable. However, in a loud environment, the audition modality is not quite effective. This exemplifies how certain modalities have varying strengths depending on the situation. Other complex modalities such as computer vision in the form of Microsoft's Kinect or other similar technologies can make sophisticated tasks easier to communicate to a computer especially in the form of three dimensional movement.

    This talk is based on a physical robot (a personalized humanoid built at MUST Research) equipped with various types of input devices and sensors that allow it to receive information from humans. The devices are interchangeable, use a standardized method of communication with the computer, afford practical adjustments to the user, and provide a richer interaction depending on the context, making for a robust system with features such as keyboard, pointing device, touchscreen, computer vision, speech recognition, and motion and orientation sensing.

    Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users' understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs. They can infer and even reason based on broad objectives.

    In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain or mind senses, reasons, and responds to stimulus. It is an interdisciplinary field of study concerned with creating computers and computer software capable of intelligent behavior, in which a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience and biology.

    Computer–Human Modalities

    Computers utilize a wide range of technologies to communicate and send information to humans:

    • Vision – computer graphics typically through a screen
    • Audition – various audio outputs
    • Tactition – vibrations or other movement
    • Gustation (taste)
    • Olfaction (smell)
    • Thermoception (heat)
    • Nociception (pain)
    • Equilibrioception (balance)

    Human–computer Modalities

    Computers can be equipped with various types of input devices and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and afford practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.

    • Keyboard
    • Pointing device
    • Touchscreen
    • Computer vision
    • Speech recognition
    • Motion
    • Orientation

    Project Features

    Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.

    Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, services, as well as with people.

    Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

    Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
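
    As a toy illustration of the interactive, iterative and stateful requirements, here is a minimal sketch of a dialogue loop that asks a clarifying question when a request is ambiguous and remembers previous turns. The intents and rules are invented for illustration; a real system would use trained language understanding.

```python
# Toy stateful dialogue loop: asks for clarification when a request is
# ambiguous and keeps context across turns. Intents are invented.
class DialogueAgent:
    def __init__(self):
        self.context = {}                      # state kept across turns

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        if "weather" in text:
            self.context["intent"] = "weather"
            if " in " not in text:             # ambiguous: no location given
                return "Which city do you mean?"
            self.context["city"] = text.split(" in ")[-1].strip("? ")
            return f"Looking up the weather in {self.context['city']}."
        if self.context.get("intent") == "weather" and "city" not in self.context:
            self.context["city"] = utterance.strip()  # treat reply as the answer
            return f"Looking up the weather in {self.context['city']}."
        return "Could you rephrase that?"

agent = DialogueAgent()
print(agent.respond("What's the weather?"))    # -> Which city do you mean?
print(agent.respond("Kolkata"))                # -> Looking up the weather in Kolkata.
```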


    Dr. Saptarsi Goswami - Meta-features and clustering-based approaches for feature selection

    45 Mins
    Tutorial
    Beginner

    Feature selection is one of the most important processes for pattern recognition, machine learning and data mining problems. A successful feature selection method improves learning model performance and interpretability, and reduces the computational cost of the classifier by dimensionality reduction of the data. Feature selection refers to retaining discriminatory features while discarding redundant and irrelevant ones: a subset of D features is selected from a set of N features (D < N). Dimensionality reduction can also be achieved by projecting higher-dimensional data to a lower dimension, normally referred to as feature extraction; this work addresses the former, i.e. feature subset selection. An optimal feature subset selection method comprises 1) an evaluation function for measuring the goodness of a feature or a feature subset and 2) a search algorithm to find the best subset of features among all possible subsets of the whole feature set. Based on the nature of the objective function used in the search algorithms, feature subset selection algorithms are broadly classified into filter and wrapper approaches. Classifier-dependent wrapper approaches use classifier accuracy as the objective function, while filter approaches use an evaluation function representing the intrinsic characteristics of the data set, so that the resulting feature subset works equally well for any classifier. This work focuses on the filter-based feature subset selection approach.

    Initially, a study has been done with currently available search-based filter-type feature selection algorithms for supervised as well as unsupervised classification, with both single-objective and multi-objective evaluation functions. Some improvements over the current algorithms have been proposed and their efficiency examined by simulation experiments with benchmark data sets.

    In the second step, an inexpensive feature evaluation measure based on feature relevance, to be used with filter-type feature selection for unsupervised classification, has been proposed. The literature study showed that the concept of feature relevance for unsupervised classification is difficult to define, and that current methods are complex and time-consuming. The proposed measure, which considers individual variability as well as overall variability of the data set, is found to be effective compared to current methods in simulation experiments with benchmark data sets.

    Thirdly, most current feature selection algorithms are based on search strategies to find the best feature subset from the available feature set. For a large number of features, exhaustive search is computationally prohibitive, which leads to a combinatorial optimization problem solved with some sort of heuristic, and the computational time for optimal feature subset selection grows with the number of features. An alternative, not yet sufficiently explored, is to use clustering of the features to find the best feature subset. In this work, an efficient clustering-based feature selection algorithm has been proposed and simulation experiments performed with benchmark data sets. The main contributions of the proposed algorithm are a novel method to determine the optimal number of clusters, a way of interpreting the importance of the feature clusters, and a method of selecting the final subset of features from the feature clusters.

    Finally, although many feature selection algorithms are available, it is very difficult to decide which algorithm suits a particular real-world application. A study has been done to establish the relation between the feature selection algorithm and the characteristics of the data set: a technique has been proposed to describe a data set by its intrinsic characteristics, represented by some meta-features, and a feature selection strategy is then recommended based on these characteristics. It has been implemented with benchmark data sets to judge its effectiveness.
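
    As a rough sketch of the general clustering-based idea (not the specific algorithm proposed in this talk): group features by correlation with hierarchical clustering and keep one representative per cluster. The representative choice (highest variance) and the fixed cluster count are simplifying assumptions.

```python
# Generic clustering-based feature selection sketch: cluster features by
# correlation and keep the most variable feature from each cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def select_features(X: np.ndarray, n_clusters: int) -> list[int]:
    corr = np.corrcoef(X, rowvar=False)          # feature-feature correlation
    dist = 1.0 - np.abs(corr)                    # highly correlated -> close
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        variances = X[:, members].var(axis=0)
        selected.append(int(members[np.argmax(variances)]))  # one per cluster
    return selected

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
X = np.hstack([base, base + 0.01 * rng.normal(size=(200, 3))])  # redundant copies
print(select_features(X, n_clusters=3))   # one feature per correlated group
```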


    Siboli Mukherjee - AI in Telecommunication - An Obstacle or Opportunity

    45 Mins
    Talk
    Executive

    Introduction

    “Alexa, launch Netflix!”

    No longer limited to providing basic phone and Internet service, the telecom industry is at the epicentre of technological growth, led by its mobile and broadband services in the Internet of Things (IoT) era. This growth is expected to continue. The driver for this growth? Artificial intelligence (AI).

    Artificial intelligence applications are revolutionizing the way telecoms operate, optimize and provide service to their customers.

    Today’s communications service providers (CSPs) face increasing customer demands for higher quality services and better customer experiences (CX). Telecoms are addressing these opportunities by leveraging the vast amounts of data collected over the years from their massive customer base. This data is culled from devices, networks, mobile applications, geolocations, detailed customer profiles, services usage and billing data.

    Telecoms are harnessing the power of AI to process and analyse these huge volumes of Big Data in order to extract actionable insights to provide better customer experiences, improve operations, and increase revenue through new products and services.

    With Gartner forecasting that 20.4 billion connected devices will be in use worldwide by 2020, more and more CSPs are jumping on the bandwagon, recognizing the value of artificial intelligence applications in the telecommunications industry.

    Forward-thinking CSPs have focused their efforts on four main areas where AI has already made significant inroads in delivering tangible business results: network optimization, predictive maintenance, virtual assistants, and robotic process automation (RPA).

    Network optimization

    AI is essential for helping CSPs build self-optimizing networks (SONs), where operators have the ability to automatically optimize network quality based on traffic information by region and time zone. Artificial intelligence applications in the telecommunications industry use advanced algorithms to look for patterns within the data, enabling telecoms to both detect and predict network anomalies, and allowing operators to proactively fix problems before customers are negatively impacted.
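
    As a rough illustration of this kind of anomaly detection on network KPIs, here is a minimal sketch. The traffic data, KPI names and thresholds are synthetic; a real self-optimizing-network pipeline would be far more involved.

```python
# Minimal network-anomaly detection sketch: flag unusual cell-level KPI
# patterns with an Isolation Forest. All traffic data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# toy KPIs per cell and hour: [throughput_mbps, latency_ms, drop_rate_pct]
normal = rng.normal([100, 20, 0.5], [10, 3, 0.1], size=(500, 3))
anomalies = rng.normal([30, 80, 4.0], [5, 10, 0.5], size=(5, 3))
kpis = np.vstack([normal, anomalies])

model = IsolationForest(contamination=0.01, random_state=0).fit(kpis)
flags = model.predict(kpis)               # -1 marks an anomalous sample
print("flagged rows:", np.where(flags == -1)[0])
```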

    Some popular AI solutions for telecoms are ZeroStack’s ZBrain Cloud Management, which analyses private cloud telemetry storage and use for improved capacity planning, upgrades and general management; Aria Networks, an AI-based network optimization solution that counts a growing number of Tier-1 telecom companies as customers, and Sedona Systems’ NetFusion, which optimizes the routing of traffic and speed delivery of 5G-enabled services like AR/VR. Nokia launched its own machine learning-based AVA platform, a cloud-based network management solution to better manage capacity planning, and to predict service degradations on cell sites up to seven days in advance.

    Predictive maintenance

    AI-driven predictive analytics are helping telecoms provide better services by utilizing data, sophisticated algorithms and machine learning techniques to predict future results based on historical data. This means telecoms can use data-driven insights to monitor the state of equipment, predict failure based on patterns, and proactively fix problems with communications hardware, such as cell towers, power lines, data centre servers, and even set-top boxes in customers' homes.

    In the short term, network automation and intelligence will enable better root cause analysis and prediction of issues. Long term, these technologies will underpin more strategic goals, such as creating new customer experiences and dealing efficiently with business demands. An innovative solution by AT&T is using AI to support its maintenance procedures: the company is testing a drone to expand its LTE network coverage and to utilize the analysis of video data captured by drones for tech support and infrastructure maintenance of its cell towers. Predictive maintenance is not only effective on the network side, but on the customer's side as well. Dutch telecom KPN analyses the notes generated by its call centre agents, and uses the insights generated to make changes to the interactive voice response (IVR) system.
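
    As a rough sketch of failure prediction from historical telemetry, here is a minimal example. All data, features and the failure rule are synthetic, invented purely to make the sketch runnable.

```python
# Minimal predictive-maintenance sketch: train a classifier on historical
# equipment telemetry to predict failures. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# hypothetical per-tower features: [temperature_c, vibration, age_years, error_count]
X = np.column_stack([
    rng.normal(35, 8, n), rng.gamma(2.0, 1.0, n),
    rng.uniform(0, 15, n), rng.poisson(2, n),
])
# synthetic ground truth: hot, shaky, old, error-prone towers fail more often
risk = 0.04 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + 0.3 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > np.percentile(risk, 90)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```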

    Virtual Assistants

    Conversational AI platforms — known as virtual assistants — have learned to automate and scale one-on-one conversations so efficiently that they are projected to cut business expenses by as much as $8 billion in the next five years. Telecoms have turned to virtual assistants to help contend with the massive number of support requests for installation, set-up, troubleshooting and maintenance, which often overwhelm customer support centres. Using AI, telecoms can implement self-service capabilities that instruct customers how to install and operate their own devices.
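
    A minimal sketch of the core of such a self-service assistant: match a customer query to the closest known intent via TF-IDF similarity. The intents and canned phrasings are invented for illustration.

```python
# Toy intent routing for a telecom support assistant: pick the known
# intent closest to the query by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

INTENTS = {
    "router_setup":  "how do I install and set up my new router",
    "bill_question": "why is my bill higher than last month",
    "no_signal":     "my phone has no signal or network coverage",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform(INTENTS.values())

def route(query: str) -> str:
    sims = cosine_similarity(vec.transform([query]), matrix)[0]
    return list(INTENTS)[sims.argmax()]       # best-matching intent key

print(route("help setting up the router I just received"))  # -> router_setup
```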

    Vodafone introduced its new chatbot — TOBi — to handle a range of customer service-type questions. The chatbot scales responses to simple customer queries, thereby delivering the speed that customers demand. Nokia's virtual assistant MIKA suggests solutions for network issues, leading to a 20% to 40% improvement in first-time resolution.

    Robotic process automation (RPA)

    CSPs all have vast numbers of customers and an endless volume of daily transactions, each susceptible to human error. Robotic Process Automation (RPA) is a form of business process automation technology based on AI. RPA can bring greater efficiency to telecommunications functions by allowing telecoms to more easily manage their back office operations and the large volumes of repetitive and rules-based processes. By streamlining execution of once complex, labor-intensive and time-consuming processes such as billing, data entry, workforce management and order fulfillment, RPA frees CSP staff for higher value-add work.

    According to a survey by Deloitte, 40% of Telecom, Media and Tech executives say they have garnered “substantial” benefits from cognitive technologies, with 25% having invested $10 million or more. More than three-quarters expect cognitive computing to “substantially transform” their companies within the next three years.

    Summary

    Artificial intelligence applications in the telecommunications industry are increasingly helping CSPs manage, optimize and maintain not only their infrastructure, but their customer support operations as well. Network optimization, predictive maintenance, virtual assistants and RPA are examples of use cases where AI has impacted the telecom industry, delivering an enhanced CX and added value for the enterprise overall.