Let the Machine THINK for You

Sep 1st, 02:55 - 03:15 PM  |  Location: Jupiter  |  43 Interested

Every organization is now focused on its business and customer data and is trying hard to get actionable insights out of it. Most are either hiring data scientists or up-skilling their existing developers. These people understand the domain, the relevant data, and the business impact, but are not necessarily strong in data science programming or cognitive computing. To bridge this gap, IBM offers Watson Machine Learning (WML), a service for creating, deploying, scoring, and managing machine learning models. WML's model creation, deployment, and management capabilities are key components of cognitive applications. The essential feature is its "self-learning" capability, personalized and customized for a specific persona, be it an executive or business leader, project manager, financial expert, or sales advisor. WML makes cognitive prediction easy with its model flow capabilities, where machine learning and prediction can be applied with just a few clicks and work seamlessly without a bunch of coding, with different personas marking the boundaries between developers, data scientists, and business analysts. In this session, WML's capabilities will be demonstrated through a specific case study that solves a real-world business problem, along with the challenges faced. To align with the developer community, the architecture of this smart platform will be highlighted to help aspiring developers understand the design of a large-scale product.
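
The abstract above centers on WML's create-deploy-score lifecycle. For readers who want to see what that lifecycle looks like when done by hand, here is a minimal, generic sketch using scikit-learn on synthetic data; it is not WML's API or the session's model flow, and every name in it is illustrative.

```python
# A minimal, generic sketch of the create -> train -> persist -> score lifecycle
# that WML's model flow automates behind its UI. This is NOT the WML API; it uses
# scikit-learn on synthetic data purely to illustrate the steps.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# stand-in for a business dataset such as customer churn
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)   # "create"
model.fit(X_train, y_train)                                         # "train"

joblib.dump(model, "churn_model.joblib")       # "deploy": persist the model artifact
scorer = joblib.load("churn_model.joblib")     # later, load it in a scoring service
print("holdout accuracy:", scorer.score(X_test, y_test))            # "score"
```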


Outline/Structure of the Demonstration

  • Case Study: Solving an End-to-End Real Business Problem Across Different Personas
  • Overall Architecture and System Flow
  • Challenges and Limitations

Learning Outcome

  • Understanding the business, the data, and the outcome, even without going deep into the algorithms
  • Knowing the platform from an architecture perspective through the experience of solving significant business problems
  • Thinking through a real-world problem and connecting the dots to get actionable insights

Target Audience

Data Science Enthusiasts, Students, Business Analysts, Business Executives, etc.

Prerequisites for Attendees

No coding required (the session uses the model flow and self-learning capabilities of WML).

Submitted 1 year ago

Public Feedback

Suggest improvements to the Speaker
  • Naresh Jain  ~  1 year ago

    Srijak, thanks for the proposal. Given your vast experience, I feel this proposal does not do justice to it. What you have in the proposal is easily available online. This also feels more like a product pitch to me.

    How about taking a specific case study where you've used WML's capabilities to solve real business problems? As part of the case study, it would be good to know the challenges faced in your journey and what one should be aware of when using WML.

    Also, as Vishal pointed out, please share links to your past video presentations. This helps the program committee understand your presentation style.

    Look forward to hearing from you.

    • Srijak Bhaumik  ~  1 year ago

      Hi Naresh,

      Thanks for your comments. The content of my proposal is definitely not meant to be a product pitch, as Watson Machine Learning is a popular product in this domain.
      For the talk, if you look at the outline, I will be talking about the architecture of the platform to help aspiring developers understand the design of a large-scale product, and I will also explain the different personas of WML that mark the boundaries between developers, data scientists, and business analysts. I am willing to demonstrate end-to-end problem solving and highlight the advantages. As I come from a product development background, my talk will align with the developer community.

      However, I wanted to refrain from going into coding through notebooks directly and instead use the model flow capabilities of WML to make the talk interesting for the entire expected audience. Hence the chosen theme of "taking a break from code".

      As per your other request, I'm afraid I don't have any video links to share at this time. I have given talks at IIM-Bangalore and other institutes in the recent past, but those were not recorded, so I cannot present them here.

      Thanks,
      Srijak

      • Naresh Jain  ~  1 year ago

        Thanks for the clarification, Srijak.

        As a conference, we would like to provide participants with insights that are not easily available online. Hence we do not accept overview sessions. We are interested in deep-dive sessions where someone can really share their first-hand experience in solving real business problems.

        I would recommend you reconsider the outline and the learning objectives of your session.

        • Srijak Bhaumik  ~  1 year ago

          OK Naresh, I had a phone conversation with one of the committee members [Joy] to clarify what is needed.

          I'll share first-hand experience of solving a real-life problem using WML and explain how the personas become effective in the solution.

          • Naresh Jain  ~  1 year ago

            Excellent! Thank you.

            Please update the proposal to reflect the same.

            • Srijak Bhaumik  ~  1 year ago

              Thanks Naresh, Updated.

              • Joy Mustafi  ~  1 year ago

                Thanks for updating the proposal and outline. All the best!

    • Vishal Gokhale  ~  1 year ago

      Thanks for the proposal, Srijak!

      Can you please share links to videos of any of your prior talks?
      This helps the program committee to get an idea of your presentation style.

      • Srijak Bhaumik  ~  1 year ago

        Hi Vishal,

        Thanks for your comments.

        As per your request, I'm afraid I don't have any video links to share at this time. I have given talks at IIM-Bangalore and other institutes in the recent past, but those were not recorded, so I cannot present them here.

        Thanks,
        Srijak

    • Dr. Savita Angadi  ~  1 year ago

      Looking forward to hearing this. I hope this will cover more than what is openly available.

      • Srijak Bhaumik  ~  1 year ago

        I'll try to cover the basic model builder and scoring flow. As suggested above, I'm designing this session to showcase the platform without using notebooks. I can give another session on the advanced capabilities.

    • Joy Mustafi  ~  1 year ago

      I like the aspects of the submission, and they should be retained, along with the cognitive platform development, apart from the various algorithms.

      • Srijak Bhaumik  ~  1 year ago

        Thank you, Joy, for your kind words. I really hope I get to present and benefit a lot of beginners with this platform.


    • Liked Dr. Dakshinamurthy V Kolluru

      Dr. Dakshinamurthy V Kolluru - ML and DL in Production: Differences and Similarities

      45 Mins
      Talk
      Beginner

      While architecting a data-based solution, one needs to approach the problem differently depending on the specific strategy being adopted. In traditional machine learning, the focus is mostly on feature engineering. In DL, the emphasis is shifting to tagging larger volumes of data with less focus on feature development. Similarly, synthetic data is a lot more useful in DL than in ML, so the data strategies can be significantly different. Both approaches require very similar approaches to error analysis, but in most development processes those approaches are not followed, leading to substantial delays in production times. Hyperparameter tuning for performance improvement requires different strategies between ML and DL solutions due to the longer training times of DL systems. Transfer learning is a very important aspect to evaluate in building any state-of-the-art system, whether ML or DL. Last but not least is understanding the biases that the system is learning. Deeply non-linear models require special attention in this aspect as they can learn highly undesirable features.

      In our presentation, we will focus on all the above aspects with suitable examples and provide a framework for practitioners for building ML/DL applications.
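
      As a concrete illustration of one of the aspects above, the sketch below shows transfer learning in a DL setting: reuse a pretrained backbone and train only a small task-specific head. TensorFlow/Keras, the MobileNetV2 backbone, the image size, and the class count are all assumptions for illustration, not details from the talk.

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone, freeze it,
# and train only a small task head. TensorFlow/Keras is an assumption here;
# the class count and image size are made up.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),   # hypothetical 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets not shown here
```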

    • Liked Santosh Vutukuri

      Santosh Vutukuri - Embedding Artificial Intelligence in Spreadsheet

      20 Mins
      Demonstration
      Intermediate

      In today's world, all of us are growing our data science capabilities. Many organizations are comfortable in spreadsheets (e.g., Microsoft Excel, Google Sheets, IBM Lotus, Apache OpenOffice Calc, Apple Numbers) and seriously do not want to switch to complex coding in R or Python, or to any other analytics tool available in the market. This proposal demonstrates how we can embed various artificial intelligence and machine learning algorithms into a spreadsheet and get meaningful insights for business or research benefit. This would be helpful to small-scale businesses from a data analysis perspective. This approach, with its user-friendly interface, creates real value in decision making.
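
      As a small illustration of keeping the intelligence inside the spreadsheet, the sketch below writes a toy sales history together with native Excel regression formulas (SLOPE, INTERCEPT, TREND) using openpyxl, so the model recalculates in the sheet itself with no external runtime. openpyxl, the layout, and the figures are assumptions, not the speaker's actual approach.

```python
# Write a tiny "ML inside the spreadsheet" example: raw data plus native Excel
# regression formulas, so the forecast lives and recalculates in the sheet.
# The sales figures and cell layout are made-up illustration data.
from openpyxl import Workbook

months = list(range(1, 9))
sales = [120, 135, 150, 160, 172, 181, 195, 210]   # hypothetical figures

wb = Workbook()
ws = wb.active
ws.append(["Month", "Sales"])
for m, s in zip(months, sales):
    ws.append([m, s])

ws["D1"] = "Slope"
ws["D2"] = "=SLOPE(B2:B9,A2:A9)"          # fitted in-sheet, no Python at runtime
ws["E1"] = "Intercept"
ws["E2"] = "=INTERCEPT(B2:B9,A2:A9)"
ws["F1"] = "Forecast month 9"
ws["F2"] = "=TREND(B2:B9,A2:A9,9)"

wb.save("sales_forecast.xlsx")
```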

    • Liked Dr. Manish Gupta

      Dr. Manish Gupta / Radhakrishnan G - Driving Intelligence from Credit Card Spend Data using Deep Learning

      45 Mins
      Talk
      Beginner

      Recently, we have heard success stories about how deep learning technologies are revolutionizing many industries. Deep learning has proven hugely successful on problems in unstructured data domains like image recognition, speech recognition, and natural language processing. However, limited gains have been shown in traditional structured data domains like BFSI. This talk covers American Express' exciting journey of exploring deep learning techniques to generate the next set of data innovations by deriving intelligence from the data within its global, integrated network. Learn how using credit card spend data has helped improve credit and fraud decisions and elevate the payment experience of millions of Card Members across the globe.

    • Liked Joy Mustafi

      Joy Mustafi - The Artificial Intelligence Ecosystem driven by Data Science Community

      45 Mins
      Talk
      Intermediate

      Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users' understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs; they can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain or mind senses, reasons, and responds to stimulus. It is an interdisciplinary field concerned with creating computers and computer software capable of intelligent behavior, in which a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience, and biology. The project features are:

      • Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.
      • Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, and services, as well as with people.
      • Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.
      • Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task, and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

      A set of cognitive systems is implemented and demonstrated as the project J+O=Y.

    • Liked Dr. Veena Mendiratta

      Dr. Veena Mendiratta - Network Anomaly Detection and Root Cause Analysis

      45 Mins
      Talk
      Intermediate

      Modern telecommunication networks are complex, consist of several components, generate massive amounts of data in the form of logs (volume, velocity, variety), and are designed for high reliability, as there is a customer expectation of always-on network access. It can be difficult to detect network failures with typical KPIs because the problems may be subtle, with mild symptoms (a small degradation in performance). In this workshop on network anomaly detection we will present the application of multivariate unsupervised learning techniques for anomaly detection, and root cause analysis using finite state machines. Once anomalies are detected, the message patterns in the logs of the anomaly data are compared to those of the normal data to determine where the problems are occurring. Additionally, the error codes in the anomaly data are analyzed to better understand the underlying problems. The data preprocessing methodology and feature selection methods will also be presented, to determine the minimum set of features that can provide information on the network state. The algorithms are developed and tested with data from a 4G network. The impact of applying such methods is the proactive detection and root cause analysis of network anomalies, thereby improving network reliability and availability.
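
      As a minimal illustration of multivariate unsupervised anomaly detection on log-derived KPIs, the sketch below applies an Isolation Forest to synthetic features. The algorithm choice, feature names, and data are assumptions for illustration, not the presenters' 4G pipeline.

```python
# Minimal sketch of multivariate unsupervised anomaly detection on log-derived
# KPIs. IsolationForest is a stand-in choice; the feature names and data are
# synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
kpis = pd.DataFrame({
    "attach_failures": rng.poisson(2, 1000),
    "handover_time_ms": rng.normal(45, 5, 1000),
    "throughput_mbps": rng.normal(80, 10, 1000),
})

X = StandardScaler().fit_transform(kpis)
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
kpis["anomaly"] = detector.predict(X) == -1    # True where a time window looks anomalous
print(kpis[kpis["anomaly"]].head())
```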

    • Liked Dr. Savita Angadi

      Dr. Savita Angadi - Connected Vehicle – is far more than just the car…

      45 Mins
      Talk
      Advanced


      For many IoT use cases there is a real challenge in streaming large amounts of data in real time, and the connected vehicle is no exception. Cars and trucks can generate terabytes of data daily, and connectivity can be spotty, especially in remote areas. To address this issue, companies will want to move the analysis to the edge, onto the device where the data is generated. We will walk through a case in which a streaming engine is installed on a gateway on a commercial vehicle. Data is analyzed locally on the vehicle as it is generated, and alerts are communicated via a cell connection. Models can be downloaded when a vehicle comes in for service, or over the air. The idea is to use data from the vehicle, like model, horsepower, oil temperature, etc., to build a decision tree to predict our target, turbo fault. Decision trees are nice in that they lay out the rules for your model clearly. In this case the model was predictive for certain engine horsepower ratings, time in service, model, and oil temperatures. The model generated acceptable accuracy with a 30-day window, plenty of time to act on the alert. To capture the value of this insight, we need to know immediately when a signal is detected, so the model runs natively on the vehicle, in our on-board analytics engine.
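
      A minimal sketch of the decision-tree idea described above: fit a tree on vehicle attributes to predict a turbo fault and print the resulting rules. The synthetic data, column names, and scikit-learn are assumptions; the actual solution runs in an on-board streaming analytics engine.

```python
# Minimal sketch of the decision-tree idea in the abstract: predict a turbo fault
# from vehicle attributes and print the resulting rules. The synthetic data and
# column names are hypothetical; the real solution runs at the edge.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "horsepower": rng.choice([350, 400, 450, 500], n),
    "oil_temp_c": rng.normal(95, 8, n),
    "time_in_service_days": rng.integers(30, 2000, n),
})
# toy label: hotter oil plus older engines fail more often
risk = (df["oil_temp_c"] > 105) & (df["time_in_service_days"] > 900)
df["turbo_fault_within_30d"] = (risk | (rng.random(n) < 0.02)).astype(int)

features = ["horsepower", "oil_temp_c", "time_in_service_days"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["turbo_fault_within_30d"], test_size=0.3, random_state=1)

tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced").fit(X_train, y_train)
print(export_text(tree, feature_names=features))   # the human-readable rules the talk mentions
print("holdout accuracy:", tree.score(X_test, y_test))
```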

    • Liked Venkatraman J

      Venkatraman J - Detection and Classification of Fake news using Convolutional Neural networks

      20 Mins
      Talk
      Intermediate

      The proliferation of fake news and rumours on traditional news media sites, social media, feeds, and blogs has made it extremely difficult and challenging to trust any news in day-to-day life. False information has wide implications for both individuals and society. Even though humans can identify and classify fake news through heuristics, common sense, and analysis, there is a huge demand for an automated computational approach to achieve scalability and reliability. This talk explains how neural probabilistic models using deep learning techniques are used to classify and detect fake news.

      This talk will start with an introduction to deep learning, TensorFlow (Google's deep learning framework), dense vectors (the word2vec model), feature extraction, data preprocessing techniques, feature selection, and PCA, and then move on to explain how a scalable machine learning architecture for fake news detection can be built.
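
      As a compact illustration of a CNN text classifier of the kind described, the sketch below stacks an embedding, a 1-D convolution, and global max pooling in Keras. The vocabulary size, sequence length, and the use of a trainable embedding instead of pretrained word2vec vectors are simplifying assumptions.

```python
# Compact sketch of a CNN text classifier for fake-vs-real news. Keras/TensorFlow,
# the vocabulary size, sequence length, and a trainable embedding (instead of
# pretrained word2vec vectors) are all simplifying assumptions.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 300, 100   # hypothetical settings

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),   # n-gram-like feature detectors
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # 1 = fake, 0 = genuine
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_split=0.1, epochs=3)  # padded token-id sequences
```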

    • Liked Dr. Savita Angadi

      Dr. Savita Angadi - What Chaos and Fractals has to do with Machine Learning?

      45 Mins
      Talk
      Advanced

      The talk will cover how chaos and fractals are connected to machine learning. Artificial intelligence is an attempt to model the characteristics of the human brain, and this has led to models that use connected elements, essentially neurons. Most of the biological-system and simulation-related developments in neural networks have practical results from a computer science point of view, and chaos theory has a good chance of being one of these developments. The brain itself is a good example of a chaotic system. Several attempts have been made to take advantage of chaos in artificial neural systems to reproduce these benefits, and they have met with quite a bit of success.

    • Liked Venkatraman J

      Venkatraman J - Hands on Data Science. Get hands dirty with real code!!!

      45 Mins
      Workshop
      Intermediate

      Data science refers to the science of extracting useful information from data. Knowledge discovery in databases, data mining, and information extraction also closely match data science. Supervised, semi-supervised, and unsupervised learning methodologies have moved out of academia and penetrated deep into industry, leading to actionable insights, dashboard-driven development, data-driven reasoning, and so on. Data science has been the buzzword in industry for the last few years, with only a handful of data scientists around the world, and the industry will need more and more data scientists in the future to solve problems using statistical techniques. The exponential availability of unstructured data from the web has thrown huge challenges at data scientists to exploit that data before drawing conclusions.

      Now that's an overload of information and buzzwords. It all has to start somewhere. Where and how to start? How do you get your hands dirty rather than just reading books and blogs? Is it really science or just code? Let's get into code to talk data science.

      In this workshop I will show the tools required to do real data science, rather than just reading about it, by building real models using deep neural networks and giving a live demo. I will also share some of the key data science techniques every aspiring data scientist should have to thrive in the industry.

    • Liked Saibal Dutta

      Saibal Dutta - A Multi-criteria Decision Making Approach and its Applications in Business

      45 Mins
      Tutorial
      Intermediate

      We live in a world of information where decisions play a very important role. Humans are considered one of the best species since we like to think and quantitatively analyze our decision process based on the available information. But a question may arise:


      1. How can we take the right decision or, mathematically, find the optimal solution among multiple alternatives?


      In the academic world, this knowledge is known as multiple-criteria decision making (MCDM), which involves making decisions in the presence of multiple, usually conflicting, criteria and alternatives. Multi-Criteria Decision Aid (MCDA) or Multi-Criteria Decision Making (MCDM) methods have been successfully utilized by researchers and practitioners in evaluating, assessing, and ranking alternatives across diverse business problems. As we know, every decision involves trade-offs between risk and opportunity. So, the next question may arise:


      2. How to minimize the risk and define it mathematically for a given business problem?


      Among the many MCDA/MCDM methods developed to solve real-world decision problems, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) continues to work satisfactorily across different real business application areas. So, the next question comes:


      3. How to apply TOPSIS methods to design and solve any practical business problem?


      In this session, we will try to understand the theory behind decision science and discuss the step-by-step implementation of business cases in an open-source environment.

      The overall objective of the session is to show how to design decisions based on knowledge and planning in business.
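
      As a worked illustration of the TOPSIS steps named above (normalize, weight, find the ideal and anti-ideal solutions, rank by relative closeness), here is a minimal NumPy sketch on a made-up supplier-selection matrix; the criteria, weights, and numbers are illustrative only.

```python
# Minimal NumPy sketch of TOPSIS: normalize, weight, find the ideal / anti-ideal
# solutions, and rank alternatives by relative closeness. The supplier matrix,
# weights, and criterion directions are made-up illustration data.
import numpy as np

# rows = alternatives (suppliers); columns = criteria: cost, quality, delivery days
X = np.array([[250.0, 8.5, 5],
              [200.0, 7.0, 9],
              [300.0, 9.2, 3]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, True, False])       # cost and delivery days: lower is better

R = X / np.sqrt((X ** 2).sum(axis=0))          # vector-normalize each criterion
V = R * weights                                # weighted normalized decision matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus  = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to the ideal solution
d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))    # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)            # higher = closer to the ideal

print("closeness scores:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness) + 1)
```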