The Artificial Intelligence Ecosystem driven by Data Science Community

Sep 1st, 02:00 - 02:45 PM | Grand Ball Room 1 | 24 Interested

Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users' understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs: they can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing whose goal is to model more accurately how the human brain or mind senses, reasons, and responds to stimuli. It is an interdisciplinary field in which a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience and biology.

Project features:

  • Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.
  • Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices and services, as well as with people.
  • Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.
  • Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

{A set of cognitive systems is implemented and demonstrated as the project J+O=Y}

 
 

Outline/Structure of the Talk

  • Introduction
  • Ecosystem
  • Objectives
  • Activities
  • Applications (Project J+O=Y)
    • Problem Solving with Text and Numbers
    • Analytical Questions with Tables and Charts
    • Problem Solving with Vision
    • Connected Robot through Sensor and Voice Control
  • Innovativeness
  • Market Impact
  • Social Impact
  • Sustainability
  • Inter-operability
  • Collaboration
  • Patents / Publications

Learning Outcome

Attendees will benefit by joining the data science research community and continuing deep research in artificial intelligence on a sustainable platform.

Subject Matters:

  • Artificial Intelligence and Machine Learning
  • Natural Language Processing and Text Analytics
  • Image Processing and Computer Vision
  • Audio Signal Processing and Speech Technology
  • Embedded Systems and Internet of Things

Target Audience

Anyone interested in building the AI ecosystem across the planet.

Prerequisites for Attendees

Basic Mathematics, Statistics and Programming!

Submitted 11 months ago

Public Feedback

Suggest improvements to the Speaker
  • Vishal Gokhale  ~  11 months ago

    Thanks for the proposal, Joy! :-)
    This is a fairly advanced topic. It would be very interesting to listen to.

    It's great that you would be covering the applications.
    Would you also be covering an end-to-end solution of a simplified problem?
    I think that would be very helpful for the audience to "see" how cognitive systems are different from conventional products / systems.
    I think it will give the audience a better gut feel of how such a seemingly complex, abstract idea can be turned into reality, and thus will also get people interested in building cognitive systems themselves.
    Please share your thoughts.

    • Joy Mustafi  ~  11 months ago

      {A set of cognitive systems is implemented and demonstrated as the project J+O=Y}

      Answering Arithmetic and Numerical Queries

      The solution is a computer-based question-answering system that can understand an arithmetic or algebraic math problem stated in natural language and provide an answer or solution in real time. The core idea consists of the following key steps: get the input problem statement and the question to be answered; determine whether the original sentences are well-formed from a mathematical perspective; if required, convert the input sentences into a sequence of sentences that are well-formed from a mathematical perspective; convert the well-formed sentences into mathematical equations; solve the set of equations using applicable logic or mathematical methods to get a mathematical result; correlate the mathematical result to the original question; and narrate the mathematical result in natural language, as an answer to the original question.

      Given a numerical question, its answer is computed using a sequence of known formulas: given a set of formulas in a domain, a formula dependency graph is created for the domain; given a textual question, a set of nodes in the graph is observed and a set of nodes needs to be predicted. This requires methods for variable identification, methods to determine value assignments for variables and to discover dependent variables, and unit normalization so that all units are put into a single canonical form. Graphical model inference mechanisms are used to determine the values of dependent variables given the observed variables. A minimal sketch of the equation-solving stage is shown below.
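
      The equation-forming and equation-solving stages can be illustrated with a small sketch. This is not the project's actual pipeline: it assumes a toy word problem whose sentences have already been converted into well-formed equations, and uses SymPy only for the solving and narration steps.

```python
# A minimal sketch of the "convert well-formed sentences to equations and solve" steps.
# Assumes the NLP front end has already produced well-formed mathematical sentences;
# the problem text and the extracted equations below are hypothetical examples.
import sympy as sp

# Toy problem: "The sum of two numbers is 30. One number is twice the other. Find them."
x, y = sp.symbols("x y")

# Equations produced by the (assumed) sentence-to-equation conversion step.
equations = [
    sp.Eq(x + y, 30),   # "The sum of two numbers is 30."
    sp.Eq(x, 2 * y),    # "One number is twice the other."
]

# Solve the system of equations.
solution = sp.solve(equations, [x, y], dict=True)[0]

# Narrate the mathematical result in natural language, answering the original question.
print(f"The two numbers are {solution[x]} and {solution[y]}.")
# -> The two numbers are 20 and 10.
```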

      Automatic Data Interpretation and Answering Analytical Questions with Tables and Charts

      This project addresses questions asked about a table, a chart, or both, along with accompanying textual information. This is a typical descriptive question, which requires analyzing the question, narrating the charts and tables, forming equations, solving the analytical problem and narrating the answer. The method analyzes a data interpretation question with a table and/or chart using natural language processing, extracting information from the question and mapping the extracted information into mathematical relations to understand the question. It further extracts table and/or chart information from tables and charts in image format, including data values (numerals), data labels, headers, footers, legends, overlays, units, lines or axes, edges, contours, shapes, lengths, proportions, angles and other text in the image, using a combination of pattern recognition and optical character recognition. Finally, it forms and solves a set of mathematical equations from the extracted table and/or chart information and the mathematical relations to produce the answer. A minimal sketch of the image-extraction step is shown below.
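
      The OCR-based extraction can be sketched as follows, assuming a simple table image and using pytesseract; the real system additionally parses charts (axes, legends, shapes) with pattern recognition. The file name and the analytical question are hypothetical.

```python
# Minimal sketch: extract numeric values from a table image with OCR, then answer a
# simple analytical question. The file name and question are hypothetical; the real
# system also parses charts (axes, legends, shapes) with pattern recognition.
import re

import pytesseract
from PIL import Image

# OCR the table image into raw text (requires the Tesseract binary to be installed).
raw_text = pytesseract.image_to_string(Image.open("sales_table.png"))

# Pull out the numerals recognised in the table.
values = [float(v) for v in re.findall(r"\d+(?:\.\d+)?", raw_text)]

# Map the extracted information into simple mathematical relations and answer an
# analytical question such as "What are the total and the average of the values?"
if values:
    total = sum(values)
    average = total / len(values)
    print(f"Total = {total}, Average = {average:.2f}")
else:
    print("No numeric values could be extracted from the image.")
```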

      Solving Visual Puzzles (Sudoku, Maze, Finding Differences)

      A system which can solve visual puzzles like sudoku, mazes or finding differences using computer vision and pattern recognition. The system extracts features from photographed images of newspaper cuttings, identifies the relevant information (such as the digits of a sudoku grid), and solves the puzzle automatically. For digit or character recognition, a supervised machine learning approach is used; for solving the puzzles, the approach is rule-based. Overall, the project is being expanded to similar visual puzzles. A minimal rule-based sudoku solver is sketched below.
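
      A minimal rule-based sudoku solver (backtracking over the standard row, column and box constraints). In the project the grid would come from the digit-recognition step; here it is assumed to be given as a 9x9 list of lists with 0 marking empty cells.

```python
# Minimal rule-based sudoku solver (backtracking). In the project, `grid` would be
# produced by the digit-recognition step from a photographed puzzle; 0 marks an
# empty cell.
def valid(grid, r, c, d):
    """Check the standard sudoku constraints for digit d at row r, column c."""
    if d in grid[r]:
        return False
    if d in (grid[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells (0) in place via backtracking; return True if solvable."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False
    return True
```

      Calling solve(grid) on the recognised grid fills the empty cells in place and returns True when a solution exists.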

      Controlling Internet Connected Robot through Sensor and Voice for Problem Solving

      This is a cognitive robot for personal use. She responds to voice commands: the user can ask her to introduce herself, tell the time, tell jokes and sing, walk along with the user, or demonstrate exercises and dancing. She is 61 cm tall with a 3D-printed chassis. The robot has a mic to listen and a speaker to respond, along with 4 servos and 2 motors for motion control, and a microprocessor to process data: it receives the text string from the STT (speech-to-text) engine, handles the input by processing it, and passes the output to the TTS (text-to-speech) engine. It handles user queries via a series of if-then-else clauses in Python and decides what the output should be in response to specific inputs. Overall, she has voice recognition ability to interact with the user. This is an internet-connected robot, linked to the cloud via a wireless module, enabling the user to operate the system from anywhere through an interface. It can be used in applications such as surveillance, tele-presence, or computer vision to detect and process objects. A minimal sketch of the voice-command handling is shown below.
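
      The if-then-else handling between the STT and TTS engines can be sketched as below. The listen/speak integrations and the servo and motor control are specific to the robot's hardware, so they are only stubbed out or noted in comments; the commands shown are illustrative.

```python
# Minimal sketch of the robot's command handling: text from the STT engine is matched
# with if-then-else rules and the chosen response is passed to the TTS engine.
# The real robot would also trigger its servo/motor controllers for walk/dance, and
# the STT/TTS calls themselves are hardware-specific and not shown here.
from datetime import datetime

def handle_command(text: str) -> str:
    """Decide the robot's spoken response for a recognised voice command."""
    text = text.lower()
    if "introduce" in text:
        return "Hello, I am a cognitive robot built for the project J+O=Y."
    elif "time" in text:
        return "The time is " + datetime.now().strftime("%I:%M %p")
    elif "joke" in text:
        return "Why did the robot go on holiday? It needed to recharge."
    elif "walk" in text:
        return "Starting to walk."   # would also trigger the motor controller
    elif "dance" in text:
        return "Watch me dance!"     # would also trigger the servo routine
    else:
        return "Sorry, I did not understand that command."

# Example: text produced by the STT engine for one utterance; the returned string
# would be passed to the TTS engine.
print(handle_command("What is the time?"))
```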

    • Joy Mustafi  ~  11 months ago

      Updated for 45 minutes.

    • Joy Mustafi  ~  11 months ago

      Yes, I will be covering end to end solutions for multiple problems in a simplified way :)


  • Liked Dr. Dakshinamurthy V Kolluru

    Dr. Dakshinamurthy V Kolluru - ML and DL in Production: Differences and Similarities

    45 Mins
    Talk
    Beginner

    While architecting a data-based solution, one needs to approach the problem differently depending on the specific strategy being adopted. In traditional machine learning, the focus is mostly on feature engineering; in DL, the emphasis shifts to tagging larger volumes of data with less focus on feature development. Similarly, synthetic data is a lot more useful in DL than in ML, so the data strategies can be significantly different. Both require very similar approaches to the analysis of errors, but in most development processes those approaches are not followed, leading to substantial delays in production times. Hyperparameter tuning for performance improvement requires different strategies between ML and DL solutions due to the longer training times of DL systems. Transfer learning is a very important aspect to evaluate in building any state-of-the-art system, whether ML or DL. Last but not least is understanding the biases that the system is learning. Deeply non-linear models require special attention in this aspect, as they can learn highly undesirable features.

    In our presentation, we will focus on all the above aspects with suitable examples and provide a framework for practitioners for building ML/DL applications.

  • Liked Atin Ghosh

    Atin Ghosh - AR-MDN - Associative and Recurrent Mixture Density Network for e-Retail Demand Forecasting

    45 Mins
    Case Study
    Intermediate

    Accurate demand forecasts can help on-line retail organizations better plan their supply-chain processes. The challenge, however, is the large number of associative factors that result in large, non-stationary shifts in demand, which traditional time-series and regression approaches fail to model. In this paper, we propose a Neural Network architecture called AR-MDN that simultaneously models associative factors, time-series trends and the variance in the demand. We first identify several causal features and use a combination of feature embeddings, MLP and LSTM to represent them. We then model the output density as a learned mixture of Gaussian distributions. The AR-MDN can be trained end-to-end without the need for additional supervision. We experiment on a dataset of a year's worth of data over tens of thousands of products from Flipkart. The proposed architecture yields a significant improvement in forecasting accuracy when compared with existing alternatives.
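
    The mixture-density output idea can be sketched generically: a network head emits the weights, means and standard deviations of a Gaussian mixture and is trained with the negative log-likelihood. This PyTorch snippet is only an illustration of that idea, not the authors' AR-MDN implementation (which also uses embeddings and an LSTM encoder); the tensor shapes are assumptions.

```python
# Generic mixture-density-network head in PyTorch: the network outputs the weights,
# means and standard deviations of a Gaussian mixture over demand, and is trained by
# minimising the negative log-likelihood. Illustration only, not the paper's AR-MDN.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_dim: int, n_components: int = 5):
        super().__init__()
        self.pi = nn.Linear(in_dim, n_components)          # mixture weights (logits)
        self.mu = nn.Linear(in_dim, n_components)          # component means
        self.log_sigma = nn.Linear(in_dim, n_components)   # log std-devs (positivity)

    def forward(self, h):
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of targets y under the predicted Gaussian mixture."""
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))               # per-component log density
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Hypothetical usage: `h` would be the output of the feature/LSTM encoder.
h = torch.randn(32, 64)          # batch of 32 encoded time steps
y = torch.rand(32)               # observed demand
head = MDNHead(in_dim=64)
loss = mdn_nll(*head(h), y)
loss.backward()
```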

  • Liked Favio Vázquez

    Favio Vázquez - Agile Data Science Workflows with Python, Spark and Optimus

    Favio Vázquez
    Sr. Data Scientist
    Raken Data Group
    10 months ago
    Sold Out!
    480 Mins
    Workshop
    Intermediate

    Cleaning, preparing, transforming and exploring data is the most time-consuming and least enjoyable data science task, but one of the most important ones. With Optimus we've solved this problem for small or huge datasets, also improving a whole workflow for data science and making it easier for everyone. You will learn how the combination of Apache Spark and Optimus with the Python ecosystem can form a whole framework for Agile Data Science, allowing people and companies to go further, and beyond their common sense and intuition, to solve complex business problems.

  • Liked Sohan Maheshwar

    Sohan Maheshwar - It's All in the Data: The Machine Learning Behind Alexa's AI Systems

    Sohan Maheshwar
    Alexa Evangelist
    Amazon
    11 months ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Amazon Alexa, the cloud-based voice service that powers Amazon Echo, provides access to thousands of skills that enable customers to voice control their world - whether it's listening to music, controlling smart home devices, listening to the news or even ordering a pizza. Alexa developers use advanced natural language understanding capabilities like built-in slot and intent training, entity resolution, and dialog management. This natural language understanding is powered by advanced machine learning algorithms that will be the focus of this talk.

    This session will tell you about the rise of voice user interfaces and will give an in-depth look into how Alexa works. The talk will delve into natural language understanding, how utterance data is processed by our systems, and what a developer can do to improve the accuracy of their skill. It will also discuss how Alexa hears and understands you and how error handling works.

  • Liked Santosh Vutukuri

    Santosh Vutukuri - Embedding Artificial Intelligence in Spreadsheet

    20 Mins
    Demonstration
    Intermediate

    In today's world all of us are growing our data science capabilities. Many organizations are comfortable in spreadsheets (e.g. Microsoft Excel, Google Sheets, IBM Lotus, Apache OpenOffice Calc, Apple Numbers, etc.) and seriously do not want to switch to complex coding in R or Python, nor to any other analytics tools available in the market. This proposal demonstrates how we can embed various artificial intelligence and machine learning algorithms into a spreadsheet and get meaningful insights for business or research benefit. This would be helpful for small-scale businesses from a data analysis perspective. This approach, with its user-friendly interface, really creates value in decision making.

  • Liked Dr. Manish Gupta

    Dr. Manish Gupta / Radhakrishnan G - Driving Intelligence from Credit Card Spend Data using Deep Learning

    45 Mins
    Talk
    Beginner

    Recently, we have heard success stories on how deep learning technologies are revolutionizing many industries. Deep Learning has proven hugely successful in some problems involving unstructured data, such as image recognition, speech recognition and natural language processing. However, only limited gains have been shown in traditional structured data domains like BFSI. This talk covers American Express' exciting journey to explore deep learning techniques to generate the next set of data innovations by deriving intelligence from the data within its global, integrated network. Learn how using credit card spend data has helped improve credit and fraud decisions and elevate the payment experience of millions of Card Members across the globe.

  • Liked Dr. Rohit M. Lotlikar

    Dr. Rohit M. Lotlikar - The Impact of Behavioral Biases to Real-World Data Science Projects: Pitfalls and Guidance

    45 Mins
    Talk
    Intermediate

    Data science projects, unlike their software counterparts, tend to be uncertain and rarely fit into a standardized approach. Each organization has its unique processes, tools, culture, data and inefficiencies, and a templatized approach, more common for software implementation projects, rarely fits.

    In a typical data science project, a data science team is attempting to build a decision support system that will either automate human decision making or assist a human in decision making. The dramatic rise in interest in data sciences means the typical data science project has a large proportion of relatively inexperienced members whose learnings draw heavily from academics, data science competitions and general IT/software projects.

    These data scientists learn over time that the real world, however, is very different from the world of data science competitions. In the real world, problems are ill-defined, data may not exist to start with, and it's not just model accuracy, complexity and performance that matter, but also the ease of infusing domain knowledge, interpretability and the ability to provide explanations, the level of skill needed to build and maintain it, the stability and robustness of the learning, ease of integration with enterprise systems, and ROI.

    Human factors play a key role in the success of such projects. Managers making the transition from IT/software delivery to data science frequently do not allow for sufficient uncertainty in outcomes when planning projects. Senior leaders and sponsors are under pressure to deliver outcomes but are unable to make a realistic assessment of payoffs and risks and to set investment and expectations accordingly. This makes the journey and the outcome sensitive to various behavioural biases of project stakeholders. Knowing the typical behavioural biases and pitfalls makes it easier to identify them upfront and take corrective action.

    The speaker brings his nearly two decades of experience working at startups, in R&D and in consulting to lay out these recurring behavioural biases and pitfalls.

    Many of the biases covered are grounded in the speaker's first-hand experience. The talk will provide examples of these biases and suggestions on how to identify and overcome or correct for them.

  • Liked Akshay Bahadur

    Akshay Bahadur - Recognizing Human features using Deep Networks.

    Akshay Bahadur
    SDE-I
    Symantec Softwares
    11 months ago
    Sold Out!
    20 Mins
    Demonstration
    Beginner

    This demo covers some of the work that I have already done since starting my journey in Machine Learning. There are a lot of MOOCs out there for ML and data science, but the most important thing is to apply the concepts learned during the course to solve simple real-world use cases.

    • One of the projects that I did included building a state-of-the-art facial recognition system [VIDEO]. For that, I referred to several research papers; the foundation was given to me in one of the courses itself, however, it took a lot of effort to connect the dots, and that's the fun part.
    • In another project, I made an Emoji Classifier for humans [VIDEO] based on your hand gestures. For that, I used a deep learning CNN model to achieve great accuracy. I took reference from several online resources, which made me realize that the data science community is very helpful and we must make efforts to contribute back.
    • The other projects that I have done using machine learning:
      1. Handwritten digit recognition [VIDEO],
      2. Alphabet recognition [VIDEO],
      3. Apparel classification [VIDEO],
      4. Devnagiri recognition [VIDEO].

    With each project, I have tried to apply one new feature or another to make my model a bit more efficient, be it hyperparameter tuning or just cleaning the data.

    In this demonstration, I would just like to point out that knowledge never goes to waste. The small computer vision applications that I built in college have helped me take on deep learning computer vision tasks. It's always enlightening and empowering to learn new technologies.

    I was recently part of a session on 'Solving real world applications from Machine learning' for the Microsoft Advanced Analytics User Group of Belgium, which was also broadcast across the globe (Meetup Link) [Session Recording].

  • Liked Nirav Shah

    Nirav Shah - Advanced Data Analysis, Dashboards And Visualization

    Nirav Shah
    Founder
    OnPoint Insights
    11 months ago
    Sold Out!
    480 Mins
    Workshop
    Intermediate

    In these two training sessions (4 hours each, 8 hours total), you will learn to use the data visualization and analytics software Tableau Public (free to use) and turn your data into interactive dashboards. You will get hands-on training on how to create stories with dashboards and share these dashboards with your audience. The first session will begin with a quick refresher of basics about design and information literacy, and discussions about best practices for creating charts, as well as a decision-making framework. Whether your goal is to explain an insight or let your audience explore data insights, Tableau's simple drag-and-drop user interface makes the task easy and enjoyable. You will learn what's new in Tableau, and the session will cover the latest and most advanced features of data preparation.

    In the follow-up second session, you will learn to create Table Calculations, Level of Detail Calculations and Animations, and to understand Clustering. You will learn to integrate R and Tableau and how to use R within Tableau. You will also learn mapping, using filters / parameters, and other visual functionalities.

  • Liked Srijak Bhaumik

    Srijak Bhaumik - Let the Machine THINK for You

    Srijak Bhaumik
    Sr. Staff Software Developer
    IBM
    11 months ago
    Sold Out!
    20 Mins
    Demonstration
    Beginner

    Every organization is now focused on business or customer data and trying hard to get actionable insights out of it. Most are either hiring data scientists or up-skilling their existing developers. These people understand the domain or business, the relevant data and the impact, but are not necessarily excellent at data science programming or cognitive computing. To bridge this gap, IBM brings Watson Machine Learning (WML), a service for creating, deploying, scoring and managing machine learning models. WML's machine learning model creation, deployment, and management capabilities are key components of cognitive applications. The essential feature is the "self-learning" capability, personalized and customized for a specific persona - be it the executive or business leader, project manager, financial expert or sales advisor. WML makes cognitive prediction easy with model flow capabilities, where machine learning and prediction can be applied with just a few clicks and work seamlessly without a bunch of coding - for different personas, marking boundaries between developers, data scientists and business analysts. In this session, WML's capabilities will be demonstrated through a specific case study that solves a real-world business problem, along with the challenges faced. To align with the developer community, the architecture of this smart platform will be highlighted to help aspiring developers be aware of the design of a large-scale product.

  • Liked a

    a - Building a Feature Platform to Scale Machine Learning at GO-JEK

    11 months ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Go-Jek, Indonesia’s first billion-dollar startup, has seen an incredible amount of growth in both users and data over the past two years. Many of the ride-hailing company's services are backed by machine learning models. Models range from driver allocation, to dynamic surge pricing, to food recommendation, and process millions of bookings every day, leading to substantial increases in revenue and customer retention.

    Building a feature platform has allowed Go-Jek to rapidly iterate and launch machine learning models into production. The platform allows for the creation, storage, access, and discovery of features. It supports both low latency and high throughput access in serving, as well as high volume queries of historic feature data during training. This allows Go-Jek to react immediately to real world events.

    Find out how Go-Jek implemented their feature platform, and other lessons learned scaling machine learning.
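
    The abstract describes the interface such a platform exposes: feature creation and storage, low-latency access at serving time, and retrieval of historical values for training. The sketch below is only a toy in-memory illustration of that interface, with made-up entity and feature names; it is not Go-Jek's actual platform.

```python
# Toy in-memory feature store illustrating the interface described in the talk:
# features are written with a timestamp, read back at low latency for serving
# (latest value), and queried historically for training. Illustration only.
from bisect import bisect_right
from collections import defaultdict
from datetime import datetime

class ToyFeatureStore:
    def __init__(self):
        # (entity_id, feature_name) -> time-sorted list of (timestamp, value)
        self._data = defaultdict(list)

    def put(self, entity_id: str, feature: str, value, ts: datetime):
        rows = self._data[(entity_id, feature)]
        rows.append((ts, value))
        rows.sort(key=lambda r: r[0])

    def get_online(self, entity_id: str, feature: str):
        """Serving path: return the latest value for an entity's feature."""
        rows = self._data[(entity_id, feature)]
        return rows[-1][1] if rows else None

    def get_historical(self, entity_id: str, feature: str, as_of: datetime):
        """Training path: return the value as it was known at time `as_of`."""
        rows = self._data[(entity_id, feature)]
        idx = bisect_right([ts for ts, _ in rows], as_of)
        return rows[idx - 1][1] if idx else None

# Hypothetical entity and feature names.
store = ToyFeatureStore()
store.put("driver_42", "completed_bookings_7d", 130, datetime(2018, 7, 1))
store.put("driver_42", "completed_bookings_7d", 142, datetime(2018, 7, 8))
print(store.get_online("driver_42", "completed_bookings_7d"))                  # 142
print(store.get_historical("driver_42", "completed_bookings_7d",
                           datetime(2018, 7, 3)))                              # 130
```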

  • Liked Ujjyaini Mitra

    Ujjyaini Mitra - How to build a churn propensity model where churn is single digit, in a non-committal market

    45 Mins
    Case Study
    Intermediate

    When most known classification models fail to predict month-on-month telecom churn for a leading telecom operator, what can we do? Could there be an alternative?

  • Liked Anuj Gupta

    Anuj Gupta - Sarcasm Detection : Achilles Heel of sentiment analysis

    Anuj Gupta
    Scientist
    Intuit
    1 year ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Sentiment analysis has long been the poster-boy problem of NLP and has attracted a lot of research. However, despite so much work in this sub-area, most sentiment analysis models fail miserably in handling sarcasm. The rise in usage of sentiment models for analyzing social data has only exposed this gap further. Owing to the subtlety of the language involved, sarcasm detection is not easy and has fascinated the NLP community.

    Most attempts at sarcasm detection still depend on hand-crafted features which are dataset-specific. In this talk we look at some very recent attempts to leverage advances in NLP for building generic models for sarcasm detection.

    Key takeaways:
    + Challenges in sarcasm detection
    + Deep dive into an end-to-end solution using DL to build generic models for sarcasm detection
    + Shortcomings and the road forward
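
    The talk's end-to-end solution is not reproduced here; the sketch below is a generic Keras text classifier (embedding + LSTM) of the kind often used as a deep-learning baseline for sarcasm detection. The texts and labels are hypothetical placeholders.

```python
# Generic deep-learning baseline for sarcasm detection (binary text classification):
# tokenise the text, feed word indices through an embedding layer and an LSTM, and
# predict a sarcastic/non-sarcastic label. Placeholder data; not the talk's solution.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["oh great, another monday", "the weather is lovely today"]  # placeholder data
labels = np.array([1, 0])                                            # 1 = sarcastic

tokenizer = Tokenizer(num_words=10000, oov_token="<unk>")
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=30)

model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
print(model.predict(x))   # probability of sarcasm per input text
```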

  • Liked Dr. Jennifer Prendki

    Dr. Jennifer Prendki - Recognition and Localization of Parking Signs using Deep Learning

    45 Mins
    Case Study
    Intermediate

    Drivers in large cities such as San Francisco are often the cause of traffic jams when they slow down and circle the streets attempting to decipher the meaning of parking signs and avoid tickets. This endangers the safety of pedestrians and harms the overall transportation environment.

    In this talk, I will present an automated model developed by the Machine Learning team at Figure Eight which exploits multiple Deep Learning techniques to predict the presence of parking signs from street-level imagery and find their actual location on a map. Multiple APIs are then applied to read and extract the rules from the signs. The obtained map of the digitized parking rules, along with the GPS information of a driver, can ultimately be used to build functional products that help people drive and park more safely.
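
    A rough sketch of such a detect-then-read pipeline, using a generic pretrained detector and OCR; this is not Figure Eight's model. In practice the detector would be fine-tuned on parking-sign data, and the image path, class filtering and confidence threshold below are placeholder assumptions.

```python
# Rough sketch of a street-imagery pipeline: a generic pretrained object detector
# proposes candidate regions, and OCR reads the text of each candidate sign.
# Not Figure Eight's model; the image path and threshold are placeholders, and a
# real system would use a detector fine-tuned specifically for parking signs.
import pytesseract
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

image = Image.open("street_view.jpg").convert("RGB")
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

with torch.no_grad():
    detections = detector([to_tensor(image)])[0]

for box, score in zip(detections["boxes"], detections["scores"]):
    if score < 0.8:          # arbitrary confidence threshold
        continue
    x1, y1, x2, y2 = [int(v) for v in box.tolist()]
    crop = image.crop((x1, y1, x2, y2))
    sign_text = pytesseract.image_to_string(crop)   # read the rules on the sign
    print(f"Candidate sign at ({x1}, {y1}, {x2}, {y2}): {sign_text.strip()}")
```
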
  • Liked Dr. Jennifer Prendki

    Dr. Jennifer Prendki / Kiran Vajapey - Introduction to Active Learning

    Dr. Jennifer Prendki
    VP of Machine Learning
    Figure Eight
    Kiran Vajapey
    HCI Developer
    Figure Eight
    10 months ago
    Sold Out!
    480 Mins
    Workshop
    Intermediate

    The greatest challenge when building a high-performance model isn't choosing the right algorithm or doing hyperparameter tuning: it is getting high-quality labeled data. Without good data, no algorithm, even the most sophisticated one, will deliver the results needed for real-life applications. And with most modern algorithms (such as Deep Learning models) requiring huge amounts of data to train, things aren't going to get better any time soon.

    Active Learning is one of the possible solutions to this dilemma, but is, quite surprisingly, left out of most Data Science conferences and Computer Science curricula. This workshop hopes to address the Machine Learning community's lack of awareness of the important topic of Active Learning.

    Link to data used in this course: https://s3-us-west-1.amazonaws.com/figure-eight-dataset/active_learning_odsc_india/Active_Learning_Workshop_data.zip
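
    As a flavour of the topic, the sketch below shows pool-based active learning with uncertainty (least-confident) sampling on a synthetic scikit-learn dataset. It is a generic illustration, not the workshop's own material or dataset.

```python
# Pool-based active learning with uncertainty sampling on a synthetic dataset:
# start from a small labelled seed set, repeatedly pick the pool examples the model
# is least certain about, "label" them (the labels are already known here), and
# retrain. Generic illustration, not the workshop's material.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.RandomState(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # initial seed labels
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(10):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)                     # least-confident score
    query = [pool[i] for i in np.argsort(uncertainty)[-10:]]  # 10 most uncertain
    labeled.extend(query)                                     # oracle labels these
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy = {model.score(X, y):.3f}")
```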

  • Liked Dr. Savita Angadi

    Dr. Savita Angadi - Connected Vehicle – is far more than just the car…

    Dr. Savita Angadi
    Sr. Analytical Consultant
    SAS
    11 months ago
    Sold Out!
    45 Mins
    Talk
    Advanced


    For many IoT use cases there is a real challenge in streaming large amounts of data in real time, and the connected vehicle is no exception. Cars and trucks can generate terabytes of data daily, and connectivity can be spotty, especially in remote areas. To address this issue, companies will want to move the analysis to the edge, onto the device where the data is generated. We will walk through a case in which a streaming engine is installed on a gateway on a commercial vehicle. Data is analyzed locally on the vehicle, as it is generated, and alerts are communicated via a cell connection. Models can be downloaded when a vehicle comes in for service, or over the air. The idea is to use data from the vehicle, such as model, horsepower, oil temperature, etc., to build a decision tree to predict our target, a turbo fault; a minimal sketch of such a model is shown below. Decision trees are nice in that they lay out the rules of your model clearly. In this case the model was predictive for certain engine horsepower ratings, time in service, model, and oil temperatures. The model generated acceptable accuracy with a 30-day window, plenty of time to act on the alert. Now, in order to capture the value of this insight, we need to know immediately when a signal is detected, so the model runs natively on the vehicle, in our on-board analytics engine.
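
    The turbo-fault decision tree can be sketched with scikit-learn. The data frame below is synthetic and the feature names are taken from the description above (horsepower, time in service, oil temperature, model); it is not SAS's actual deployment or data.

```python
# Sketch of the turbo-fault decision tree described above, trained on synthetic data
# with the features mentioned in the talk (engine horsepower, time in service, model,
# oil temperature). Data and thresholds are made up; the real model runs in the
# on-board streaming/analytics engine on the vehicle.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.RandomState(0)
n = 1000
data = pd.DataFrame({
    "horsepower": rng.choice([400, 450, 500, 550], size=n),
    "time_in_service_days": rng.randint(0, 2000, size=n),
    "oil_temp_c": rng.normal(95, 10, size=n),
    "model_code": rng.randint(0, 4, size=n),
})
# Synthetic target: turbo faults more likely for high-hour, hot-running engines.
data["turbo_fault"] = ((data.time_in_service_days > 1500) &
                       (data.oil_temp_c > 100)).astype(int)

features = ["horsepower", "time_in_service_days", "oil_temp_c", "model_code"]
tree = DecisionTreeClassifier(max_depth=3).fit(data[features], data["turbo_fault"])

# Decision trees lay out their rules explicitly, which is what makes them easy to
# deploy and audit on an edge device.
print(export_text(tree, feature_names=features))
```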

  • Liked Saurabh Deshpande

    Saurabh Deshpande - Introduction to reinforcement learning using Python and OpenAI Gym

    Saurabh Deshpande
    Sr. Technical Consultant
    SAS
    11 months ago
    Sold Out!
    90 Mins
    Workshop
    Advanced

    Reinforcement Learning algorithms are becoming more and more sophisticated every day, as is evident from the recent wins of AlphaGo and AlphaGo Zero (https://deepmind.com/blog/alphago-zero-learning-scratch/). OpenAI has provided the OpenAI Gym toolkit for research and development of Reinforcement Learning algorithms.

    In this workshop, we will focus on an introduction to the basic concepts and algorithms in Reinforcement Learning, along with hands-on coding.

    Content

    • Introduction to Reinforcement Learning concepts and terminologies
    • Setting up OpenAI Gym and other dependencies
    • Introducing OpenAI Gym and its APIs
    • Implementing simple algorithms using a couple of OpenAI Gym environments (a minimal interaction loop is sketched below)
    • Demo of Deep Reinforcement Learning using one of the OpenAI Gym Atari games
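
    As a taste of the hands-on part, here is a minimal interaction loop with a classic-control Gym environment, using the pre-Gymnasium gym API and a random policy rather than a trained agent; it only shows the environment API (reset, step, action_space, reward, done).

```python
# Minimal OpenAI Gym interaction loop (classic pre-Gymnasium API): run a few episodes
# of CartPole with random actions. A random policy is used here only to show the
# environment API; the workshop builds up from loops like this to actual RL algorithms.
import gym

env = gym.make("CartPole-v1")

for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()               # random policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: total reward = {total_reward}")

env.close()
```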

  • Liked Ujjyaini Mitra

    Ujjyaini Mitra - When the Art of Entertainment ties the knot with Science

    20 Mins
    Talk
    Advanced

    Apparently, entertainment is a pure art form, but there is a huge part of it that science can back. AI can drive many human-intensive tasks in the media industry, moving gut-based decisions to data-driven decisions. Can we create a promo of a movie through AI? How about knowing which part of a video is causing disengagement among our audiences? Could AI help content editors? How about assisting script writers through AI?

    I will talk about a few specific experiments done especially on Voot Original content - on binging, hooking, content editing, audience disengagement, etc.

  • Liked Gunjan Juyal

    Gunjan Juyal - Building a Case for a Standardized Data Pipeline for All Your Organizational Data

    Gunjan Juyal
    Sr. Consultant
    XNSIO
    11 months ago
    Sold Out!
    20 Mins
    Experience Report
    Beginner

    Organizations of all sizes and domains today face a data explosion problem, driven by a proliferation of data management tools and techniques. A very common scenario is the creation of silos of data and data-products, which increases the system's complexity across the whole data lifecycle - right from data modeling to storage and processing infrastructure.

    High complexity = high system maintenance overheads = sluggish decision making. Another side-effect of this is divergence of the implemented system’s behaviour from high-level business objectives.

    In this talk we look at Zeta's experience as a case study for reducing this complexity by defining and tackling various concerns at well-defined stages, so as to prevent a build-up of complexity.

  • Liked Dr. Savita Angadi

    Dr. Savita Angadi - What Chaos and Fractals has to do with Machine Learning?

    Dr. Savita Angadi
    Sr. Analytical Consultant
    SAS
    11 months ago
    Sold Out!
    45 Mins
    Talk
    Advanced

    The talk will cover how Chaos and Fractals are connected to machine learning. Artificial Intelligence is an attempt to model the characteristics of the human brain. This has led to models that use connected elements, essentially neurons. Most of the biological-system and simulation-related developments in neural networks have practical results from a computer science point of view. Chaos Theory has a good chance of being one of these developments. The brain itself is a good example of a chaotic system. Several attempts have been made to take advantage of chaos in artificial neural systems to reproduce its benefits, and they have met with quite a bit of success.