  • Joy Mustafi

    Joy Mustafi - The Artificial Intelligence Ecosystem driven by Data Science Community

    45 Mins
    Keynote
    Intermediate

    Abstract. Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users' understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs: they can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing whose goal is more accurate models of how the human brain or mind senses, reasons, and responds to stimuli. It is an interdisciplinary field, studying how to create computers and software capable of intelligent behavior, in which a number of sciences and professions converge: computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience and biology. The project's systems are designed around four features:

    • Adaptive: they MUST learn as information changes and as goals and requirements evolve, resolve ambiguity and tolerate unpredictability, and be engineered to feed on dynamic data in real time.
    • Interactive: they MUST interact easily with users, so that users can define their needs comfortably, and with other processors, devices and services as well as with people.
    • Iterative and stateful: they MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete, and they MUST remember previous interactions and return information suitable for the specific application at that point in time.
    • Contextual: they MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal.

    They may draw on multiple sources of information, both structured and unstructured digital information as well as sensory inputs (visual, gestural, auditory, or sensor-provided). A set of cognitive systems is implemented and demonstrated as the project J+O=Y.

  • Dakshinamurthy V Kolluru, Ph.D.

    Dakshinamurthy V Kolluru, Ph.D. - ML and DL in Production: Differences and Similarities

    45 Mins
    Talk
    Beginner

    While architecting a data-based solution, one needs to approach the problem differently depending on the specific strategy being adopted. In traditional machine learning (ML), the focus is mostly on feature engineering; in deep learning (DL), the emphasis shifts to tagging larger volumes of data, with less focus on feature development. Similarly, synthetic data is far more useful in DL than in ML, so the data strategies can differ significantly. Both approaches require very similar approaches to error analysis, but in most development processes those approaches are not followed, leading to substantial delays in reaching production. Hyperparameter tuning for performance improvement requires different strategies in ML and DL solutions because of the longer training times of DL systems. Transfer learning is a very important aspect to evaluate in building any state-of-the-art system, whether ML or DL. Last but not least is understanding the biases the system is learning: deeply non-linear models require special attention here, as they can learn highly undesirable features.

    In our presentation, we will focus on all the above aspects with suitable examples and provide a framework for practitioners for building ML/DL applications.
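
    The hyperparameter-tuning point can be made concrete with a budgeted random search. The sketch below is illustrative only (the toy objective and trial counts are invented, not from the talk): a fast-training ML model can afford many trials, while a slow-training DL model gets only a few, so the search strategy must differ.

```python
import random

def random_search(evaluate, space, n_trials, seed=0):
    """Sample hyperparameters uniformly at random; keep the best configuration."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy stand-in for a validation loss (invented for illustration).
toy_loss = lambda cfg: (cfg["lr"] - 0.1) ** 2 + (cfg["reg"] - 0.01) ** 2
space = {"lr": (0.0, 1.0), "reg": (0.0, 0.1)}

# A fast-training "ML" model can afford many trials...
ml_cfg, ml_loss = random_search(toy_loss, space, n_trials=200)
# ...while a slow-training "DL" model may only get a handful,
# pushing practitioners toward coarser searches or transfer learning.
dl_cfg, dl_loss = random_search(toy_loss, space, n_trials=5)
```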

  • Harish Kashyap K

    Harish Kashyap K / Ria Aggarwal - Probabilistic Graphical Models, HMMs using PGMPY

    90 Mins
    Workshop
    Intermediate

    PGMs are generative models that are extremely useful for modeling stochastic processes. I shall talk about how fraud models and credit risk models can be built using Bayesian networks. Generative models are great alternatives to deep neural networks, which cannot solve such problems. This talk focuses on Bayesian networks, Markov models, HMMs and their applications. Many areas of ML need to explain causality, and PGMs offer features that enable causal explanations. This will be a hands-on workshop where attendees learn the basics of graphical models and HMMs with the open-source library pgmpy, to which we are contributors. HMMs are especially useful in state-space problems; an example is thermostat control, where a thermostat can enter various temperature states. This is an advanced area of ML that is helpful to most researchers and the ML community. The workshop covers the prerequisites for HMMs, including probability, generative models and Markov theory, and students will build an HMM that models a thermostat using pgmpy.
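
    As a taste of what the workshop builds, here is the forward algorithm for a two-state thermostat HMM in plain Python. The transition and emission numbers are made up for illustration, and the workshop itself uses pgmpy rather than hand-rolled code.

```python
# Two-state thermostat HMM: hidden states "cold"/"hot", sensor readings
# "low"/"high". All probabilities below are invented for illustration.
states = ["cold", "hot"]
start = {"cold": 0.6, "hot": 0.4}              # initial state distribution
trans = {"cold": {"cold": 0.7, "hot": 0.3},    # state transition probabilities
         "hot":  {"cold": 0.4, "hot": 0.6}}
emit = {"cold": {"low": 0.9, "high": 0.1},     # emission probabilities
        "hot":  {"low": 0.2, "high": 0.8}}

def forward(observations):
    """Return P(observations) under the HMM via the forward recursion."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

likelihood = forward(["low", "low", "high"])
```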

  • Willem Pienaar

    Willem Pienaar - Building a Feature Platform to Scale Machine Learning at GO-JEK

    45 Mins
    Talk
    Intermediate

    Go-Jek, Indonesia’s first billion-dollar startup, has seen an incredible amount of growth in both users and data over the past two years. Many of the ride-hailing company's services are backed by machine learning models. Models range from driver allocation, to dynamic surge pricing, to food recommendation, and process millions of bookings every day, leading to substantial increases in revenue and customer retention.

    Building a feature platform has allowed Go-Jek to rapidly iterate and launch machine learning models into production. The platform allows for the creation, storage, access, and discovery of features. It supports both low latency and high throughput access in serving, as well as high volume queries of historic feature data during training. This allows Go-Jek to react immediately to real world events.

    Find out how Go-Jek implemented their feature platform, and other lessons learned scaling machine learning.
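
    The create/store/access cycle can be pictured with a toy in-memory store: online serving returns the latest value at low latency, while training reads the full history. This is only a conceptual sketch with hypothetical names; GO-JEK's production platform is of course far more involved.

```python
import time

class FeatureStore:
    """Toy in-memory feature store (illustrative sketch only)."""
    def __init__(self):
        # (entity_id, feature_name) -> list of (timestamp, value)
        self._history = {}

    def ingest(self, entity_id, feature_name, value, ts=None):
        key = (entity_id, feature_name)
        self._history.setdefault(key, []).append((ts or time.time(), value))

    def get_online(self, entity_id, feature_name):
        """Serving path: return the most recent value."""
        return self._history[(entity_id, feature_name)][-1][1]

    def get_history(self, entity_id, feature_name):
        """Training path: return the full time series."""
        return list(self._history[(entity_id, feature_name)])

store = FeatureStore()
store.ingest("driver_42", "completed_trips", 10, ts=1.0)
store.ingest("driver_42", "completed_trips", 11, ts=2.0)
```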

  • Veena B. Mendiratta

    Veena B. Mendiratta - Network Anomaly Detection and Root Cause Analysis

    45 Mins
    Talk
    Intermediate

    Modern telecommunication networks are complex: they consist of many components, generate massive amounts of log data (volume, velocity, variety), and are designed for high reliability, since customers expect always-on network access. It can be difficult to detect network failures with typical KPIs because the problems may be subtle, with mild symptoms (a small degradation in performance). In this talk on network anomaly detection we present the application of multivariate unsupervised learning techniques for anomaly detection, and root cause analysis using finite state machines. Once anomalies are detected, the message patterns in the logs of the anomaly data are compared to those of the normal data to determine where the problems are occurring. Additionally, the error codes in the anomaly data are analyzed to better understand the underlying problems. We also present the data preprocessing methodology and the feature selection methods used to determine the minimum set of features that can provide information on the network state. The algorithms are developed and tested with data from a 4G network. The impact of applying such methods is proactive detection and root cause analysis of network anomalies, thereby improving network reliability and availability.
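
    To illustrate the idea of flagging subtle degradations, here is a simple per-feature z-score screen over KPI windows. This is a stand-in sketch, not the multivariate techniques of the talk, and the KPI numbers are invented.

```python
from statistics import mean, stdev

def zscore_anomalies(rows, threshold=3.0):
    """Flag rows where any feature deviates more than `threshold` standard
    deviations from its column mean (a deliberately simple screen)."""
    cols = list(zip(*rows))
    mus = [mean(c) for c in cols]
    sigmas = [stdev(c) or 1.0 for c in cols]   # guard against zero variance
    flagged = []
    for i, row in enumerate(rows):
        if any(abs(x - mu) / sigma > threshold
               for x, mu, sigma in zip(row, mus, sigmas)):
            flagged.append(i)
    return flagged

# KPI windows: (throughput, error_rate); the last row is a degradation.
windows = [(100, 0.01), (101, 0.012), (99, 0.011), (100, 0.010), (98, 0.013),
           (100, 0.011), (102, 0.009), (99, 0.012), (101, 0.010), (60, 0.200)]
print(zscore_anomalies(windows, threshold=2.0))  # [9]
```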

  • Naoya Takahashi
    Senior researcher
    Sony
    45 Mins
    45 Mins
    Demonstration
    Intermediate

    In evolutionary history, the evolution of sensory organs and the brain played a very important role in helping species survive and prosper. Extending human abilities to achieve a better life and a more efficient, sustainable world is a goal of artificial intelligence. Although recent advances in machine learning enable machines to perform as well as, or even better than, humans in many intelligent tasks, including automatic speech recognition, many aspects remain to be addressed to bridge the semantic gap and achieve seamless interaction with machines. Auditory intelligence is a key technology for enabling natural human-machine interaction and extending human auditory ability. In this talk, I am going to address three aspects of it:

    (1) non-speech audio recognition,

    (2) video highlight detection,

    (3) a technology for surpassing human auditory ability, namely source separation.

  • Asha Saini

    Asha Saini - Using Open Data to Predict Market Movements

    20 Mins
    Talk
    Intermediate

    As companies progress on their digital transformation journeys, technology becomes a strategic business decision. In this realm, consulting firms such as Gartner exert tremendous influence on technology purchasing decisions. The ability of these firms to predict the movement of market players will provide vendors with competitive benefits.

    We will explore how, with the use of publicly available data sources, IT industry trends can be mimicked and predicted.

    Big Data enthusiasts learned quickly that there are caveats to making Big Data useful:

    • Data source availability
    • Producing meaningful insights from publicly available sources

    Working with large data sets that change frequently can become expensive and frustrating. The learning curve is steep and the discovery process long. Challenges range from selecting efficient tools to parse unstructured data, to developing a vision for interpreting and using the data for competitive advantage.

    We will describe how the archive of billions of web pages, captured monthly since 2008 and available for free analysis on AWS, can be used to mimic and predict trends reflected in industry-standard consulting reports.

    There is a potential opportunity in this process to apply machine learning to tune the models and let them self-learn so they optimize automatically. Gartner publishes reports on over 70 topic areas; an automated tool that can analyze across all of them, helping us quickly understand major trends across today's landscape and plan for those to come, would be invaluable to many organizations.
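
    The core counting idea behind mimicking trend reports can be sketched as follows. The `pages` records and the search term are invented stand-ins for text extracted from the monthly web archive.

```python
from collections import Counter
import re

def mention_trend(pages, term):
    """Count case-insensitive mentions of `term` per crawl month.
    `pages` is an iterable of (month, page_text) pairs."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    counts = Counter()
    for month, text in pages:
        counts[month] += len(pattern.findall(text))
    return dict(counts)

# Hypothetical snippets standing in for archived web pages.
pages = [
    ("2018-01", "Kubernetes is gaining ground; kubernetes clusters everywhere."),
    ("2018-01", "Still evaluating virtualization options."),
    ("2018-02", "We migrated to Kubernetes last quarter."),
]
print(mention_trend(pages, "kubernetes"))  # {'2018-01': 2, '2018-02': 1}
```

    Plotting such per-month counts for competing technologies over ten years of crawls is one simple way to approximate the adoption curves that consulting reports describe.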

  • Anand Chitipothu

    Anand Chitipothu - DevOps for Data Science: Experiences from building a cloud-based data science platform

    20 Mins
    Experience Report
    Beginner

    Productionizing data science applications is non-trivial. Suboptimal practices, the people-heavy traditional approaches, and developers' love of complex solutions for the sake of using cool technologies make the situation even worse.

    There are two key ingredients required to streamline this: “the cloud” and “the right level of devops abstractions”.

    In this talk, I’ll share the experiences of building a cloud-based platform for streamlining data science and how such solutions can greatly simplify building and deploying data science and machine learning applications.

  • Sohan Maheshwar

    Sohan Maheshwar - It’s All in the Data: The Machine Learning Behind Alexa’s AI Systems

    Alexa Evangelist
    Amazon
    45 Mins
    Talk
    Intermediate

    Amazon Alexa, the cloud-based voice service that powers Amazon Echo, provides access to thousands of skills that enable customers to voice-control their world - whether it's listening to music, controlling smart home devices, catching up on the news or even ordering a pizza. Alexa developers use advanced natural language understanding capabilities like built-in slot and intent training, entity resolution, and dialog management. This natural language understanding is powered by advanced machine learning algorithms, which will be the focus of this talk.

    The session will cover the rise of voice user interfaces and give an in-depth look into how Alexa works. The talk will delve into natural language understanding, how utterance data is processed by our systems, and what a developer can do to improve the accuracy of their skill. It will also discuss how Alexa hears and understands you, and how error handling works.
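
    The intent/slot structure that skill developers work with can be illustrated with a toy matcher. Alexa's real NLU is machine-learned, not pattern matching, and the sample utterances and intent names below are hypothetical.

```python
import re

# Sample utterances with {slot} placeholders, the kind of training data a
# skill developer supplies. (Illustrative only.)
SAMPLES = {
    "PlayMusicIntent": ["play {song}", "put on {song}"],
    "OrderPizzaIntent": ["order a {size} pizza"],
}

def match_intent(utterance):
    """Return (intent, slot values) for the first matching sample utterance."""
    for intent, patterns in SAMPLES.items():
        for pat in patterns:
            # Turn "play {song}" into the regex "play (?P<song>.+)".
            regex = re.sub(r"\{(\w+)\}", r"(?P<\1>.+)", pat)
            m = re.fullmatch(regex, utterance.lower())
            if m:
                return intent, m.groupdict()
    return None, {}

intent, slots = match_intent("play bohemian rhapsody")
```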

  • Savita Angadi

    Savita Angadi - What Do Chaos and Fractals Have to Do with Machine Learning?

    Senior Analytical Consultant
    SAS
    45 Mins
    Talk
    Advanced

    The talk will cover how chaos and fractals are connected to machine learning. Artificial intelligence is an attempt to model the characteristics of the human brain, which has led to models built from connected elements, essentially neurons. Most of the biological-system and simulation-related developments in neural networks have practical results from a computer science point of view, and chaos theory has a good chance of being one of these developments. The brain itself is a good example of a chaotic system. Several attempts to take advantage of chaos in artificial neural systems, to reproduce its benefits, have met with quite a bit of success.

  • murughan palaniachari

    murughan palaniachari - AIOps - DevOps in Artificial Intelligence & Data Science

    DevOps Coach
    euromonitor
    20 Mins
    Talk
    Beginner

    In this session you will learn how to adopt DevOps values, principles and practices in the AI world. DevOps culture increases collaboration among data engineering, data science/AI engineering, and operations teams. DevOps enables faster delivery of high-quality products through process improvement and technology adoption: cloud, automation, feedback loops, self-service, and shift-left security.

  • Hariraj K

    Hariraj K - Big Data and Open Data as Tools for Empowering People

    Co-Founder
    FOSSMEC
    20 Mins
    Talk
    Beginner

    With limited transparency, governments tend to become less accessible to the public. While data science dominates almost every day-to-day industry, its possibilities in administration and governance are yet to be exploited. In this presentation, I address how emerging concepts such as open data and big data can be used to strengthen democracies and help governments serve the public better. We will explore the various ways big data and open data can be used to bridge income inequalities and implement proper resource and service allocation. We will also look at different initiatives taken by individuals and communities and the impact those initiatives have had on governance, with emphasis on the concepts of open governance and government open data.

  • Manish Gupta

    Manish Gupta / RADHAKRISHNAN G - Driving Intelligence from Credit Card Spend Data using Deep Learning

    45 Mins
    Talk
    Beginner

    Recently, we have heard success stories of how deep learning technologies are revolutionizing many industries. Deep learning has proven hugely successful on problems in unstructured data domains like image recognition, speech recognition and natural language processing. However, only limited gains have been shown in traditional structured data domains like BFSI. This talk covers American Express' exciting journey exploring deep learning techniques to generate the next set of data innovations by deriving intelligence from the data within its global, integrated network. Learn how using credit card spend data has helped improve credit and fraud decisions and elevate the payment experience of millions of Card Members across the globe.

  • SANTOSH VUTUKURI

    SANTOSH VUTUKURI - Embedding ML algorithms in Spreadsheet

    20 Mins
    Demonstration
    Intermediate

    Implementing various algorithms in Excel may help small-scale businesses in decision making. Python and R are not the only tools for data analytics; of course they enable large-scale data processing, but for small businesses (e.g. a corner grocery store) there needs to be something handy that implements the standard scientific algorithms: k-means (for classifying customers), principal component analysis (for prioritizing and reducing features or dimensions), similarity measures (spell checkers), visualizations, recommendations based on market basket analysis, co-occurrence analysis, neural networks and many more. These algorithms, implemented for small businesses with a user-friendly interface, create real value in decision making.
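
    As an example of how such an algorithm maps onto a spreadsheet, here is one-dimensional k-means (Lloyd's algorithm) in plain Python; each step corresponds to spreadsheet formulas (a distance column per centroid, then a per-cluster average). The spend figures are invented.

```python
def kmeans_1d(values, k, iters=20):
    """Lloyd's algorithm on scalar data, e.g. customers' monthly spend."""
    svals = sorted(values)
    # Deterministic init: spread the initial centers across the sorted data.
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assignment step: nearest center (a distance column in a sheet).
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Update step: per-cluster mean (AVERAGEIF in a sheet).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

spend = [12, 15, 14, 90, 95, 88, 300, 310]
print(kmeans_1d(spend, k=3))
```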

  • Shekhar Prasad Rajak

    Shekhar Prasad Rajak - Communicate with the 'Data' in Ruby & Ruby Web Apps

    20 Mins
    Talk
    Intermediate

    'The more we use it, the more useful it will be.' Rather than switching to other languages for data science work, why not use the powerful existing gems continuously growing under the Ruby Science Foundation (SciRuby): daru and its plugin gems daru-io and daru-view.

    In this talk, you will learn how to process, analyze, and interactively visualize data in Ruby, and how to represent data in Ruby web applications, with features for data analysts and predictive analysts that you won't find in other languages.

  • Raghavendra M R

    Raghavendra M R - Processing EEG signals for Brain Computer Interface

    SSE
    ThoughtWorks
    20 Mins
    Talk
    Intermediate

    This talk will be about how to process EEG signals to build a comprehensive Brain-Computer Interface (hereafter referred to as BCI) system.

    Electroencephalography (EEG) is perhaps one of the simplest ways to understand brain activity. EEG records the electrical signals produced by neurons, amplifies them, and shows them as waveforms. If we can understand these waveforms, then we can identify what our brain is trying to do. Example: consider operating a music system, say increasing and decreasing the volume. If we can learn the waveforms for volume increase and volume decrease, we can control the volume without actually touching any device. This is the fundamental idea behind BCI.

    Feature extraction is a key aspect of any machine learning use case; in signal processing it becomes even more complicated, as we have no direct way to visualize the features, so several mathematical concepts and theorems help us analyze them.

    As part of this talk, I will cover several mathematical concepts, such as the Fourier transform, wavelets and convolution, which help in understanding the generated signals.
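
    The role of the Fourier transform can be shown on a synthetic signal. The sketch below uses a naive O(n^2) DFT (real libraries use an FFT) to recover the dominant frequency of a made-up 10 Hz "alpha-band" sine; the sampling rate and frequency are invented for illustration.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform, O(n^2); libraries use an FFT instead."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Synthetic "EEG-like" trace: a 10 Hz sine sampled at 128 Hz for one second.
fs, n = 128, 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]

mags = dft_magnitudes(signal)
# Only the first n/2 bins are unique for a real signal; bin k maps to k*fs/n Hz.
dominant_hz = max(range(n // 2), key=lambda k: mags[k]) * fs / n
print(dominant_hz)  # 10.0
```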

    I will be talking about available Python libraries which we can use in our applications, showing code snippets and plotting graphs for better understanding.

    I will also touch upon how to build a machine learning model for prediction, aligned to BCI (using a neural network), and how to evaluate the model.

  • Dr. Om Deshmukh

    Dr. Om Deshmukh / Samiran Roy - Reinforcement Learning: Demystifying the hype to successful enterprise applications

    45 Mins
    Case Study
    Beginner

    In 2014, Google acquired DeepMind, a small, London-based AI startup, for $500 million. DeepMind was conducting research on AI that would learn to play computer games in a fashion similar to humans. In 2015, DeepMind published a paper in Nature describing a learning algorithm called Deep Q-Learning, which achieved superhuman performance on a diverse range of Atari 2600 games [1]. They achieved this without any domain-specific engineering: the algorithm took only the raw game images as input and was guided by the game score. Believed by many to be the first step toward artificial general intelligence, DeepMind achieved this by pioneering the fusion of two fields of research: Reinforcement Learning (RL) and Deep Learning.

    RL is a learning paradigm inspired by operant conditioning which closely mimics the human learning process. It shifts the focus from ML-based pattern recognition to learning through trial and error via interaction with an environment, guided by a reward signal or reinforcement. Imagine an agent teaching itself how to steer by navigating the streets of Grand Theft Auto, and transferring this knowledge to a driverless car [2]. Think of a team of autonomous robots collaborating to outwit their opponents in a game of robot soccer [3]. Any practical real-world application suffers from the curse of dimensionality (a camera mounted on a robot feeding it a 64x64 grayscale image has 256^4096 possible inputs). A deep neural network automatically learns compact and efficient feature representations from noisy, high-dimensional sensory inputs in its hidden layers, giving RL algorithms the ability to scale up and produce incredible results in dynamic and complex domains.

    The most notable example of this is AlphaGo Zero [4], the latest version of AlphaGo, the first computer program to defeat a world champion at the game of Go. AlphaGo Zero uses RL to learn by playing games against itself, starting from completely random play, and quickly surpasses human expert performance. Not only is the game extremely complex (a 19x19 Go board can represent 10^170 states of play), but accomplished Go players often struggle to evaluate whether a certain move is good or bad. Most AI researchers were astonished by this feat, as it was speculated that it would take at least a decade for a computer to play Go at an expert human level.

    RL, which was largely confined to academia for several decades, is now beginning to see successful applications and products in industry, in fields such as robotics, automated trading systems, manufacturing, energy, dialog systems and recommendation engines. For most companies it is an exciting prospect due to the AI hype, but very few organizations have identified use cases where RL can play a valuable role. In reality, RL is best suited to a niche class of problems where it can help automate some tasks (or augment a human expert). The focus of this presentation will be a practical introduction to the RL setting, how to formulate problems as RL, and successful use cases in industry.

    [1] https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf
    [2] https://www.technologyreview.com/s/602317/self-driving-cars-can-learn-a-lot-by-playing-grand-theft-auto/
    [3] http://www.robocup.org/
    [4] https://deepmind.com/blog/alphago-zero-learning-scratch/
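
    The RL setting described above (agent, environment, reward signal, trial and error) can be reduced to a minimal tabular Q-learning loop. The corridor environment below is a textbook toy, invented for illustration, not one of the applications from the talk.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: start at state 0, reward 1 for
    reaching the rightmost state. A minimal instance of the RL setting."""
    rng = random.Random(seed)
    # Optimistic initial values encourage systematic exploration early on.
    q = [[1.0, 1.0] for _ in range(n_states)]  # q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        for _ in range(100):  # cap episode length
            # Epsilon-greedy action selection: explore with probability eps.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the best action in s2.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return q

q = q_learning()
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(4)]
```

    After training, the greedy policy moves right in every non-terminal state, which is the optimal behavior for this corridor.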

  • Hariraj K

    Hariraj K - Importing and cleaning data with R

    Co-Founder
    FOSSMEC
    45 Mins
    Workshop
    Intermediate

    We are experiencing a tremendous explosion in big data, and a significant share of this data is unfit for direct analysis or machine learning. This presentation emphasizes web scraping with powerful R packages such as httr and tools like XPath. The session will also introduce the principles of data cleaning. By the end, you will be able to import raw data from most websites and transform it into proper, robust datasets; in the course of the session, we will build such a dataset, ready for analysis, by applying these concepts.

  • Sai Charan J

    Sai Charan J - Self Learning - Data Science

    Data Scientist
    MTW Labs
    45 Mins
    Workshop
    Beginner

    For people from a non-technical background, I recommend formal academic programs. Raising the bar beyond that is the self-taught data scientist! These people are trendsetters who go deep and play with data. They love data crunching and are found solving real-time problems!

    If that's you, then let's wave our hands!

  • Kuldeep Jiwani

    Kuldeep Jiwani - Topological space creation and Clustering at BigData scale

    45 Mins
    Talk
    Intermediate

    In the big data world we regularly handle TBs of data and have to extract meaningful information from it, applying many unsupervised machine learning techniques. Two important steps in this process are building a topological space that captures the natural geometry of the data, and then clustering in that topological space to obtain meaningful clusters.

    Regular Euclidean geometry is flat and assumes all dimensions are equally proportional, but in real-world data this is seldom the case. So we need to define our own distance function and build a distance matrix that is then used to obtain clusters in the data.

    Due to the sheer volume of data, this exercise becomes extremely hard with traditional methods. This talk will cover various big data techniques and showcase, via Apache Spark code, how to overcome these hurdles.
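
    A custom distance function and the matrix built from it can be sketched in a few lines. The weighted metric and the sample points below are illustrative only (one simple way to correct for dimensions on very different scales); note the matrix has O(n^2) entries, which is exactly what becomes intractable at TB scale and motivates the Spark techniques the talk covers.

```python
import math

def weighted_distance(a, b, weights):
    """Per-dimension weighted Euclidean distance (an illustrative choice)."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def distance_matrix(points, weights):
    """Dense pairwise matrix: O(n^2) entries, the scaling bottleneck."""
    n = len(points)
    return [[weighted_distance(points[i], points[j], weights) for j in range(n)]
            for i in range(n)]

# Hypothetical features on wildly different scales: income vs. age.
points = [(500000, 30), (520000, 32), (90000, 55)]
weights = [1e-10, 1.0]  # down-weight the large-scale dimension
dm = distance_matrix(points, weights)
```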