  • Grant Sanderson - Concrete before Abstract

    Grant Sanderson
    Creator
    3blue1brown
    45 Mins
    Keynote
    Intermediate

    This talk outlines a principle of technical communication which seems simple at first but is devilishly difficult to abide by. It's a principle I try to keep in mind when creating videos aimed at making math and related fields more accessible, and it stands to benefit anyone who regularly needs to describe mathematical ideas in their work. Put simply, it's to resist the temptation to open a topic by describing a general result or definition, and instead let examples precede generality. More than that, it's about finding the type of example which guides the audience to rediscover the general results for themselves. We'll look, aptly enough, at examples of what I mean by this, why it's deceptively difficult to follow, and why this ordering matters.

  • Viral B. Shah - Growing a compiler - Getting to ML from the general-purpose Julia compiler

    45 Mins
    Keynote
    Intermediate

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea that ML models are fundamentally differentiable algorithms, often called differentiable programming, has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
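
    The projects above live in Julia, Swift and Python; purely as an illustrative sketch (not the speaker's code), the core idea they share, differentiable programming, i.e. taking gradients of ordinary programs, can be shown in Python with JAX:

```python
# Illustrative sketch only: the talk is about Julia/Flux, but the idea of
# differentiable programming -- first-class gradients of ordinary code --
# can be demonstrated in Python with JAX.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # An ordinary function: a tiny linear model with a squared-error loss.
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

grad_loss = jax.grad(loss)          # gradient of the whole program w.r.t. w

w = jnp.array([0.5, -0.3])
x = jnp.array([[1.0, 2.0], [3.0, 4.0]])
y = jnp.array([1.0, 2.0])

w = w - 0.1 * grad_loss(w, x, y)    # one hand-rolled gradient-descent step
```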

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, survey the kinds of libraries and applications the Julia community is building, highlight the contributions from India (there are many!), and outline our plans going forward.

  • Nicolas Dupuis - Using Deep-Learning to Accurately Diagnose Your Broadband Connection

    45 Mins
    Case Study
    Intermediate

    Within Nokia Software Digital Experience, we build products that increase customer satisfaction and reduce churn through proactive identification of user problems, and that enable service providers to resolve those problems faster. ML and DL techniques now contribute a great deal to these successes. However, there is usually a long journey from building a first model to delivering a field-proven product. Besides highlighting how machine and deep learning are used today to improve the broadband connection, this talk will reveal some of the challenges encountered and the best practices involved in reaching the expected quality level.

  • Jared Lander - Making Sense of AI, ML and Data Science

    45 Mins
    Talk
    Intermediate

    When I was in grad school it was called statistics. A few years later I told people I did machine learning, and after seeing the confused looks on their faces I changed that to data science, which excited them. More years passed, and without changing anything I do, I now practice AI, which seems scary to some people and somehow involves ML. During this talk we will demystify buzzwords, technical terms and overarching ideas. We'll touch upon key concepts and see a little bit of code in action to get a sense of what is happening in ML, AI or whatever else we want to call the field.

  • Dr. Ananth Sankar - Sequence to sequence learning with encoder-decoder neural network models

    Dr. Ananth Sankar
    Principal Researcher
    LinkedIn
    45 Mins
    Talk
    Beginner

    In recent years, there has been a lot of research in the area of sequence to sequence learning with neural network models. These models are widely used for applications such as language modeling, translation, part-of-speech tagging, and automatic speech recognition. In this talk, we will give an overview of sequence to sequence learning, starting with a description of recurrent neural networks (RNNs) for language modeling. We will then explain some of the drawbacks of RNNs, such as their inability to handle input and output sequences of different lengths, and describe how encoder-decoder networks and attention mechanisms solve these problems. We will close with some real-world examples, including how encoder-decoder networks are used at LinkedIn.
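
    As a minimal sketch of the encoder-decoder idea described above (illustrative only, without attention, and not the speaker's code): the encoder compresses a variable-length input into a context vector, which conditions a decoder that can emit an output sequence of a different length.

```python
# Minimal encoder-decoder sketch in PyTorch (no attention), for illustration.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(in_vocab, hidden)
        self.tgt_emb = nn.Embedding(out_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_vocab)

    def forward(self, src, tgt):
        # Encoder squeezes the whole input into a single context vector.
        _, context = self.encoder(self.src_emb(src))       # (1, batch, hidden)
        # Decoder is initialised with that context and unrolls the output.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), context)
        return self.out(dec_out)                           # (batch, T_out, out_vocab)

model = Seq2Seq(in_vocab=1000, out_vocab=800)
src = torch.randint(0, 1000, (4, 12))   # input length 12
tgt = torch.randint(0, 800, (4, 7))     # output length 7: lengths can differ
logits = model(src, tgt)
```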

  • Jared Lander
    Chief Data Scientist
    Lander Analytics
    480 Mins
    Workshop
    Beginner

    Modern statistics has become almost synonymous with machine learning, a collection of techniques that utilize today's incredible computing power. This two-part course focuses on the available methods for implementing machine learning algorithms in R, and will examine some of the underlying theory behind the curtain. We start with the foundation of it all, the linear model. We look at how to assess model quality with traditional measures and cross-validation, and visualize models with coefficient plots. Next we turn to penalized regression with the Elastic Net. After that we move on to boosted decision trees using xgboost. Along the way we learn modern techniques for preprocessing data.
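
    The workshop itself is taught in R; purely as an illustrative sketch of the same progression (a cross-validated elastic net followed by boosted trees), and assuming a plain numeric feature matrix, a rough Python analogue might look like this:

```python
# Rough Python analogue of the workshop's R workflow, for illustration only.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Penalized regression: elastic net with the penalty chosen by cross-validation.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
print("elastic net in-sample R^2:", enet.score(X, y))

# Boosted decision trees with xgboost, assessed by cross-validation.
xgb = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
print("xgboost CV R^2:", cross_val_score(xgb, X, y, cv=5).mean())
```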

  • Dr. Vikram Vij

    45 Mins
    Talk
    Advanced

    Dr. Vikram Vij, Senior Vice President and Head of the Voice Intelligence Team at Samsung Research India – Bangalore (SRIB), will share the journey Samsung has undertaken in developing its voice assistant Bixby, and particularly the Automatic Speech Recognition (ASR) system that powers it. ASR is one of the complex engines that power modern virtual assistants. Several independent components, such as pre-processors (Acoustic Echo Cancellation, Noise Suppression, Neural Beamforming and so on), wake-word detectors, end-point detectors, hybrid decoders and inverse text normalizers, work together to make a complete ASR system. We are in an exciting period, with tremendous advancements made in recent times. The development of End-to-End (E2E) ASR systems is one such advancement: it has boosted recognition accuracy significantly, and it has the potential to make speech recognition ubiquitous by fitting completely on-device, thereby bringing down latency and cost and addressing users' privacy concerns. Samsung, the largest device maker on the planet, sees huge value in bringing Bixby to a variety of existing devices and to new devices such as social robots, which raises many technical challenges, particularly in making the ASR very robust. In this talk, Dr. Vikram will present the cutting-edge technologies his team is developing: Far-Field Speech Recognition, E2E ASR, Whisper Detection, Contextual End-Point Detection (EPD), On-device ASR and so on. He will also elaborate on the research activities his team is pursuing.

  • Dr. Satnam Singh - AI for CyberSecurity

    45 Mins
    Talk
    Advanced

    In the last few years, cybercrooks have sped up their plans for making quick money through ransomware attacks. Enterprises of all kinds, including banks, government offices, police stations and businesses big and small, have witnessed the WannaCry, Petya and NotPetya ransomware attacks. The question for us is what we can do to defend against cyber threats. The cybersecurity industry is pitching heavily to leverage AI to combat cyber threats, and almost every cybersecurity vendor claims to have AI in its product. This makes it difficult for end-user enterprises to choose a product, as they need to evaluate the AI capabilities of multiple vendors. In this talk, I will cut through the hype and discuss the reality of what AI can do for cybersecurity. I will share use cases, data pipelines, architectures and algorithms that are proven for information security, along with the challenges in deploying them in the wild. The audience will learn how to combine AI with domain knowledge to build an enterprise AI solution.

  • Dr. Dakshinamurthy V Kolluru - Understanding Text: An exciting journey from Probabilistic Models to Neural Networks

    45 Mins
    Talk
    Intermediate

    We will trace the journey of NLP over the past 50-odd years, covering chronologically Hidden Markov Models, Elman networks, Conditional Random Fields, LSTMs, Word2Vec, encoder-decoder models, attention models, transfer learning in text, and finally transformer architectures. Our emphasis is going to be on how the models became powerful and simple to implement at the same time. To demonstrate this, we take a few case studies solved at INSOFE with the primary goal of retaining accuracy while simplifying engineering. Traditional methods will be compared and contrasted against modern models, showing how the latest models are actually becoming easier for businesses to implement. We also explain how this enhanced comfort with text data is paving the way for state-of-the-art inclusive architectures.
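
    To illustrate the "easier to implement" point (an aside, not material from the talk): a pretrained transformer can now be applied to a text-classification task in a couple of lines, for example with the Hugging Face transformers library.

```python
# Illustrative aside: applying a pretrained transformer with almost no code.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("The model was surprisingly easy to put into production."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```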

  • Dr. Rohit M. Lotlikar - Overcoming data limitations in real-world data science initiatives

    45 Mins
    Talk
    Executive

    “Is this the only data you have?” An expression of surprise not uncommonly encountered when evaluating a new opportunity to apply data science. Suitability of available data is a key factor in the abandonment of many otherwise well-considered data science initiatives.

    "Could the folks who were responsible for the design of the business process and the supporting IT applications not been more forward thinking and captured the more of the relevant data? To make it even worse, for the data that is being captured, the manual entries are not even consistent between the operators."

    Well, don't throw up your hands just yet. If you are a relatively newly minted data scientist, you are probably used to data being served to you on a platter! (Kaggle, UCI, ImageNet... add your favourite platter to the list)

    Generally, a few types of challenges are present:

    • At one extreme: they are building a new app and want to incorporate a recommendation engine, but the app is not released yet! There is no data. Zero, nada, zilch.
    • At the other extreme: they want us to build an up-sell engine. They have a massive database with a huge number of tables, and if I just look for revenue-related fields, I see 10 different customer revenue fields! Which is the right one to use?
    • The client wants me to build a promotion-targeting engine, but they keep changing their offers every month! By the time I have enough data for a promotion, they are ready to kill that promotion and move on to some other promotion.
    • They want to build a decision-support engine, but the available attributes capture only 20-30% of what goes into making the decision. How is this going to be of any help?

    Sound familiar? You are not alone. Using case studies from his own experience, the speaker will guide the audience on how to make the best of the situation and deliver a value-adding data science solution, or how to decide whether it is more prudent not to pursue it after all.

  • Anuj Gupta - Continuous Learning Systems: Building ML systems that keep learning from their mistakes

    Anuj Gupta
    Scientist
    Intuit
    45 Mins
    Talk
    Beginner

    Wouldn't it be great to have ML models that can update their “learning” as and when they make a mistake and a correction is provided in real time? In this talk we look at a concrete business use case that warrants such a system. We will take a deep dive into the use case, how we went about building a continuously learning system for text classification, the approaches we took, and the results we got.

    For most machine learning systems, the “train once, just predict thereafter” paradigm works well. However, there are scenarios where this paradigm does not suffice and the model needs to be updated frequently. Two of the most common cases are:

    1. When the distribution is non-stationary, i.e. the distribution of the data changes. This implies that, over time, the test data will have a very different distribution from the training data.
    2. The model needs to learn from its mistakes.

    While (1) is often addressed by retraining the model, (2) is often addressed using batch updates. Batch updating requires collecting a sizeable number of feedback points. What if you have far fewer feedback points? You need a model that can learn continuously, as and when the model makes a mistake and feedback is provided. To the best of our knowledge there is very limited literature on this.
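
    As a sketch of the general mechanism only (not the authors' system), an online learner that supports per-example updates shows what "learn from a single piece of feedback" looks like in code; the labels and texts below are made up.

```python
# Sketch of per-example (online) updates for text classification; illustrative only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

classes = ["billing", "tech_support"]            # hypothetical labels
vec = HashingVectorizer(n_features=2**18)        # stateless, so no refitting needed
clf = SGDClassifier()

# Initial fit on whatever labelled data already exists.
X0 = vec.transform(["refund my invoice", "app crashes on login"])
clf.partial_fit(X0, ["billing", "tech_support"], classes=classes)

def on_feedback(text, correct_label):
    """Called when a user corrects a prediction: learn from this single mistake."""
    clf.partial_fit(vec.transform([text]), [correct_label])

on_feedback("charged twice this month", "billing")
```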

  • Dr. Vikas Agrawal - Non-Stationary Time Series: Finding Relationships Between Changing Processes for Enterprise Prescriptive Systems

    45 Mins
    Talk
    Intermediate

    It is too tedious to keep asking questions, seeking explanations or setting thresholds for trends or anomalies. Why not find problems before they happen, find explanations for the glitches and suggest the shortest paths to fixing them? Businesses are always changing along with their competitive environment and processes, and no static model can handle that. Using dynamic models that find time-delayed interactions between multiple time series, we need to make proactive forecasts of anomalous trends of risks and opportunities in operations, sales, revenue and personnel, based on multiple factors influencing each other over time. We need to know how to set what is “normal” and determine when the business processes from six months ago no longer apply, or only apply to 35% of the cases today, while explaining the causes of risk and sources of opportunity, their relative directions and magnitude, in the context of decision-making and transactional applications, using state-of-the-art techniques.

    Real-world processes and businesses keep changing, with one moving part changing another over time. Can we capture these changing relationships? Can we use multiple variables to find risks to the key ones we care about? We will take a fun journey culminating in the most recent developments in the field. What methods work well and which break? What can we use in practice?

    For instance, we can show a CEO that they would miss their revenue target by over 6% for the quarter, and tell them why, i.e. in what ways their business has changed over the last year. Then we provide prioritized lists of the quickest, cheapest and least risky paths to help turn the tide, with estimates of relative costs and expected probability of success.
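
    The speaker's own methods are not spelled out in this abstract; as a purely illustrative sketch, one classical way to probe a time-delayed relationship between two series is a Granger-causality test, shown here on synthetic data with statsmodels.

```python
# Illustrative sketch only: testing whether one series helps predict another
# at a lag, via Granger causality on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
driver = rng.normal(size=300)
# `target` responds to `driver` with a 2-step delay plus noise.
target = 0.8 * np.roll(driver, 2) + rng.normal(scale=0.3, size=300)

data = pd.DataFrame({"target": target, "driver": driver})
# Tests whether lags of the second column help predict the first column.
grangercausalitytests(data[["target", "driver"]], maxlag=4)
```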

  • Juan Manuel Contreras - Beyond Individual Contribution: How to Lead Data Science Teams

    Juan Manuel Contreras
    Head of Data Science
    Even
    45 Mins
    Talk
    Advanced

    Despite the increasing number of data scientists who are being asked to take on managerial and leadership roles as they grow in their careers, there are still few resources on how to manage data scientists and lead data science teams. There is also scant practical advice on how to serve as head of a data science practice: how to set a vision and craft a strategy for an organization to use data science.

    In this talk, I will describe my experience as a data science leader both at a political party (the Democratic Party of the United States of America) and at a fintech startup (Even.com), share lessons learned from these experiences and conversations with other data science leaders, and offer a framework for how new data science leaders can better transition to both managing data scientists and heading a data science practice.

  • Subhasish Misra - Causal data science: Answering the crucial ‘why’ in your analysis.

    Subhasish Misra
    Staff Data Scientist
    Walmart Labs
    45 Mins
    Talk
    Intermediate

    Causal questions are ubiquitous in data science. For example, questions such as whether changing a feature on a website led to more traffic, or whether digital ad exposure led to incremental purchases, are deeply rooted in causality.

    Randomized tests are considered the gold standard when it comes to getting at causal effects. However, experiments are in many cases unfeasible or unethical, and one then has to rely on observational (non-experimental) data to derive causal insights. The crucial difference between randomized experiments and observational data is that in the former, test subjects (e.g. customers) are randomly assigned a treatment (e.g. digital advertisement exposure). This helps curb the possibility that user response (e.g. clicking on a link in the ad and purchasing the product) differs across the treated and non-treated groups owing to pre-existing differences in user characteristics (e.g. demographics, geo-location, etc.). In essence, we can then attribute divergences observed post-treatment in key outcomes (e.g. purchase rate) to the causal impact of the treatment.

    This treatment-assignment mechanism that makes causal attribution possible via randomization is, however, absent when using observational data. Thankfully, there are scientific (statistical and beyond) techniques available to circumvent this shortcoming and get to causal reads.

    The aim of this talk will be to offer a practical overview of the above aspects of causal inference, a discipline that lies at the fascinating confluence of statistics, philosophy, computer science, psychology, economics and medicine, among others. Topics include:

    • The fundamental tenets of causality and measuring causal effects.
    • Challenges involved in measuring causal effects in real-world situations.
    • Distinguishing between randomized and observational approaches to measuring the same.
    • An introduction to measuring causal effects from observational data using matching and its extension, propensity-score-based matching (see the sketch after this list), with a focus on a) the intuition and statistics behind it, b) tips from the trenches, based on the speaker's experience with these techniques, and c) practical limitations of such approaches.
    • A walkthrough of how matching was applied to get causal insights about the effectiveness of a digital product for a major retailer.
    • Finally, why having a nuanced understanding of causality is all the more important in the big-data era we are in.
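
    As a rough illustration of the matching idea referenced above (an assumed sketch with hypothetical column names, not the speaker's code), propensity scores can be estimated with a logistic regression and each treated unit matched to its nearest untreated neighbour:

```python
# Rough sketch of propensity-score matching, assuming a DataFrame with a binary
# `treated` flag, pre-treatment covariates, and an `outcome` column (all hypothetical).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

covariates = ["age", "income", "prior_purchases"]   # hypothetical column names

def matched_att(df: pd.DataFrame) -> float:
    # 1. Propensity score: P(treated | covariates), via logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(
        df[covariates], df["treated"]
    ).predict_proba(df[covariates])[:, 1]

    treated = (df["treated"] == 1).to_numpy()
    y = df["outcome"].to_numpy()

    # 2. Match each treated unit to the untreated unit with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

    # 3. Average outcome difference over matched pairs (effect on the treated).
    return float((y[treated] - y[~treated][idx.ravel()]).mean())
```
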
  • Favio Vázquez - Complete Data Science Workflows with Open Source Tools

    90 Mins
    Tutorial
    Beginner

    Cleaning, preparing, transforming, exploring and modeling data is what we hear about all the time in data science, and these steps may be the most important ones. But that's not all there is to data science. In this talk you will learn how the combination of Apache Spark, Optimus, the Python ecosystem and data operations can form a whole framework for data science that will allow you and your company to go further, beyond common sense and intuition, to solve complex business problems.
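
    As a generic illustration of the cleaning and transforming steps mentioned above (a sketch with made-up column names, using plain PySpark rather than Optimus's own helpers):

```python
# Generic cleaning/transforming sketch with plain PySpark; column names are made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("workflow-sketch").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
clean = (
    df.dropna(subset=["customer_id"])                        # drop broken rows
      .withColumn("amount", F.col("amount").cast("double"))  # fix types
      .withColumn("country", F.trim(F.lower(F.col("country"))))
)
clean.groupBy("country").agg(F.sum("amount").alias("revenue")).show()
```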

  • Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became the prime focus of studies to understand and analyze root causes. Cancer also showed that the origin of a disease, its detection, prognosis, treatment and cure are not such an uncomplicated process. Treatment of diseases has to be done on a case-by-case basis (one size does not fit all).

    The advent of next-generation sequencing, high-throughput analysis, enhanced computing power and new aspirations for neural networks lets us address this conundrum of complicated genetic elements (the structure and function of the various genes in our systems). This requires extraction of the genomic material, its (automated) sequencing, and analysis to map the strings of As, Ts, Gs and Cs, which yields genomic datasets. These datasets are too large for traditional applied statistical techniques, and the important signals are often incredibly small amid blaring technical noise, which requires far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and is the way forward for detecting disease and predisposition to it, empowering medical authorities to make fair and situation-aware decisions about patient treatment strategies. This kind of genomic profiling, prediction and mode of disease management is useful for tailoring FDA-approved treatment strategies based on these molecular disease drivers and the patient’s molecular makeup.

    The present scenario encourages designing, developing and testing medicines based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (such as SNPs, single-nucleotide polymorphisms) that bear on crucial cellular processes like metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as for cancer, from various body fluids. They have immense potential to revolutionize the healthcare ecosystem. Clinical data collection is currently not streamlined and is done in a haphazard manner; making that data uniformly fetchable and combinable with genetic information would increase its value and interpretability and sharpen decisions about patient treatment modalities and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies. Integrating it with other health data, together with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies, would revitalize humans' disease-fighting capability. A last, but still emerging, area of application is direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms. Medical research and its applications, such as gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods and applying them to enhanced genomic datasets.

  • Badri Narayanan Gopalakrishnan / Shalini Sinha / Usha Rengaraju - Lifting Up: Deep Learning for implementing anti-hunger and anti-poverty programs

    45 Mins
    Case Study
    Intermediate

    Ending poverty and achieving zero hunger are the top two goals the United Nations aims to achieve by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial intelligence and machine learning have transformed the way we live, work and interact, but the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to the benefit of those who actually need it the most: people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs. Advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe from night-time imagery, where the level of light correlates with economic activity. Once areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas, and the insights from the data can help plan an effective intervention program. Machine learning can further be used to identify potential donors, investors and contributors across the globe based on their skill set, interests, history, ethnicity, purchasing power and their native connection to the location of the proposed program. Adequate resource allocation and efficient program design will still not guarantee success unless project execution is supervised at the grass-roots level. Data analytics can be used to monitor project progress and effectiveness and to detect anomalies in case of any fraud or mismanagement of funds.

  • Johnu George / Ramdoot Kumar P - A Scalable Hyperparameter Optimization framework for ML workloads

    20 Mins
    Demonstration
    Intermediate

    In machine learning, hyperparameters are parameters that govern the training process itself. For example, the learning rate, the number of hidden layers and the number of nodes per layer are typical hyperparameters for neural networks. Hyperparameter tuning is the process of searching for the best hyperparameters to initialize the learning algorithm, thus improving training performance.

    We present Katib, a scalable and general hyperparameter tuning framework based on Kubernetes that is ML-framework agnostic (TensorFlow, PyTorch, MXNet, XGBoost, etc.). You will learn about Katib in Kubeflow, an open source ML toolkit for Kubernetes, as we demonstrate the advantages of hyperparameter optimization by running a sample classification problem. In addition, as we dive into the implementation details, you will learn how to contribute as we expand this platform to include AutoML tools.
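
    Katib itself defines experiments as Kubernetes resources and runs trials in containers; setting the infrastructure aside, the underlying idea, searching a hyperparameter space against an objective on a sample classification problem, can be sketched as a plain random search (illustrative only, not Katib's API):

```python
# Conceptual sketch of hyperparameter search on a sample classification problem;
# Katib runs this kind of search as distributed trials on Kubernetes.
from scipy.stats import loguniform, randint
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    GradientBoostingClassifier(),
    param_distributions={
        "learning_rate": loguniform(1e-3, 1e0),   # typical hyperparameters that
        "n_estimators": randint(50, 300),         # govern the training process
        "max_depth": randint(2, 6),
    },
    n_iter=20,
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```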

  • Venkata Pingali - Accelerating ML using Production Feature Engineering Platform

    Venkata Pingali
    Co-Founder & CEO
    Scribble Data
    45 Mins
    Talk
    Intermediate

    Anecdotally, only 2% of the models developed are productionized, i.e., used day to day to improve business outcomes. Part of the reason is the high cost and complexity of productionizing models, estimated at anywhere from 40 to 80% of the overall work.

    In this talk, we will share Scribble Data’s insights into the productionization of ML, and how to reduce its cost and complexity in organizations. It is based on the last two years of work at Scribble developing and deploying a production ML feature engineering platform, and on a study of platforms from major organizations such as Uber. This talk expands on a previous talk given in January.

    First, we discuss the complexity of production ML systems and where time and effort go. Second, we give an overview of feature engineering, an expensive ML task, and the associated challenges. Third, we suggest an architecture for a production feature engineering platform. Last, we discuss how one could go about building one for your organization.
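
    The platform's internals are not described in this abstract; as a minimal, assumed illustration of the kind of reusable feature definition such a platform manages (hypothetical column names), a transform that is reused identically for training and later scoring might look like this:

```python
# Minimal sketch of a reusable feature-engineering step of the kind a feature
# platform would version and serve; column names are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

features = ColumnTransformer([
    ("numeric", StandardScaler(), ["order_value", "days_since_last_order"]),
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["channel", "region"]),
])

# The same feature definition is reused for training and, later, for scoring,
# which is one of the consistency problems a feature platform solves.
model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
```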

  • Paolo Tamagnini / Kathrin Melcher - Guided Analytics - Building Applications for Automated Machine Learning

    90 Mins
    Tutorial
    Beginner

    In recent years, a wealth of tools has appeared that automate the machine learning cycle inside a black box. We take a different stance. Automation should not result in black boxes, hiding the interesting pieces from everyone. Modern data science should allow automation and interaction to be combined flexibly into a more transparent solution.

    In some specific cases, if the analysis scenario is well defined, then full automation might make sense. However, more often than not, these scenarios are not that well defined and not that easy to control. In these cases, a certain amount of interaction with the user is highly desirable.

    By mixing and matching interaction with automation, we can use Guided Analytics to develop predictive models on the fly. More interestingly, by leveraging automated machine learning and interactive dashboard components, custom Guided Analytics Applications, tailored to your business needs, can be created in a few minutes.

    We'll build an application for automated machine learning using KNIME Software. It will have an input user interface to control the settings for data preparation, model training (e.g. using deep learning, random forest, etc.), hyperparameter optimization, and feature engineering. We'll also create an interactive dashboard to visualize the results with model interpretability techniques. At the conclusion of the workshop, the application will be deployed and run from a web browser.