  • Grant Sanderson - Concrete before Abstract

    Grant Sanderson
    Creator
    3blue1brown
    45 Mins
    Keynote
    Intermediate

    This talk outlines a principle of technical communication which seems simple at first but is devilishly difficult to abide by. It's a principle I try to keep in mind when creating videos aimed at making math and related fields more accessible, and it stands to benefit anyone who regularly needs to describe mathematical ideas in their work. Put simply, it's to resist the temptation to open a topic by describing a general result or definition, and instead let examples precede generality. More than that, it's about finding the type of example which guides the audience to rediscover the general results for themselves. We'll look, aptly enough, at examples of what I mean by this, why it's deceptively difficult to follow, and why this ordering matters.

  • Sheamus McGovern / Naresh Jain - Welcome Address

    20 Mins
    Keynote
    Beginner

    This talk will help you understand the vision behind the ODSC Conference and how it has grown over the years.

  • Naresh Jain - Ethical AI - Fishbowl

    Naresh Jain
    Founder
    XNSIO
    45 Mins
    Keynote
    Beginner

    There have been many concerns about the black-box nature of AI, and people have been asking for sensible AI guidelines with the weight of law behind them. In April 2019, the EU released its Ethics Guidelines for Trustworthy AI. Before that, during the Obama administration, the National Science and Technology Council came up with its own set of broad guidelines, "Preparing for the Future of Artificial Intelligence."

    Most of these cover an impressive amount of ground in several major categories:

    • Transparency: Any time an AI system makes decisions on a user's behalf, that person should be aware of it. The reasoning behind decisions should be easily explainable.
    • Safety: AI systems should be designed to withstand attempted hijacking and other attacks performed by hackers.
    • Fairness: Decisions made by AI systems should not be influenced by gender, race or other personal identifiers. They should be as impartial as possible and not reflect human biases.
    • Environmental stewardship: Not all the stakeholders in AI development are human. The development of these platforms and the implications of their decision-making and sustainability should take into account the needs of the larger environment and other forms of life.
    • And so on...

    At this conference, we would like to bring our experts together to hear their views/concerns on this topic.

  • 45 Mins
    Keynote
    Intermediate

    Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct "static graph" and "eager execution" interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on.

    Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching and support for new hardware such as TPUs.
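
    The talk itself centers on Julia and Flux, but the "first-class gradients over ordinary code" idea these projects share is easy to sketch. Below is a minimal, illustrative example in Python using JAX (our choice for illustration, not something the talk uses): the gradient of a plain function is obtained directly, with no separate graph-building step.

        # Minimal sketch of differentiable programming (illustrative only; the
        # talk itself uses Julia/Flux). jax.grad differentiates ordinary code,
        # control flow and all, with no separate "static graph" step.
        import jax
        import jax.numpy as jnp

        def loss(w, x, y):
            # An ordinary Python function: a tiny linear model with squared error.
            pred = jnp.dot(x, w)
            return jnp.mean((pred - y) ** 2)

        grad_loss = jax.grad(loss)   # gradient with respect to the first argument, w

        w = jnp.zeros(3)
        x = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
        y = jnp.array([1.0, 2.0])

        for _ in range(100):         # plain gradient descent, no session or graph
            w = w - 0.01 * grad_loss(w, x, y)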

    This talk will demonstrate how Julia is increasingly becoming a natural language for machine learning, the kind of libraries and applications the Julia community is building, the contributions from India (there are many!), and our plans going forward.

  • Jared Lander - Making Sense of AI, ML and Data Science

    45 Mins
    Talk
    Intermediate

    When I was in grad school it was called statistics. A few years later I told people I did machine learning, and after seeing the confused looks on their faces I changed that to data science, which excited them. More years passed and, without changing anything I do, I now practice AI, which seems scary to some people and somehow involves ML. During this talk we will demystify buzzwords, technical terms and overarching ideas. We'll touch upon key concepts and see a little bit of code in action to get a sense of what is happening in ML, AI or whatever else we want to call the field.
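
    The abstract promises "a little bit of code in action"; as a hedged guess at the flavour (synthetic data, scikit-learn, purely illustrative), the point is that the same fitted model answers to all three names:

        # Illustrative only: the same fitted linear model can be called
        # statistics, machine learning, or AI without changing a line of code.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))          # three synthetic features
        y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

        model = LinearRegression().fit(X, y)   # "statistics" circa grad school
        print(model.coef_)                     # ...now rebranded as AI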

  • 45 Mins
    Talk
    Advanced

    In the last few years, cybercrooks have accelerated their plans to make quick money through ransomware attacks. Enterprises of every kind, including banks, government offices, police stations, and businesses big and small, have witnessed the WannaCry, Petya and NotPetya ransomware attacks. The question for us is what we can do to defend against cyber threats. The cybersecurity industry is pitching heavily to leverage AI to combat cyber threats, and almost every cybersecurity vendor claims to have AI in its product. This makes it difficult for end-user enterprises to choose a product, as they need to evaluate the AI capabilities of multiple vendors. In this talk, I will cut through the hype and discuss the reality of what AI can do for cybersecurity. I will share use cases, data pipelines, architectures and algorithms that are proven for information security, along with the challenges of deploying them in the wild. The audience will learn how to combine AI with domain knowledge to build an enterprise AI solution.
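
    The talk's own pipeline and use cases are not spelled out in the abstract; as one hedged example of the kind of proven infosec algorithm it alludes to, unsupervised anomaly detection over connection features might look like this sketch (feature names and numbers invented):

        # Sketch: unsupervised anomaly detection on network-flow features, one
        # algorithm family commonly used in security analytics. The features and
        # numbers are invented for illustration; a real pipeline needs domain
        # knowledge to choose and engineer them.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(42)
        # Columns (hypothetical): bytes_out, connection_count, distinct_ports
        normal = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(1000, 3))
        model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

        suspect = np.array([[50000, 300, 60]])   # e.g. a data-exfiltration burst
        print(model.predict(suspect))            # -1 flags an anomaly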

  • Dr. Dakshinamurthy V Kolluru - Understanding Text: An exciting journey from Probabilistic Models to Neural Networks

    45 Mins
    Talk
    Intermediate

    We will trace the journey of NLP over the past 50-odd years, covering, in chronological order, Hidden Markov Models, Elman networks, Conditional Random Fields, LSTMs, Word2Vec, encoder-decoder models, attention models, transfer learning in text and, finally, transformer architectures. Our emphasis will be on how the models became more powerful and simpler to implement at the same time. To demonstrate this, we take a few case studies solved at INSOFE with the primary goal of retaining accuracy while simplifying engineering. Traditional methods will be compared and contrasted against modern models, showing how the latest models are actually becoming easier for businesses to implement. We also explain how this enhanced comfort with text data is paving the way for state-of-the-art inclusive architectures.
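
    To ground the last stop on that journey, here is a minimal sketch (not from the talk) of scaled dot-product attention, the core operation inside the transformer architectures mentioned above:

        # Minimal scaled dot-product attention in NumPy: the heart of the
        # transformer. Shapes: n queries/keys, model dimension d.
        import numpy as np

        def attention(Q, K, V):
            scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarity
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
            return weights @ V                              # weighted mix of values

        n, d = 4, 8
        rng = np.random.default_rng(0)
        Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
        print(attention(Q, K, V).shape)                     # (4, 8)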

  • Dr. Shailesh Kumar - Data Science and the art of "Formulation"

    45 Mins
    Talk
    Intermediate

    Today most Data Scientists focus on the art, science, and engineering of "Modelling" - how to build a model. But as AutoML is taking over, this skill is fast becoming obsolete.

    In this talk, through a variety of examples, we will highlight an even more fundamental skill in Data Science: The Art of "Formulating" a specific Business problem, a Holistic Solution, or a Product feature as a Data Science problem.

  • Nicolas Dupuis - Using Deep-Learning to Accurately Diagnose Your Broadband Connection

    45 Mins
    Case Study
    Intermediate

    Within Nokia Software Digital Experience, we build products that increase customer satisfaction and reduce churn through proactive identification of user problems, enabling service providers to resolve those problems faster. ML and DL techniques now contribute substantially to these successes. However, there is usually a long journey from building a first model to delivering a field-proven product. Besides highlighting how machine and deep learning are used today to improve broadband connections, this talk will reveal some of the challenges encountered and the best practices involved in reaching the expected quality level.
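
    The product internals stay with Nokia; as a hedged sketch of the general shape of such a diagnostic model, a supervised classifier over line-telemetry features (all feature names and the fault rule invented here) might look like:

        # Hypothetical sketch of a line-diagnosis classifier; feature names and
        # the fault rule are invented, not Nokia's. Real systems face exactly the
        # journey the talk covers: label noise, drift, and notebook-to-product gaps.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # Columns (hypothetical): SNR margin (dB), attenuation (dB), errored seconds
        X = rng.normal(loc=[10, 30, 5], scale=[3, 8, 4], size=(2000, 3))
        y = (X[:, 0] < 8).astype(int)      # toy rule: low SNR margin => "faulty"

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        print(clf.score(X_te, y_te))       # held-out accuracy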

  • Dr. Sarabjot Singh Anand - The Art and Science of building Recommender Systems

    Dr. Sarabjot Singh Anand
    Co-Founder & Chief Data Scientist
    Tatras Data
    480 Mins
    Workshop
    Beginner

    In this workshop, we will understand the algorithms behind recommender systems in different domains and gain an appreciation for how the domain impacts the approach used. Attendees will create recommenders using user-item matrices, news and graphs, gaining an understanding of collaborative and content-based filtering, text representation, matrix factorization, and random walks.
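
    As a taste of one workshop topic, here is a minimal matrix-factorization sketch (not the workshop's code): factor the user-item rating matrix and read predicted scores for unrated items off the low-rank reconstruction.

        # Minimal matrix-factorization recommender: factor a user-item rating
        # matrix into low-rank embeddings, then read predictions off their
        # product. Illustrative only; the workshop covers this and much more.
        import numpy as np

        R = np.array([[5, 4, 0, 1],        # 0 = unrated
                      [4, 5, 1, 0],
                      [1, 0, 5, 4],
                      [0, 1, 4, 5]], dtype=float)

        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        k = 2                              # keep the two strongest latent factors
        R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

        print(np.round(R_hat, 1))          # filled-in scores for the 0 entries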

  • Dr. Ajay Chander / Dr. Ramya Srinivasan - Detecting Bias in AI: A Systems View & A Technique for Datasets

    45 Mins
    Talk
    Intermediate

    Modern machine learning (ML) offers a new way of creating software to solve problems, focused on learning structures, learning algorithms, and data. In all steps of this process, from the specification of the problem, to the datasets chosen as relevant to the solution, to the choice of learning structures and algorithms, a variety of biases can creep in and compound each other. In this talk, we present a systems view of detecting bias in AI/ML systems as analogous to the software testing problem. To start, a variety of expectations from an AI/ML system can be specified given its intended goals and deployment. Different kinds of bias can then be mapped to different failure modes, which can be tested for using a variety of techniques. We will also describe a new technique based on Topological Data Analysis to detect bias in source datasets. This technique utilizes a persistent homology based visualization and is lightweight: the human-in-the-loop does not need to select metrics or tune parameters, and can carry out this step before choosing a model. We'll describe experiments on the German credit dataset using this technique to demonstrate its effectiveness.
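
    The TDA-based dataset technique is the talk's own contribution and is not reproduced here; as a minimal sketch of the broader "bias as a testable failure mode" framing, one such test is a demographic-parity check on system decisions (group attribute, data and tolerance all invented):

        # Sketch of one "bias test" in the testing-style framing above: a
        # demographic parity check. The group attribute, data, and tolerance are
        # invented; the talk's TDA/persistent-homology technique is a separate,
        # richer method for datasets.
        import numpy as np

        rng = np.random.default_rng(0)
        group = rng.integers(0, 2, size=1000)    # protected attribute (0/1)
        # Synthetic decisions deliberately skewed against group 1:
        approved = rng.random(1000) < np.where(group == 0, 0.55, 0.45)

        rate0 = approved[group == 0].mean()
        rate1 = approved[group == 1].mean()
        gap = abs(rate0 - rate1)
        print(f"approval rates: {rate0:.2f} vs {rate1:.2f}")
        print("PASS" if gap < 0.05 else "FAIL", f"(gap = {gap:.2f})")  # expect FAIL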

  • 45 Mins
    Talk
    Advanced

    Dr. Vikram Vij, Senior Vice President and Head of the Voice Intelligence Team at Samsung Research India – Bangalore (SRIB), will share the journey Samsung has undertaken in developing its voice assistant Bixby, and in particular the Automatic Speech Recognition (ASR) system that powers it. ASR is one of the complex engines behind modern virtual assistants: several independent components, such as pre-processors (Acoustic Echo Cancellation, Noise Suppression, Neural Beamforming and so on), wake-word detectors, end-point detectors, hybrid decoders and inverse text normalizers, work together to make a complete ASR system.

    We are in an exciting period, with tremendous advancements made in recent times. The development of End-to-End (E2E) ASR systems is one such advancement: it has boosted recognition accuracy significantly, and it has the potential to make speech recognition ubiquitous by fitting completely on-device, bringing down latency and cost and addressing users' privacy concerns. Samsung, the largest device maker on the planet, sees huge value in bringing Bixby to a variety of existing and new devices, such as social robots, which poses many technical challenges, particularly in making the ASR very robust. In this talk, Dr. Vikram will present the cutting-edge technologies his team is developing, including Far-Field Speech Recognition, E2E ASR, Whisper Detection, Contextual End-Point Detection (EPD) and On-device ASR, and will elaborate on the research activities his team is relentlessly pursuing.
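
    Samsung's components are proprietary; purely to make one of the named pieces concrete, a naive energy-based end-point detector over audio frames could look like the sketch below (thresholds invented; the contextual, learned EPD the talk describes is far more robust):

        # Naive energy-based end-point detection: declare the utterance over
        # after a run of low-energy frames. Purely illustrative; production EPDs
        # are contextual and learned, not a fixed threshold like this.
        import numpy as np

        def detect_endpoint(audio, sr=16000, frame_ms=20,
                            threshold=0.02, trailing_frames=25):
            frame = int(sr * frame_ms / 1000)
            n = len(audio) // frame
            energy = np.array([np.sqrt(np.mean(audio[i*frame:(i+1)*frame]**2))
                               for i in range(n)])
            quiet = 0
            for i, e in enumerate(energy):
                quiet = quiet + 1 if e < threshold else 0
                if quiet >= trailing_frames:                # ~0.5 s of silence
                    return (i - trailing_frames + 1) * frame  # endpoint sample
            return None                                     # no endpoint found

        sr = 16000
        t = np.arange(sr * 2) / sr                          # 2 s of audio
        audio = np.where(t < 1.0, np.sin(2*np.pi*440*t),    # 1 s "speech"...
                         0.001*np.random.randn(len(t)))     # ...then near-silence
        print(detect_endpoint(audio, sr))                   # ~16000 (1 s mark)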

  • Dr. Ananth Sankar - Sequence to sequence learning with encoder-decoder neural network models

    Dr. Ananth Sankar
    Principal Researcher
    LinkedIn
    45 Mins
    Talk
    Beginner

    In recent years, there has been a lot of research in the area of sequence to sequence learning with neural network models. These models are widely used for applications such as language modeling, translation, part-of-speech tagging, and automatic speech recognition. In this talk, we will give an overview of sequence to sequence learning, starting with a description of recurrent neural networks (RNNs) for language modeling. We will then explain some of the drawbacks of RNNs, such as their inability to handle input and output sequences of different lengths, and describe how encoder-decoder networks and attention mechanisms solve these problems. We will close with some real-world examples, including how encoder-decoder networks are used at LinkedIn.
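
    As a minimal sketch of the encoder-decoder idea described above (illustrative, not LinkedIn's code), the encoder compresses a variable-length input into a state, and the decoder unrolls an output sequence of a different length from that state:

        # Minimal GRU encoder-decoder sketch. The encoder reads a source
        # sequence into a fixed state; the decoder unrolls a target sequence of
        # a *different* length from that state, which is exactly the length
        # mismatch a plain RNN cannot handle.
        import torch
        import torch.nn as nn

        class Seq2Seq(nn.Module):
            def __init__(self, src_vocab, tgt_vocab, hidden=64):
                super().__init__()
                self.src_emb = nn.Embedding(src_vocab, hidden)
                self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
                self.encoder = nn.GRU(hidden, hidden, batch_first=True)
                self.decoder = nn.GRU(hidden, hidden, batch_first=True)
                self.out = nn.Linear(hidden, tgt_vocab)

            def forward(self, src, tgt):
                _, state = self.encoder(self.src_emb(src))   # source summary
                dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
                return self.out(dec_out)                     # logits per step

        model = Seq2Seq(src_vocab=100, tgt_vocab=80)
        src = torch.randint(0, 100, (2, 7))                  # batch of 2, length 7
        tgt = torch.randint(0, 80, (2, 5))                   # target length 5 differs
        print(model(src, tgt).shape)                         # torch.Size([2, 5, 80])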

  • Jared Lander
    Chief Data Scientist
    Lander Analytics
    480 Mins
    Workshop
    Beginner

    Modern statistics has become almost synonymous with machine learning, a collection of techniques that utilize today's incredible computing power. This two-part course focuses on the available methods for implementing machine learning algorithms in R, and will examine some of the underlying theory behind the curtain. We start with the foundation of it all, the linear model. We look at how to assess model quality with traditional measures and cross-validation, and visualize models with coefficient plots. Next we turn to penalized regression with the Elastic Net. After that we turn to boosted decision trees utilizing xgboost. Along the way we learn modern techniques for preprocessing data.
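
    The workshop itself is in R (the linear model, glmnet-style Elastic Net, xgboost); as a rough Python analogue of the same progression, purely for orientation, and with scikit-learn's boosted trees standing in for xgboost:

        # Rough Python analogue of the workshop's R progression (the course
        # itself uses R): linear model -> cross-validated Elastic Net ->
        # boosted trees (sklearn's version standing in for xgboost).
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LinearRegression, ElasticNetCV
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=500, n_features=20,
                               noise=10, random_state=0)

        for name, model in [
            ("linear model", LinearRegression()),
            ("elastic net", ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5)),
            ("boosted trees", GradientBoostingRegressor(random_state=0)),
        ]:
            score = cross_val_score(model, X, y, cv=5).mean()  # R^2 via CV
            print(f"{name}: {score:.3f}")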

  • Dr. Rohit M. Lotlikar - Overcoming data limitations in real-world data science initiatives

    45 Mins
    Talk
    Executive

    “Is this the only data you have?” An expression of surprise not uncommonly encountered when evaluating a new opportunity to apply data science. Suitability of available data is a key factor in the abandonment of many otherwise well considered data science initiatives.

    "Could the folks who were responsible for the design of the business process and the supporting IT applications not been more forward thinking and captured the more of the relevant data? To make it even worse, for the data that is being captured, the manual entries are not even consistent between the operators."

    Well, don't throw up your hands just yet. If you are a relatively newly minted data scientist, you are probably used to data being served to you on a platter! (Kaggle, UCI, ImageNet... add your favourite platter to the list.)

    Generally, a few types of challenges present themselves:

    • At one extreme: they are building a new app and want to incorporate a recommendation engine, but the app is not even released yet! There is no data, zero, nada, zilch.
    • At the other extreme: they want an up-sell engine and have a massive database with a huge number of tables. If I just look for revenue-related fields, I see 10 different customer revenue fields! Which is the right one to use?
    • The client wants me to build a promotion-targeting engine, but they keep changing their offers every month! By the time I have enough data for a promotion, they are ready to kill that promotion and move on to some other promotion.
    • They want to build a decision-support engine, but the available attributes capture only 20-30% of what goes into making the decision. How is this going to be of any help?

    Sounds familiar? You are not alone. Using case studies from his own experience, the speaker will guide the audience on how to make the best of the situation and deliver a value-adding data science solution, or how to decide that it is more prudent not to pursue it after all.

  • Vivek Singhal / Shreyas Jagannath - Training Autonomous Driving Systems to Visualize the Road ahead for Decision Control

    90 Mins
    Workshop
    Intermediate

    We will train the audience to develop advanced image segmentation with FCN/DeepLab algorithms, which can help visualize driving scenarios accurately, allowing the autonomous driving system to take appropriate action given the obstacles in view.
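
    As a hedged sketch of the kind of model involved (inference only, not the workshop's training code), torchvision's pretrained DeepLabV3 produces a per-pixel class map for a frame:

        # Sketch: per-pixel segmentation with a pretrained DeepLabV3 from
        # torchvision (requires a recent torchvision with the weights API).
        # A random tensor stands in for a normalized camera frame; the workshop
        # goes further and trains such models on driving data.
        import torch
        from torchvision.models.segmentation import deeplabv3_resnet50

        model = deeplabv3_resnet50(weights="DEFAULT").eval()
        frame = torch.randn(1, 3, 520, 520)      # stand-in preprocessed frame

        with torch.no_grad():
            logits = model(frame)["out"]         # (1, num_classes, H, W)
        labels = logits.argmax(dim=1)            # per-pixel class decisions
        print(labels.shape)                      # torch.Size([1, 520, 520])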

  • 45 Mins
    Talk
    Intermediate

    Logistics companies, both old and new, have invested heavily in building an efficient frontline workforce to provide swift and convenient services to their users. Timely delivery is often a critical deciding factor for ever-impatient customers choosing service A over service B. Hence, the operations/logistics team is the key enabler here.

    The attrition rate in large frontline teams is high, close to 75 percent annually, yet most companies have aggressive growth targets, necessitating constant, high-volume recruitment. High-growth companies in this domain like Zomato and Swiggy, which grew by more than 50-60 percent by the end of 2018, recruited tens of thousands of delivery workers every month.

    At Vahan, we have developed an AI-driven virtual assistant that helps logistics companies scale and automate their hiring process by leveraging the widespread addiction to messaging applications like WhatsApp and FB Messenger.

    In this talk, I will cover in detail how we developed a complete data collection and natural language processing pipeline for Indian languages and built a chatbot over WhatsApp which is currently connecting companies like Dunzo, Zomato, Swiggy & Rapido Express with potential frontline workers, fulfilling the hiring requirements of this industry in a scalable and autonomous fashion.
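
    Vahan's production pipeline is not public; as a minimal sketch of just one ingredient of such a chatbot, intent classification over incoming messages (toy data and invented intents) might look like:

        # Toy sketch of one chatbot ingredient: classifying a message's intent.
        # Training examples and intent labels are invented; the real pipeline
        # handles Indian languages, WhatsApp I/O, and far larger data.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_texts = ["I want a delivery job", "what is the salary",
                       "how do I apply", "which city are jobs in",
                       "what documents do I need", "pay per month?"]
        train_intents = ["apply", "salary", "apply",
                         "location", "documents", "salary"]

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(train_texts, train_intents)
        print(clf.predict(["what salary will I get"]))   # likely -> "salary"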

  • 90 Mins
    Workshop
    Intermediate

    Machine learning and deep learning have been rapidly adopted in various spheres of medicine, such as drug discovery, disease diagnosis, genomics, medical imaging and bioinformatics, for translating biomedical data into improved human healthcare. Machine learning/deep learning based healthcare applications assist physicians in making faster, cheaper and more accurate diagnoses.

    We have successfully developed three deep learning based healthcare applications and are currently working on two more healthcare-related projects. In this workshop, we will discuss one healthcare application, titled "Deep Learning based Craniofacial Distance Measurement for Facial Reconstructive Surgery", which we developed using TensorFlow. Craniofacial distances play an important role in providing information related to facial structure. They include measurements of the head and face which are to be measured from images. They are used in facial reconstructive surgeries such as cephalometry, treatment planning of various malocclusions, craniofacial anomalies, facial contouring, facial rejuvenation and different forehead surgeries, in which reliable and accurate data are very important and cannot be compromised.

    Our discussion of this healthcare application will include the precise problem statement, the major steps involved in the solution (deep learning based face detection and facial landmarking, and craniofacial distance measurement), the dataset, experimental analysis, and the challenges faced and overcome to achieve this success. Subsequently, we will provide hands-on exposure to implementing this healthcare solution using TensorFlow. Finally, we will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
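
    The detection and landmarking models are the team's own; the final measurement stage, at least, reduces to distances between predicted landmarks, roughly as in this sketch (landmark positions and the pixel-to-mm calibration invented):

        # Sketch of the last pipeline stage only: turning predicted facial
        # landmarks into craniofacial distances. Coordinates and the mm-per-pixel
        # calibration are invented; detection/landmarking models are not shown.
        import numpy as np

        landmarks = {                      # hypothetical (x, y) pixel positions
            "left_eye_outer": (120, 200),
            "right_eye_outer": (280, 202),
            "nasion": (200, 195),
            "chin": (205, 420),
        }
        MM_PER_PIXEL = 0.35                # would come from camera calibration

        def distance_mm(a, b):
            return np.linalg.norm(np.subtract(landmarks[a], landmarks[b])) * MM_PER_PIXEL

        print(f"inter-ocular width: {distance_mm('left_eye_outer', 'right_eye_outer'):.1f} mm")
        print(f"face height:        {distance_mm('nasion', 'chin'):.1f} mm")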

  • Dr. C.S.Jyothirmayee / Usha Rengaraju / Vijayalakshmi Mahadevan - Deep learning powered Genomic Research

    90 Mins
    Workshop
    Advanced

    Disease happens when there is a slip in the finely orchestrated dance between physiology, environment and genes. Treatment with chemicals (natural, synthetic or a combination) solved some diseases, but others persisted and were propagated along the generations. The molecular basis of disease became the prime center of studies to understand and analyze root causes. Cancer, too, showed that the origin of a disease, its detection, prognosis, treatment and cure are not uncomplicated processes. Treatment of disease has to be done on a case-by-case basis (no one size fits all).

    With the advent of next-generation sequencing, high-throughput analysis, enhanced computing power and new aspirations for neural networks, we can now address this conundrum of complicated genetic elements (the structure and function of the various genes in our systems). This requires genomic material extraction, automated sequencing, and analysis to map the strings of As, Ts, Gs and Cs that make up genomic datasets. These datasets are too large for traditional applied statistical techniques, and the important signals are often incredibly small amid blaring technical noise, requiring far more sophisticated analysis techniques. Artificial intelligence and deep learning give us the power to draw clinically useful information from the genetic datasets obtained by sequencing.

    The precision of these analyses has become vital and is the way forward for disease detection and predisposition, empowering medical authorities to make fair and situationally informed decisions about patient treatment strategies. This kind of genomic profiling, prediction and disease management is useful for tailoring FDA-approved treatment strategies based on molecular disease drivers and the patient's molecular makeup.

    The present scenario encourages the design, development and testing of medicine based on existing genetic insights and models. Deep learning models are helping to analyze and interpret tiny genetic variations (like SNPs, Single Nucleotide Polymorphisms) which underlie crucial cellular processes like metabolism and DNA wear and tear. These models can also identify disease risk signatures, such as those for cancer, from various body fluids, and they have immense potential to revolutionize the healthcare ecosystem. Clinical data collection, however, is not streamlined and is done in a haphazard manner; making the data uniform, fetchable and combinable with genetic information would strengthen the value and interpretation of patient treatment decisions and their outcomes.

    There is a huge inflow of medical data from emerging human wearable technologies; integrating this health data with the ability to quickly carry out complex analyses on rich genomic databases over cloud technologies would revitalize humanity's disease-fighting capability. A final, still-emerging area of application is direct-to-consumer genomics (the success of 23andMe).

    This road map promises an end-to-end system to face disease in all its forms. Medical research and its applications, such as gene therapies, gene-editing technologies like CRISPR, molecular diagnostics and precision medicine, could be revolutionized by tailoring high-throughput computing methods and applying them to enhanced genomic datasets.
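
    As a minimal sketch of how such models consume sequence data (illustrative, not the workshop's code), a 1D convolutional network over one-hot-encoded DNA might be set up like this:

        # Sketch: a 1D CNN over one-hot-encoded DNA (A/C/G/T channels), the
        # typical input encoding for deep genomic models. Architecture and data
        # are toy; real variant-effect models train on large sequencing datasets.
        import numpy as np
        import tensorflow as tf

        def one_hot(seq):
            table = {"A": 0, "C": 1, "G": 2, "T": 3}
            out = np.zeros((len(seq), 4), dtype=np.float32)
            for i, base in enumerate(seq):
                out[i, table[base]] = 1.0
            return out

        model = tf.keras.Sequential([
            tf.keras.layers.Conv1D(32, kernel_size=8, activation="relu",
                                   input_shape=(100, 4)),    # motif detectors
            tf.keras.layers.GlobalMaxPooling1D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. risk yes/no
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

        x = np.stack([one_hot("ACGT" * 25)])   # one toy 100-bp sequence
        print(model.predict(x).shape)          # (1, 1)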

  • Badri Narayanan Gopalakrishnan / Shalini Sinha / Usha Rengaraju - Lifting Up: How AI and Big data can contribute to anti-poverty programs

    45 Mins
    Case Study
    Intermediate

    Ending poverty and zero hunger are the top two goals the United Nations aims to achieve by 2030 under its sustainable development program. Hunger and poverty are byproducts of multiple factors, and fighting them requires a multi-fold effort from all stakeholders. Artificial intelligence and machine learning have transformed the way we live, work and interact, yet the economics of business has limited their application to a few segments of society. A much more conscious effort is needed to bring the power of AI to the benefit of those who actually need it the most: people below the poverty line. Here we present our thoughts on how deep learning and big data analytics can be combined to enable effective implementation of anti-poverty programs.

    The advancements in deep learning and micro-diagnostics, combined with effective technology policy, are the right recipe for the progressive growth of a nation. Deep learning can help identify poverty zones across the globe from nighttime images, where the level of light correlates with economic activity. Once areas of lower economic growth are identified, geographic and demographic data can be combined to establish micro-level diagnostics of these underdeveloped areas, and the insights from the data can help plan an effective intervention program. Machine learning can further be used to identify potential donors, investors and contributors across the globe based on their skill set, interests, history, ethnicity, purchasing power and native connection to the location of the proposed program.

    Adequate resource allocation and efficient design of the program alone will not guarantee its success unless project execution is supervised at the grass-roots level. Data analytics can be used to monitor project progress and effectiveness and to detect anomalies in case of fraud or mismanagement of funds.
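
    As a hedged sketch of the night-lights idea described above (not the authors' model; random arrays stand in for imagery and labels), a small CNN regressing an economic indicator from nighttime tiles could be set up like this:

        # Sketch of the night-lights idea: regress an economic indicator from a
        # nighttime satellite tile with a small CNN. Random arrays stand in for
        # imagery; real approaches pair actual tiles with survey-based
        # consumption or asset indices.
        import numpy as np
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu",
                                   input_shape=(64, 64, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(1),              # predicted economic index
        ])
        model.compile(optimizer="adam", loss="mse")

        tiles = np.random.rand(8, 64, 64, 1).astype("float32")  # stand-in tiles
        index = np.random.rand(8).astype("float32")             # stand-in labels
        model.fit(tiles, index, epochs=1, verbose=0)
        print(model.predict(tiles[:1]).shape)                   # (1, 1)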