  • Kuldeep Singh - Simplify Experimentation, Deployment and Collaboration for ML and AI Models

    45 Mins
    Demonstration
    Intermediate

    Machine Learning and AI have changed the way businesses operate. However, the Data Science community still lacks good practices for organizing projects, collaborating effectively, and experimenting quickly to reduce "time to market".

    During this session, we will learn about one such open-source tool, "DVC", which helps make ML models shareable and reproducible. It is designed to handle large files, data sets, machine learning models, and metrics, as well as code.
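
    For a concrete taste, here is a minimal, hedged sketch of DVC's Python API; the repository URL, file path and tag are hypothetical placeholders:

      import dvc.api

      # Open the exact version of a DVC-tracked file that Git tag "v1.0"
      # points to, regardless of what is checked out locally.
      with dvc.api.open(
          "data/train.csv",                    # hypothetical tracked path
          repo="https://github.com/org/repo",  # hypothetical repository
          rev="v1.0",
      ) as f:
          header = f.readline()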

  • Dr. Mayuri Mehta - An Automated Tear Film Break Time based Dry Eye Disease Diagnosis using Deep Learning

    45 Mins
    Case Study
    Intermediate

    Deep Learning has been rapidly adopted in various spheres of healthcare for translating biomedical data into improved human healthcare. Deep learning based healthcare applications assist physicians in making faster, cheaper and more accurate diagnoses. Amongst the several successfully developed healthcare applications, in this case study, I would like to discuss how the commonly occurring Dry Eye Disease (DED) can be diagnosed accurately and speedily using a deep learning based automated system.

    DED is one of the most commonly occurring chronic diseases in the world today. It causes severe discomfort in the eye, visual disturbance and blurred vision, impacting the patient's quality of life. Certain factors such as prolonged use of electronic gadgets, old age, environmental conditions, medication, smoking habits and the use of contact lenses can disturb the tear film balance and lead to evaporation of moisture from the tear film, which causes dry eye disease. If DED is left untreated, it can cause infection, corneal ulcers or blindness. However, diagnosis of dry eye is a difficult task because it occurs due to different factors; an ophthalmologist sometimes requires multiple or repetitive tests for a proper diagnosis. Moreover, we observed the following drawbacks of clinical diagnosis: 1) diagnosis takes longer as it is done manually; 2) diagnosis is subjective in nature; 3) the accurate severity level of DED is not identified; and 4) medication may be prescribed for an incorrect period on the basis of an inaccurate severity level. Hence, we have proposed a deep learning based automated approach to diagnose DED considering Tear Film Breakup Time (TBUT), which is a standard diagnostic procedure. Our automated approach is intended to assist ophthalmologists and optometrists to bring objectivity to diagnosis, increase diagnostic accuracy and make diagnosis faster, so that the ophthalmologist can devote more time to counselling patients.

    To the best of our knowledge, ours is the first attempt to automate TBUT based DED diagnosis. Discussion of this case study will include a precise problem statement, the steps involved in the solution, the data set generated in consultation with ophthalmologists, experimental results, and the challenges faced and overcome to achieve this success. Finally, I will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
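
    As a hedged illustration of the core automation idea (the abstract gives no implementation details): if a trained frame-level model emits, for each video frame, the probability that the tear film has broken up, then TBUT reduces to locating the first above-threshold frame and converting its index to seconds. A minimal sketch:

      import numpy as np

      def estimate_tbut(frame_breakup_probs, fps, threshold=0.5):
          """Estimate Tear Film Breakup Time from per-frame model outputs.

          frame_breakup_probs: per-frame probability that the tear film has
          broken up (from some trained classifier, not shown). Frame 0 is
          assumed to be the last complete blink.
          """
          probs = np.asarray(frame_breakup_probs)
          breakup_frames = np.flatnonzero(probs >= threshold)
          if breakup_frames.size == 0:
              return None                    # no breakup observed in the clip
          return breakup_frames[0] / fps     # seconds from blink to breakup

      # A 30 fps clip where breakup is first detected at frame 150 -> 5.0 s
      print(estimate_tbut(np.r_[np.zeros(150), np.ones(60)], fps=30))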

  • Early Detection of Hypothyroidism in Infants using Deep Learning

    45 Mins
    Case Study
    Intermediate

    Deep Learning has been rapidly adopted in various spheres of healthcare for translating biomedical data into improved human healthcare. Deep learning based healthcare applications assist physicians in making faster, cheaper and more accurate diagnoses.

    Amongst several successfully developed healthcare applications, in this case study, I will speak on "Early Detection of Hypothyroidism in Infants using Deep Learning". The thyroid is a hormone-secreting gland which influences all metabolic activities in our body. Hypothyroidism is a common disorder of the thyroid that occurs when the thyroid gland produces an insufficient amount of thyroid hormone. Deficiency of thyroid hormone at birth leads to hypothyroidism in babies. The common hypothyroidism symptoms in infants are prolonged jaundice, a protruding tongue, a hoarse cry, a puffy face, pain and swelling in joints, goiter and umbilical hernia. During the early stage of hypothyroidism, babies may not have noticeable symptoms and hence doctors (Physicians, Paediatricians and Paediatric Endocrinologists) face difficulty in recognizing hypothyroidism in infants. If hypothyroidism in infants isn't treated at an early stage, severe complications such as mental retardation, slower linear growth, loss of IQ, poor muscle tone, sensorineural deafness, speech disorders and vision problems may arise. Due to these complications, the infant's growth cannot proceed as it would in healthy infants. To prevent such complications, we have designed and developed a novel approach to diagnose hypothyroidism in infants at an early stage using deep learning. To the best of our knowledge, ours is the first attempt to classify an infant as either healthy or suffering from hypothyroidism based only on facial symptoms, viz. puffy face, jaundice, swelling around the eyes, protruding tongue, and a flat-bridged nose with a broad fleshy tip. The classification output is a probability of hypothyroidism, to assist doctors in identifying hypothyroidism from just a facial image of the infant in the initial phase.

    The discussion of this case study will include a precise problem statement, the major steps involved in the solution, the data set created in consultation with several Endocrinologists, experimental results, and the challenges faced and overcome to achieve this success. Finally, I will briefly discuss possible extensions of our work and the future scope of research in the healthcare sector.
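
    A hedged sketch of the kind of image-to-probability classifier the abstract describes; the backbone, input size and untrained weights are illustrative assumptions, not the authors' published model:

      import torch
      import torch.nn as nn
      from torchvision import models

      # In practice one would start from pretrained weights and fine-tune on
      # labelled infant facial images; weights=None keeps the sketch offline.
      backbone = models.resnet18(weights=None)
      backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single logit

      def hypothyroidism_probability(image_batch):
          """image_batch: float tensor of shape (N, 3, 224, 224)."""
          with torch.no_grad():
              logits = backbone(image_batch)
          return torch.sigmoid(logits).squeeze(1)  # probability per image

      print(hypothyroidism_probability(torch.randn(2, 3, 224, 224)))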

  • Kunaal Naik - Feature Engineering with Excel and Machine Learning using Python

    240 Mins
    Workshop
    Beginner

    Machine Learning is an exciting topic to learn. However, Feature Engineering is even more crucial for beginners during the learning process. Often, feature engineering is explored and executed through code, not giving new learners enough time to grasp the concepts entirely.

    In this workshop, I plan to use Excel to clean data by imputing missing values, create new features, split the data into train and test datasets, and build intuition for modelling. Learners will get a visual cue of everything that happens in the background.

    Once the learners are confident about the process, making them do the same thing with code will help them understand the feature engineering topic better.

    Once the pre-processing part is over, teaching various models ranging from Logistic Regression to Deep Learning models on the cleaned data will help them grasp the modelling process better.
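
    A minimal sketch of what the code half might look like; the tiny inline dataset and column names are illustrative, not the workshop's actual material:

      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # The same steps done by hand in Excel: impute, engineer, split, model.
      df = pd.DataFrame({
          "Age":      [22, None, 38, 26, None, 35, 54, 2],
          "Fare":     [7.2, 71.3, 8.0, 7.9, 8.4, 53.1, 51.9, 21.1],
          "SibSp":    [1, 1, 0, 1, 0, 1, 0, 3],
          "Survived": [0, 1, 1, 1, 0, 1, 0, 0],
      })
      df["Age"] = df["Age"].fillna(df["Age"].median())      # impute missing
      df["FarePerPerson"] = df["Fare"] / (df["SibSp"] + 1)  # new feature

      X, y = df[["Age", "Fare", "FarePerPerson"]], df["Survived"]
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=42)

      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print(model.score(X_test, y_test))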

  • Vaibhav Gambhir / Somil Gupta - Application of Hybrid Deep Learning Models in Churn Management for Automotive Sector

    45 Mins
    Case Study
    Advanced

    Customer churn prediction is the task of finding whether a customer will leave a particular service or not. Effective churn rate prediction is an essential challenge for the service industry, as the cost of acquiring new customers is far greater than that of retaining existing ones. Preventive actions, new marketing strategies and long-term service contracts can be arranged if churners are identified at an early stage.

    In this proposal, we discuss the application of hybrid autoencoder-attention models to calculate the churn score of automobile service consumers and provide dynamic, personalized discounting at the right stage to avert churn. We also show the implementation of reinforcement learning to update and optimize the discounting portfolio to minimize loss by targeting only high-potential customers. Traditionally available classification and scoring algorithms require manual feature extraction and cannot handle skewed datasets. The hybrid model, however, can produce better results even with smaller training sets. The autoencoder network represents the data in a latent space while the attention layer simultaneously works to focus on the features contributing most to the target prediction.


    With this concise understanding of transactional data, industry professionals can better estimate when existing customers might be open to considering an upgrade to premium vehicles and when they are on the verge of churning out, and act accordingly.
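
    A hedged, minimal sketch of the hybrid idea (an autoencoder compressing transactional features into a latent space, with a softmax attention gate reweighting the input features); all dimensions and layer choices are illustrative assumptions, not the authors' production model:

      import tensorflow as tf
      from tensorflow.keras import layers

      n_features = 32
      inputs = layers.Input(shape=(n_features,))

      # Attention gate: one weight per feature, summing to 1
      attn = layers.Dense(n_features, activation="softmax",
                          name="feature_attention")(inputs)
      attended = layers.Multiply()([inputs, attn])

      # Autoencoder branch: reconstruct the inputs from a small latent code
      latent = layers.Dense(8, activation="relu", name="latent")(attended)
      reconstruction = layers.Dense(n_features, name="reconstruction")(latent)

      # Churn head: score the customer from the latent representation
      churn = layers.Dense(1, activation="sigmoid", name="churn")(latent)

      model = tf.keras.Model(inputs, [reconstruction, churn])
      model.compile(optimizer="adam",
                    loss={"reconstruction": "mse",
                          "churn": "binary_crossentropy"})
      model.summary()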

  • KRITI DONERIA - Trust Building in AI systems: A critical thinking perspective

    90 Mins
    Tutorial
    Beginner

    How do I know when to trust AI, and when not to?

    Who goes to jail if a self-driving car kills someone tomorrow?

    Did you know scientists say people will believe anything, repeated often enough?

    Designing AI systems is also an exercise in critical thinking, because an AI is only as good as its creator. This talk is for discussions like these, and more.

    With the exponential increase in available computing power, several AI algorithms that were mere papers decades ago have become implementable. For a data scientist, it is very tempting to use the most sophisticated algorithm available. But given that AI's applicability has moved beyond academia and into the business world, are numbers alone sufficient? Putting context to AI, or XAI (explainable AI), takes the black box out of AI to enhance human-computer interaction. This talk shall revolve around the interpretability-complexity trade-off; the challenges, drivers and caveats of the XAI paradigm; and an intuitive demo of translating the inner workings of an ML algorithm into human-understandable formats to achieve more business buy-in.
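
    As one concrete taste of XAI tooling, a hedged sketch using the SHAP library (a common approach; the session's own demo may use a different method):

      import shap
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier

      X, y = load_breast_cancer(return_X_y=True, as_frame=True)
      model = RandomForestClassifier(random_state=0).fit(X, y)

      # Each SHAP value is a feature's additive contribution to a single
      # prediction -- a number a stakeholder can reason about.
      explainer = shap.TreeExplainer(model)
      shap_values = explainer.shap_values(X.iloc[:100])
      shap.summary_plot(shap_values, X.iloc[:100])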

    Prepare to be amused and enthralled at the same time.

  • Harsh Mishra - Identification of fraud transactions using self organizing map (SOM)

    45 Mins
    Talk
    Beginner

    Recent statistics from the RBI and the World Bank indicate that 80.245 million transactions occur each second from India to worldwide, and 1.06 billion transactions occur each second all over the world, generating an enormous database. Analyzing each money trail and identifying it as fraudulent or not is nearly impossible with conventional methods. To make it possible, we have a concept called the self-organizing map (SOM), which reduces data dimensionality: a SOM is designed to convert a complex data matrix into a two-dimensional one. This makes it feasible to build deep learning models that can be trained on these heavy datasets to analyze transaction activity, enabling banks to stop fraudulent transactions.
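
    A hedged sketch of this idea using the MiniSom library: map transactions onto a small two-dimensional grid, then flag those that sit far from every learned prototype (high quantization error) as fraud candidates. The random data stands in for scaled transaction features:

      import numpy as np
      from minisom import MiniSom

      transactions = np.random.rand(1000, 15)   # stand-in for scaled features

      som = MiniSom(10, 10, input_len=15, sigma=1.0,
                    learning_rate=0.5, random_seed=42)
      som.train_random(transactions, num_iteration=5000)

      # Quantization error: distance from each transaction to its best
      # matching unit; unusually large values suggest anomalous activity.
      q_errors = np.array([np.linalg.norm(t - som.quantization(np.array([t]))[0])
                           for t in transactions])
      print(np.argsort(q_errors)[-10:])         # ten most anomalous rows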

  • Ravi Ranjan / Abhishek Kumar / Subarna Rana - Building End-to-End Deep Reinforcement Learning based RecSys with TensorFlow and Kubeflow

    90 Mins
    Workshop
    Intermediate

    Recommendation systems (RecSys) are the core engine for any personalized experience on eCommerce and online media websites. Most companies leverage RecSys to increase user interaction, enrich shopping potential and generate upsell & cross-sell opportunities. Amazon uses recommendations as a targeted marketing tool throughout its website, contributing 35% of its total revenue [1]. Netflix users watch ~75% of the recommended content and artwork [2]. Spotify employs a recommendation system to update personal playlists every week so that users won't miss newly released music by artists they like; this has helped Spotify increase its monthly users from 75 million to 100 million [3]. YouTube's personalized recommendations help users find relevant videos quickly and easily, accounting for around 60% of video clicks from the homepage [4].

    In general, RecSys generates recommendations based on user browsing history and preferences, past purchases and item metadata. Most existing recommendation systems are based on three paradigms: collaborative filtering (CF) and its variants, content-based recommendation engines, and hybrid recommendation engines that combine content-based and CF or exploit more information about users in content-based recommendation. However, they suffer from limitations like rapidly changing user data and preferences, static recommendations, grey sheep, cold start and malicious users.

    Classical RecSys algorithms like content-based recommendation perform well on item-to-item similarity, but will only recommend items related to one category and may not recommend anything in other categories, as the user never viewed those items before. Collaborative filtering solves this problem by exploiting the user's behavior and preferences over items when recommending items to new users. However, collaborative filtering suffers from a few drawbacks like cold start, popularity bias and sparsity. Classical recommendation models also treat recommendation as a static process. We can address static recommendations on rapidly changing user data with reinforcement learning (RL): an RL based RecSys captures the user's temporal intentions and responds promptly. However, as the user action and item matrix size increases, it becomes difficult to provide recommendations using plain RL. Deep RL based solutions like actor-critic and deep Q-networks overcome all the aforementioned drawbacks.

    However, there are two major challenges when deep RL is applied to RecSys: (a) the large and dynamic action (item) space, and (b) the computational cost of selecting an optimal recommendation. The conventional Deep Q-learning architecture takes only the state as input and outputs Q-values for all actions. This architecture is suitable for scenarios with a high-dimensional state space and a small, fixed action space, not for dynamic action space scenarios like recommender systems. The Actor-Critic architecture is preferred since it is suitable for a large and dynamic action space and can also reduce redundant computation compared to alternative architectures. It provides recommendations considering dynamic changes in user preference, incorporates return patterns of users, and increases diversity on large datasets, making it one of the most effective recommendation models.

    Model building is just one component of end-to-end machine learning. We will also investigate the holistic view of productionizing RecSys models using Kubeflow. A healthy learning pipeline includes components such as data ingestion, transformation, feature engineering, validation, hyper-parameter tuning, A/B testing and deployment. Typical challenges when creating such a system are low latency, high performance, real-time processing, scalability, model management and governance. Kubeflow on the Kubernetes engine plays a crucial role in stitching together the training, serving, monitoring and logging components. Kubeflow Pipelines (a core component of Kubeflow) makes the implementation of training pipelines simple and concise, without worrying about the low-level details of managing a cluster. The Kubernetes cluster, in turn, takes care of system-level challenges such as scalability and latency by providing features such as auto-scaling, load balancing and much more. By the end of this workshop, you will have built an effective learning system which leverages Kubeflow on the Kubernetes engine to deploy a high-performing, scalable recommendation system using deep reinforcement learning.
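
    A hedged, minimal sketch of the actor-critic recommendation idea described above; the dimensions, random item embeddings and dot-product ranking are illustrative assumptions, not the workshop's exact implementation:

      import torch
      import torch.nn as nn

      state_dim, action_dim, n_items = 64, 32, 10_000

      # Actor: maps a user-state vector to a point in item-embedding space.
      actor = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                            nn.Linear(128, action_dim), nn.Tanh())
      # Critic: scores a (state, action) pair -- an estimate of Q(s, a).
      critic = nn.Sequential(nn.Linear(state_dim + action_dim, 128),
                             nn.ReLU(), nn.Linear(128, 1))

      item_embeddings = torch.randn(n_items, action_dim)  # learned in practice

      def recommend(state, k=5):
          action = actor(state)                # continuous "action"
          scores = item_embeddings @ action    # similarity to every item
          return scores.topk(k).indices        # top-k item ids

      state = torch.randn(state_dim)
      print(recommend(state))
      print(critic(torch.cat([state, actor(state)])))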

    References:

    1. "https://www.mckinsey.com/industries/retail/our-insights/how-retailers-can-keep-up-with-consumers":https://www.mckinsey.com/industries/retail/our-insights/how-retailers-can-keep-up-with-consumers
    2. "https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429":https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429
    3. "https://www.bloomberg.com/news/articles/2016-09-21/spotify-is-perfecting-the-art-of-the-playlist":https://www.bloomberg.com/news/articles/2016-09-21/spotify-is-perfecting-the-art-of-the-playlist
    4. "https://dl.acm.org/citation.cfm?id=1864770":https://dl.acm.org/citation.cfm?id=1864770
    5. "Deep Reinforcement Learning based Recommendation with Explicit User-Item Interactions Modelling": https://arxiv.org/pdf/1810.12027.pdf
    6. "Deep Reinforcement Learning for Page-wise Recommendations": https://arxiv.org/pdf/1805.02343.pdf
    7. "Deep Reinforcement Learning for List-wise Recommendations": https://arxiv.org/pdf/1801.00209.pdf

  • Dr. Sri Vallabha Deevi - How to train your dragon - Reinforcement learning from scratch

    90 Mins
    Workshop
    Beginner

    Reinforcement learning helped Google's "AlphaGo" beat the world's best Go player. Have you wondered if you too can train a program to play a simple game?

    Reinforcement learning is a simple yet powerful technique that is driving many applications, from recommender systems to autonomous vehicles. It is best suited to handle situations where the behavior of the system cannot be described in simple rules. For example, a trained reinforcement learning agent can understand the scene on the road and drive the car like a human.

    In this workshop, I will demonstrate how to train an RL agent to a) cross a maze and b) play a game of Tic-Tac-Toe against an intelligent opponent, with the help of plain Python code. As you participate in this workshop, you will master the basics of reinforcement learning and acquire the skills to train your own dragon.
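
    A hedged, minimal taste of the plain-Python approach: tabular Q-learning on a toy one-dimensional corridor "maze" (the workshop's own maze and Tic-Tac-Toe agents will be richer):

      import random

      # States 0..5 along a corridor; reward only for reaching the goal.
      N_STATES, GOAL, ACTIONS = 6, 5, (-1, +1)
      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
      alpha, gamma, epsilon = 0.1, 0.9, 0.1

      for episode in range(500):
          s = 0
          while s != GOAL:
              # Epsilon-greedy action selection
              a = random.choice(ACTIONS) if random.random() < epsilon \
                  else max(ACTIONS, key=lambda a: Q[(s, a)])
              s_next = min(max(s + a, 0), GOAL)
              reward = 1.0 if s_next == GOAL else 0.0
              # Move Q(s, a) toward reward + gamma * max_a' Q(s', a')
              Q[(s, a)] += alpha * (reward
                                    + gamma * max(Q[(s_next, b)] for b in ACTIONS)
                                    - Q[(s, a)])
              s = s_next

      # The learned policy: move right from every non-goal state.
      print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})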

  • Srikanth K S - Actionable Rules from Machine Learning Models

    45 Mins
    Demonstration
    Intermediate

    Beyond predictions, some ML models provide rules to identify actionable sub-populations in the support-confidence-lift paradigm. Along with making the models interpretable, rules make it easy for stakeholders to decide on a plan of action. We discuss rule-based models in production, rule-based ensembles, and anchors, using the R package tidyrules.
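
    tidyrules itself is an R package; as a language-neutral illustration of the underlying idea (turning a fitted model into human-readable rules), here is a hedged Python sketch using scikit-learn's decision-tree text export, a deliberately swapped-in stand-in:

      from sklearn.datasets import load_iris
      from sklearn.tree import DecisionTreeClassifier, export_text

      X, y = load_iris(return_X_y=True)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

      # Each root-to-leaf path below is an actionable rule of the kind the
      # talk describes: readable and applicable without running the model.
      print(export_text(tree, feature_names=[
          "sepal_len", "sepal_wid", "petal_len", "petal_wid"]))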

  • Tushar Mittal - Machine Learning In Production

    45 Mins
    Talk
    Beginner

    People spend most of their time and effort building really good generalized models that give satisfactory numbers on different metrics. Most tutorials, blogs and articles focus on explaining concepts such as feature engineering, model selection and hyperparameter tuning, while using a model in production is a topic that is often overlooked.

    After you have built your perfect model, the next question is: how do I serve this to people so they can use it too? This is the question I faced a while ago when I wanted to use my model in a webapp and display results based on user inputs. That is when I started exploring this part of AI, which is equally important but far less talked about.

    The real value of machine learning models lies in production, but most models end up in a Jupyter Notebook (in the form of Python code) or inside a folder on a local computer. If you have faced the same problems, then this talk is for you!

    In this talk I'll cover how systems work in production, how to plan their architecture, the various ways you can deploy your model to production, their pros and cons, and when to use what. All these topics will be supported by code snippets and demos, including real-life examples. The talk will also revolve around making the best use of available open-source tools and frameworks to build a reliable and scalable pipeline for your machine learning system.
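
    A hedged sketch of one common serving pattern such a talk might cover: wrapping a scikit-learn model in a small FastAPI service (the endpoint and schema are illustrative; in production the model would come from an artifact store rather than be trained at startup):

      from fastapi import FastAPI
      from pydantic import BaseModel
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression

      X, y = load_iris(return_X_y=True)
      model = LogisticRegression(max_iter=1000).fit(X, y)

      app = FastAPI()

      class Features(BaseModel):
          values: list[float]   # the four iris measurements

      @app.post("/predict")
      def predict(features: Features):
          prediction = model.predict([features.values])[0]
          return {"prediction": int(prediction)}

      # Run with: uvicorn serve:app --reload  (if this file is serve.py)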

  • Srikanth Gopalakrishnan - Role of Hessian and its Eigen pair in Deep Learning

    45 Mins
    Talk
    Advanced

    When we speak of optimization methods in Machine Learning/Deep Learning, stochastic gradient descent and its variants are quite popular. Choosing an optimal learning rate is crucial for reaching the minima. While some learning rates are chosen for theoretical reasons, others are empirically driven. The Hessian plays an important role in understanding and driving learning rates towards faster convergence. This topic is increasingly popular among deep learning researchers in the quest to accelerate training time with effective convergence properties.

    In this talk, we will analyze the role of the Hessian in shaping the error function's curvature and understand the implications of its eigenvalues. Understanding the eigenvalue spectrum can reveal a lot about the behavior of the optimization algorithm.
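
    A hedged sketch of one such analysis: estimating the largest Hessian eigenvalue without ever forming the Hessian, via power iteration on Hessian-vector products (the toy loss is illustrative):

      import torch

      torch.manual_seed(0)
      w = torch.randn(10, requires_grad=True)
      loss = (w ** 4).sum() + (w ** 2).sum()   # toy non-quadratic loss

      # create_graph=True lets us differentiate through the gradient itself.
      grad, = torch.autograd.grad(loss, w, create_graph=True)

      v = torch.randn_like(w)
      for _ in range(50):
          v = v / v.norm()
          # Hessian-vector product: d/dw (grad . v) = H v
          hv, = torch.autograd.grad(grad @ v, w, retain_graph=True)
          eigenvalue = v @ hv                  # Rayleigh quotient estimate
          v = hv
      print(f"largest Hessian eigenvalue ~ {eigenvalue.item():.4f}")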

  • Tushar Mittal - Train your GANs Faster and Achieve Better Results

    20 Mins
    Experience Report
    Intermediate

    GANs have been in trend since they were introduced back in 2014, and have produced some amazing results in every domain, ranging from images to videos and even audio.

    When reading about and understanding the working of GANs, they seem very intuitive and not that hard to train. It is when you get into training them that you realize it is quite hard to train a GAN architecture and achieve good results; I learnt this the hard way.

    I trained my first ever GAN as part of a contest on Kaggle, where the task was to generate new, unseen images of dogs using the given 20,000 images. I gladly entered the competition thinking, how hard could it be? But as I trained my first model and analyzed the results, I realized that it is not as simple as it looks. As I progressed through the competition, I participated in various discussions, read the kernels submitted by others and tried out various approaches. This taught me a lot about the right ways to train GANs. I have trained various GANs on several datasets since then.

    So in this talk I want to share the tips and tricks that worked for me in achieving good results, so you can use them directly and not have to learn them the hard way as I did.
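
    A few widely used tricks, sketched in PyTorch and hedged: these are common community heuristics (one-sided label smoothing, beta1 = 0.5 for Adam, a larger discriminator learning rate), not necessarily the talk's exact list:

      import torch
      import torch.nn as nn

      generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
      discriminator = nn.Sequential(nn.Linear(784, 1))

      # One-sided label smoothing: train D against 0.9 instead of 1.0
      real_labels = torch.full((64, 1), 0.9)

      # beta1 = 0.5 stabilizes training; D often gets a larger learning rate
      opt_d = torch.optim.Adam(discriminator.parameters(),
                               lr=4e-4, betas=(0.5, 0.999))
      opt_g = torch.optim.Adam(generator.parameters(),
                               lr=1e-4, betas=(0.5, 0.999))

      loss_fn = nn.BCEWithLogitsLoss()
      d_loss_real = loss_fn(discriminator(torch.randn(64, 784)), real_labels)
      print(d_loss_real.item())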

  • Gopalan Oppiliappan / Divyasree Tummalapalli / Sakina Pitalwala - A strategic approach to setting up an AI Center of Excellence (CoE)

    45 Mins
    Case Study
    Advanced

    The word AI creates quite a lot of excitement and expectation in everyone's mind. It also creates a sense of 'aura': some sort of magical expectation that it can perform everything that a human can. This has an adverse impact: the technique precedes the problem, and every idea seems to be a candidate that can be solved by AI/ML. Hence, managing this hype and expectation is the first and foremost challenge a Data Scientist and the leader of a CoE face. The second challenge is the data readiness of the organization.

    This presentation aims at sharing the wisdom gained from the experience of setting up a global AI practice, the trials and tribulations one goes through in balancing the expectations from the team, stakeholders and organization.

    We will share some of the sweet and bitter experiences, so that we are fearless in articulating the reality when stakeholders come with hyped-up expectations.

  • Nikita Bhandari - Applications of Machine Learning and Deep Learning in Genomics: DNA Sequence Classification

    45 Mins
    Demonstration
    Intermediate

    Genomics is the study of the functions and information structures encoded in the DNA sequence of a cell. The few cell variables that we can observe are the outcome of many interacting ones that we cannot. These cell variables are biological sequences: the primary structure of a biological macromolecule. The term biological sequence is most often used to refer to a DNA sequence, and the processes through which these sequences interact are hidden and abstract. As the field of genomics is exploding in terms of data due to a breakthrough technology called Next Generation Sequencing, regular statistical methods are not very effective for tasks like identification of splice sites, promoters and terminators, classification of diseased versus healthy genes, identifying TSSs, identifying protein binding sites, and so on. Therefore, to extract knowledge from big data in bioinformatics and to understand the mechanisms underlying gene expression, Machine Learning and Deep Learning are being used widely. Our emphasis will be on how to classify a DNA sequence into a particular category using various machine learning and deep learning techniques, the types of data to be considered for background purposes, and how to preprocess raw data.
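
    A hedged sketch of a typical preprocessing step for such models: one-hot encoding a raw DNA string into a numeric matrix ready for a convolutional or recurrent classifier:

      import numpy as np

      BASES = "ACGT"

      def one_hot(seq):
          """Encode a DNA string as a (len(seq), 4) matrix; ambiguous bases
          (e.g. N) are left as all-zero rows."""
          m = np.zeros((len(seq), 4), dtype=np.float32)
          for i, base in enumerate(seq.upper()):
              if base in BASES:
                  m[i, BASES.index(base)] = 1.0
          return m

      print(one_hot("ACGTN"))   # last row is all zeros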

  • Kanimozhi U - Semantic Web Technologies for Business and Industry Challenges

    45 Mins
    Talk
    Beginner

    The term semantic technologies represents a fairly diverse family of technologies that have been in existence for a long time and seek to help derive meaning from information. Some examples of semantic technologies include natural language processing (NLP), data mining, artificial intelligence (AI), category tagging, and semantic search. Semantic technology reads and tries to understand language and words in context. Technically, this approach is based on different levels of analysis: morphological and grammatical analysis; logical, sentence and lexical analysis; in other words, natural language analysis. The major focus of this talk will be an overview of Semantic Technologies: what differentiates them and the organisational needs they address. The most relevant Semantic Web concepts and methodologies will be discussed, helping us develop an understanding of the use cases where this technology approach provides substantial advantages.

    Semantic Web standards are used to describe metadata but also have great potential as a general data format for data communication and data integration. Machine learning solutions have been developed to support the management of ontologies, the semi-automatic annotation of unstructured data, and the integration of semantic information into web mining. Machine learning can be employed to analyze distributed data sources described in Semantic Web formats and to support approximate Semantic Web reasoning and querying. From this talk you will get acquainted with the specialised terminology that enables us to dive deeper into the semantic technologies field, and gain a general view of the tools and methods used to develop semantic applications, getting to know the business domains in which it can be useful to embrace semantic solutions.

    Learn different knowledge modelling approaches to understand which applications require taxonomies or ontologies as the underlying knowledge graph. We will also discuss the value of consistent metadata of different types and how they add up to a semantic layer around our digital assets. We will see how metadata schemes and their respective values can be managed with controlled vocabularies and taxonomies, and look at the Linked Data structure and its serialisation formats, such as the Resource Description Framework (RDF) and its subject-predicate-object expressions, with a fundamental grounding in graph-based data.

    The most important linguistic concepts applied in text mining operations will be discussed, along with different word forms, homographs, lemmatisation, ambiguity and more. We will also see how linguistic concepts in combination with knowledge modelling are used for semantic text mining. You will get an overview of how Semantic Data Integration provides the conversion of data into RDF, with a step-by-step process for how different data types can be transformed into RDF. SPARQL, the query language of the Semantic Web, will be demonstrated with a concrete example of how to make use of multi-faceted data. Finally, Semantic Web architecture for organisations gives an overview of how semantic technology components play together to solve a business use case.
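
    A hedged, minimal sketch of the RDF and SPARQL ideas above using the rdflib library; the example vocabulary (ex:Product, ex:price) is made up for illustration:

      from rdflib import Graph, Literal, Namespace, RDF

      EX = Namespace("http://example.org/")
      g = Graph()
      # Subject-predicate-object triples
      g.add((EX.widget, RDF.type, EX.Product))
      g.add((EX.widget, EX.price, Literal(9.99)))

      # SPARQL: find every product and its price
      results = g.query("""
          PREFIX ex: <http://example.org/>
          SELECT ?product ?price WHERE {
              ?product a ex:Product ;
                       ex:price ?price .
          }
      """)
      for product, price in results:
          print(product, price)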

    Semantic technologies are algorithms and solutions that bring structure and meaning to information. Semantic Web technologies specifically are those that adhere to a specific set of W3C open technology standards that are designed to simplify the implementation of not only semantic technology solutions, but other kinds of solutions as well.

  • Andrew Murphy - How to communicate anything to anyone and see a real impact - communicating effectively and efficiently

    90 Mins
    Workshop
    Beginner

    Everyone thinks they are good at communication, but... how many times have you been at an event talking to someone you really didn't want to talk to? Been sold to by someone who didn't get that you weren't interested?

    These are examples of bad communication, and they all have a few things in common: they weren't efficient, they weren't effective, and the people involved didn't go into the communication with the right mindset and the right preparation.

    Also, sorry to say it, but your own communications probably suck too. But after this talk you'll have a leg up on your competition: you'll know your communication sucks... and you'll know how to fix it.
