Introduction to reinforcement learning using Python and OpenAI Gym

Bengaluru · Aug 31st, 11:30 AM - 01:00 PM IST · Neptune · 128 Interested

Reinforcement Learning algorithms are becoming more sophisticated every day, as evidenced by the recent victories of AlphaGo and AlphaGo Zero (https://deepmind.com/blog/alphago-zero-learning-scratch/). OpenAI provides the OpenAI Gym toolkit for research and development of Reinforcement Learning algorithms.

In this workshop, we will introduce the basic concepts and algorithms of Reinforcement Learning, with hands-on coding throughout.

Content

  • Introduction to Reinforcement Learning concepts and terminology
  • Setting up OpenAI Gym and other dependencies
  • Introducing OpenAI Gym and its APIs
  • Implementing simple algorithms using a couple of OpenAI Gym environments
  • Demo of Deep Reinforcement Learning using one of the OpenAI Gym Atari games
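The Gym API at the heart of the items above centres on two calls, `reset()` and `step(action)`. As a flavour of the hands-on portion, here is a minimal sketch of that interaction loop using a hypothetical toy "corridor" environment (not part of Gym) so it runs without Gym installed; an environment created with `gym.make(...)` exposes the same loop shape.

```python
import random

class CorridorEnv:
    """Toy environment with a Gym-style API: the agent starts at position 0
    and must reach position 4 by stepping right (action 1) or left (action 0)."""
    def __init__(self, length=4):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # Move, clamped to the corridor; reward 1 only on reaching the goal.
        self.pos = min(self.length, max(0, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # observation, reward, done, info

random.seed(42)
env = CorridorEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([0, 1])            # a random policy, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```

With Gym installed, replacing `CorridorEnv()` with `gym.make('CartPole-v0')` leaves the loop unchanged, which is exactly why the toolkit is convenient for experimenting with algorithms.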


Outline/Structure of the Workshop

The session will introduce reinforcement learning concepts using Python code, and participants will follow along hands-on by coding.

Details of the reinforcement learning topics and libraries are covered in the abstract.

Learning Outcome

Understand the basics of reinforcement learning

Use OpenAI Gym for developing and testing reinforcement learning algorithms

Target Audience

Anyone who wants to learn the basics of reinforcement learning and get their hands dirty!

Prerequisites for Attendees

Participants must be well versed in Python. Some exposure to Python analytics libraries such as NumPy, pandas, Keras, TensorFlow, or PyTorch would help.

Please make sure you have the following prerequisites installed on your laptop for the hands-on activity:

1. Python 3.5+ environment (pure Python / Anaconda / Miniconda)

2. Code editor (PyCharm / Spyder, which comes with Anaconda / Sublime, etc.)

3. Python packages as per the instructions in the README at https://github.com/saurabh1deshpande/odsc-2018


Submitted 5 years ago

  • Favio Vázquez

    Favio Vázquez - Agile Data Science Workflows with Python, Spark and Optimus

    480 Mins
    Workshop
    Intermediate

    Cleaning, preparing, transforming, and exploring data is the most time-consuming and least enjoyable data science task, but one of the most important. With Optimus we've solved this problem for small and huge datasets alike, while also improving the whole data science workflow and making it easier for everyone. You will learn how the combination of Apache Spark and Optimus with the Python ecosystem can form a whole framework for Agile Data Science, allowing people and companies to go further, beyond common sense and intuition, to solve complex business problems.

  • Joy Mustafi

    Joy Mustafi - The Artificial Intelligence Ecosystem driven by Data Science Community

    Joy Mustafi
    Founder and President
    MUST Research
    5 years ago
    Sold Out!
    45 Mins
    Talk
    Intermediate

    Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users' understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs. They can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain or mind senses, reasons, and responds to stimulus. It is an interdisciplinary field in which a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience, and biology.

    Project features:

    - Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.
    - Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, and services, as well as with people.
    - Iterative and stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.
    - Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task, and goal.

    They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided). A set of cognitive systems is implemented and demonstrated as the project J+O=Y.

  • 480 Mins
    Workshop
    Intermediate

    You have been hearing about machine learning (ML) and artificial intelligence (AI) everywhere. You have heard about computers recognizing images, generating speech, natural language, and beating humans at Chess and Go.

    The objectives of the workshop:

    1. Learn machine learning, deep learning and AI concepts

    2. Provide hands-on training so that students can write applications in AI

    3. Provide the ability to run real machine learning production examples

    4. Understand programming techniques that underlie the production software

    The concepts will be taught in Julia, a modern language for numerical computing and machine learning, but they can be applied in any language the audience is familiar with.

    The workshop will be structured as "reverse classroom" laboratory exercises, which have proven to be engaging and effective learning devices. Knowledgeable facilitators will help students learn the material and extrapolate to custom real-world situations.

  • Vishal Gokhale

    Vishal Gokhale - Fundamental Math for Data Science

    Vishal Gokhale
    Sr. Consultant
    Xnsio
    5 years ago
    Sold Out!
    480 Mins
    Workshop
    Beginner

    By now it is evident that a solid math foundation is indispensable if one has to get into data science in an honest-to-goodness way. Unfortunately, for many of us math was just a means to get better scores at school, and never really a means to understand the world around us.
    That systemic failure of the education system causes many of us to feel a "gap" when learning data science concepts. It is high time that we acknowledge that gap and take remedial action.

    The purpose of the workshop is to develop an intuitive understanding of the concepts.
    We let go of the fear of rigorous notation and embrace the rationale behind it.
    The intended key take away for participants is confidence to deal with math.

  • Ujjyaini Mitra

    Ujjyaini Mitra - When the Art of Entertainment ties the knot with Science

    20 Mins
    Talk
    Advanced

    On the face of it, entertainment is a pure art form, but there is a huge part of the art that science can back. AI can drive many human-intensive tasks in the media industry, turning gut-based decisions into data-driven decisions. Can we create a promo for a movie through AI? How about knowing which part of a video is causing disengagement among our audiences? Could AI help content editors? How about assisting scriptwriters through AI?

    I will talk about a few specific experiments done especially on Voot Original content: on binging, hooking, content editing, audience disengagement, etc.

  • Gunjan Juyal

    Gunjan Juyal - Building a Case for a Standardized Data Pipeline for All Your Organizational Data

    Gunjan Juyal
    Sr. Consultant
    Xnsio
    5 years ago
    Sold Out!
    20 Mins
    Experience Report
    Beginner

    Organizations of all sizes and domains today face a data explosion problem, driven by a proliferation of data management tools and techniques. A very common scenario is the creation of silos of data and data products, which increases the system's complexity across the whole data lifecycle, from data modeling to storage and processing infrastructure.

    High complexity = high system maintenance overheads = sluggish decision making. Another side effect is divergence of the implemented system's behaviour from high-level business objectives.

    In this talk we look at Zeta's experience as a case study for reducing this complexity by defining and tackling various concerns at well-defined stages, so as to prevent a build-up of complexity.

  • Sai Charan J

    Sai Charan J - Self Learning - Data Science

    Sai Charan J
    Data Scientist
    MTW Labs
    4 years ago
    Sold Out!
    45 Mins
    Workshop
    Beginner

    For people from a non-technical background, I recommend formal academic programs. Raising the bar further is the data-driven scientist: the self-taught data scientist! These people are trendsetters; they go deep and play with data. They love data crunching and are seen solving real-time problems!

    If that's you, then let's wave our hands!

  • Harshad Saykhedkar

    Harshad Saykhedkar - Linear Algebra for Machine Learning Workshop

    240 Mins
    Workshop
    Beginner

    Linear algebra, optimization, and statistics are the basis of all machine learning. This workshop will cover the linear algebra required for machine learning in a hands-on way, through short code examples. We will cover basic theory, interesting applications, and the big picture.
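As a taste of the "short code examples" style this workshop describes, a least-squares line fit, a staple where linear algebra meets machine learning, takes only a few lines of NumPy (assuming NumPy is installed; the data here is made up for illustration):

```python
import numpy as np

# Fit y = w0 + w1 * x by ordinary least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exactly y = 1 + 2x

A = np.column_stack([np.ones_like(x), x])   # design matrix: columns [1, x]
w, *_ = np.linalg.lstsq(A, y, rcond=None)   # solves min ||A w - y||^2
print(w)
```

Because the data lies exactly on a line, the recovered weights are (1, 2); with noisy data, the same call returns the best-fit coefficients in the least-squares sense.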

  • Saurabh Deshpande

    Saurabh Deshpande - Introduction to Natural Language Processing using Python

    90 Mins
    Workshop
    Intermediate

    The Python ecosystem for natural language processing has evolved in the last decade, and a rich set of open source tools and datasets is now available.

    In this session, we will go over the basics of natural language processing, with sample code demonstrations and hands-on tutorials using the following well-known Python libraries:

    1. NLTK: one of the oldest and most famous libraries for natural language analysis, popular with researchers
    2. Stanford CoreNLP: a production-ready NLP library (written in Java, but with many open source Python wrappers)
    3. spaCy: a comparatively new Python NLP toolkit marketed as an "industrial-strength" library

    The session will introduce various use cases and basic concepts related to natural language processing, with demos and hands-on tutorials.

    The following NLP fundamentals will be discussed:

    - Syntax vs. Semantics

    - Regular Expressions (Demo and Hands on)

    - Word Embeddings (Demo and Hands on)

    - Word Tokenization (Demo and Hands on)

    - Part of Speech Tagging (Demo and Hands on)

    - Text Similarity (Demo and Hands on)

    - Text Summarization

    - Named Entity Recognition (Demo and Hands on)

    - Sentiment Analysis (Demo and Hands on)
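As a flavour of the "Word Tokenization" and "Text Similarity" topics above, both can be sketched with the standard library alone; libraries such as NLTK and spaCy provide far more robust tokenizers and vector representations, but the underlying idea, cosine similarity over bag-of-words counts, is this small (the example sentences are made up):

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Naive lowercase word tokenization via regex; NLTK and spaCy
    offer linguistically aware tokenizers for real use."""
    return re.findall(r"[a-z']+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc1 = Counter(tokenize("Natural language processing with Python"))
doc2 = Counter(tokenize("Processing natural language using Python tools"))
print(round(cosine_similarity(doc1, doc2), 2))
```

The two sentences share four of their word types, so the similarity comes out high; swapping in stemmed tokens or word embeddings (another topic above) changes only the vectors, not the similarity formula.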

  • Venkatraman J

    Venkatraman J - Hands on Data Science. Get hands dirty with real code!!!

    45 Mins
    Workshop
    Intermediate

    Data science refers to the science of extracting useful information from data. Knowledge discovery in databases, data mining, and information extraction are closely related to data science. Supervised, semi-supervised, and unsupervised learning methodologies have moved out of academia and penetrated deep into industry, leading to actionable insights, dashboard-driven development, data-driven reasoning, and so on. Data science has been the industry buzzword for the last few years, yet there are only a handful of data scientists around the world. The industry will need more and more data scientists in the future to solve problems using statistical techniques. The exponential availability of unstructured data from the web has thrown huge challenges at data scientists, who must exploit it before drawing conclusions.

    Now that's an overload of information and buzzwords. It all has to start somewhere. Where and how to start? How do you get your hands dirty rather than just reading books and blogs? Is it really science or just code? Let's get into code to talk data science.

    In this workshop I will show the tools required to do real data science, rather than just reading about it, by building real models using deep neural networks, with a live demo of the same. I will also share some of the key data science techniques every aspiring data scientist needs to thrive in the industry.

  • Saurabh Deshpande

    Saurabh Deshpande - Machine Learning DevOps and A/B testing using docker and python

    45 Mins
    Talk
    Beginner

    Training a machine learning / deep learning model is one thing; deploying it to production is a completely different beast. Not only do you have to deploy it to production, but you will also have to retrain the model every now and then and redeploy the updates. With many machine learning / deep learning projects and POCs running in parallel across multiple environments such as dev, test, and prod, managing the model life cycle from training to deployment can quickly become overwhelming. In this talk, I will discuss an approach to handling this complexity using Docker and Python. A rough outline of the talk:

    • Introduction to the topic
    • Problem statement
    • Quick introduction to Docker
    • Discussing the proposed architecture
    • Alternative architecture using AWS infrastructure
    • Demo
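The A/B-testing half of the title boils down to weighted routing between model versions; a sketch of that routing logic in Python (the model names and the 90/10 split here are hypothetical, and in the talk's setting each variant would be a separate Docker container behind this router):

```python
import random

def ab_route(weights, rng=random.random):
    """Pick a variant with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng() * total
    for variant, w in weights.items():
        r -= w
        if r <= 0:
            return variant
    return variant  # fallback for floating-point edge cases

# Hypothetical rollout: 90% of traffic to the current model, 10% to a candidate.
weights = {"model_v1": 0.9, "model_v2": 0.1}

random.seed(0)
counts = {v: 0 for v in weights}
for _ in range(10_000):
    counts[ab_route(weights)] += 1
print(counts)
```

Over 10,000 simulated requests, roughly 90% land on `model_v1`; adjusting the weights dictionary is all it takes to shift traffic during a gradual rollout.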