Deep Learning Research/Application on a Shoestring Budget

The biggest entry barrier to deep learning is the cost associated with deep learning infrastructure and projects. This is the main reason why many aspirants stop after completing online courses and never get to contribute to meaningful research.

This technical talk is aimed at deep learning aspirants who would like to contribute to research or succeed at applying deep learning in the field. The talk will break down the practical aspects of deep learning from the coding, infrastructure, and performance points of view.

Topics Covered:

1. Python Multi-Threading

2. Deep Learning CPU vs GPU: Build vs Buy

3. Using Kaggle Kernels

4. OpenAI Gym

 
Outline/Structure of the Talk

Topics Covered:

1. Python Multi-Threading - Using multi-threading to squeeze the most out of limited system resources for deep learning (a minimal sketch follows this outline)

2. Deep Learning CPU vs GPU: Build vs Buy - Technical aspects of building or buying a workstation for deep learning

3. Using Kaggle Kernels - Demonstration of running deep learning for free on Kaggle Kernels, along with the constraints involved

4. OpenAI Gym - Demonstration of reinforcement learning on OpenAI Gym (a minimal sketch follows this outline)
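The sketches below are illustrative only; the exact code demonstrated in the talk may differ, and all file names, directories, and frameworks used here are assumptions rather than part of the proposal.

A minimal sketch of the multi-threading idea, using Python's standard concurrent.futures module to preprocess input files in parallel while a model trains; the preprocess() function and the "data" directory are placeholders:

```python
# Illustrative only: parallel preprocessing with the standard library.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def preprocess(path):
    # Placeholder for real work (decode, resize, normalise, ...).
    # I/O-bound steps release the GIL, so threads give a genuine speedup.
    return path.stat().st_size

files = list(Path("data").glob("*.jpg"))  # hypothetical data directory

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(preprocess, files))

print(f"Preprocessed {len(results)} files")
```

A minimal sketch of device-agnostic code, assuming PyTorch (the talk does not prescribe a framework), so the same script runs unchanged on a CPU-only laptop or on a free Kaggle GPU kernel:

```python
# Illustrative only: the same code runs on CPU or GPU (PyTorch assumed).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

model = torch.nn.Linear(10, 2).to(device)   # toy model for illustration
batch = torch.randn(32, 10, device=device)  # toy input batch
output = model(batch)
print(output.shape)  # torch.Size([32, 2])
```

A minimal sketch of a random-agent episode on OpenAI Gym's CartPole environment, written against the classic gym API where env.step() returns four values (newer gymnasium releases return five):

```python
# Illustrative only: a random agent on CartPole (classic gym API assumed).
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
total_reward = 0.0

for _ in range(200):
    action = env.action_space.sample()          # random policy as a stand-in
    obs, reward, done, info = env.step(action)  # classic 4-tuple return
    total_reward += reward
    if done:
        break

env.close()
print(f"Episode reward: {total_reward}")
```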

Learning Outcome

1. How to contribute to research and applications in deep learning without funding

2. How deep learning can be democratized and made accessible to the masses

Target Audience

Machine Learning Enthusiasts/Researchers/Newbies

Prerequisites for Attendees

A basic understanding of machine learning
