Computational Machine Learning
You have been hearing about machine learning (ML) and artificial intelligence (AI) everywhere: computers recognizing images, generating speech and natural language, and beating humans at chess and Go.
The objectives of the workshop:
- Learn machine learning, deep learning, and AI concepts
- Provide hands-on training so that students can write AI applications
- Provide the ability to run real machine learning production examples
- Understand the programming techniques that underlie production software
The concepts will be taught in Julia, a modern language for numerical computing and machine learning, but they can be applied in any language the audience is familiar with.
The workshop is structured as "reverse classroom" laboratory exercises, which have proven to be engaging and effective learning devices. Knowledgeable facilitators will help students learn the material and extrapolate it to their own real-world situations.
Outline/Structure of the Workshop
- Representing Data with Models: using functions and parametric functions to build models
- Model Complexity: what is learning from a computational point of view? How does a computer learn?
- Exploring Data with Unsupervised Learning: dimensionality reduction for image classification
- Applications of Unsupervised Machine Learning
- Introduction to Supervised Machine Learning
- Practical Applications of Supervised Machine Learning (e.g., object detection)
- Introduction to Neurons: learning with a single neuron
- Introduction to Flux.jl: learning with a single neuron using Flux
- Introduction to Neural Networks: building a single-layer neural net with Flux
- Introduction to Deep Learning: multi-layer neural networks with Flux
- Handwriting recognition with neural networks
- Participants will walk away feeling comfortable with machine learning and the underlying algorithms.
- Participants will move beyond being consumers of ML library APIs: they will become comfortable building the underlying algorithms in Julia, and be able to contribute to ML packages, and to Julia itself!
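To give a flavor of the "build the underlying algorithms yourself" approach, here is a minimal sketch in plain Julia (no packages) of the single-neuron sessions: a neuron `y = w*x + b` fit by gradient descent on squared error. This is an illustrative example, not the workshop's actual exercise code, and the data and hyperparameters are made up.

```julia
# Illustrative sketch (not the workshop's actual code): fit a single
# neuron y = w*x + b to data by gradient descent on mean squared error.
function train_neuron(xs, ys; lr = 0.02, steps = 5000)
    w, b = 0.0, 0.0
    n = length(xs)
    for _ in 1:steps
        err = (w .* xs .+ b) .- ys        # per-sample prediction error
        w -= lr * 2 * sum(err .* xs) / n  # gradient of MSE w.r.t. w
        b -= lr * 2 * sum(err) / n        # gradient of MSE w.r.t. b
    end
    return w, b
end

xs = [1.0, 2.0, 3.0, 4.0]
ys = 2 .* xs .+ 1.0          # data generated by w = 2, b = 1
w, b = train_neuron(xs, ys)  # recovers w ≈ 2, b ≈ 1
```

The same fit can be expressed in a few lines of Flux once the manual version is understood, which is the progression the outline follows.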
Target Audience
Aspiring data scientists, and experienced data scientists who are eager to gain a better understanding of how ML algorithms are implemented.
Prerequisites for Attendees
We highly recommend that participants install Julia on their laptops, along with the following set of Julia packages. The plan is to use JuliaBox.com; however, if we run into internet connectivity issues, we may not be able to use the cloud platform comfortably.
Options for installing Julia:
List of packages
To add a package, run `Pkg.add("PackageName")` from the Julia REPL.
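For example (the package names below are illustrative; install whichever packages appear in the workshop's list):

```julia
# Run once per machine, from the Julia REPL.
# Package names here are examples, not the workshop's official list.
using Pkg
Pkg.add("Flux")    # neural networks and deep learning
Pkg.add("Plots")   # plotting
```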
- Willingness to engage with some mathematical concepts
- Commitment to strive to understand the concepts and to program the applications
- Active participation in the workshop, striving to solve the exercises with the help of the support staff
- Commitment to follow-up work or projects that apply the concepts in real life
Submitted 11 months ago