Busting the Hype and Deconstructing the Hyperparameters of Generative Models

Generative models have progressed substantially in the past few years, with proven successes in image synthesis, topic extraction, and fast information retrieval and filtering.

Traditional GANs, though popular, remain hard to train: slow convergence, diminished gradients, overfitting, and hypersensitivity to hyperparameter selection surface frequently.

This discussion proposes an alternate architecture: starting from a pretrained generator and combining two loss functions, a feature loss and a discriminator loss, to reduce the time needed to reach Nash equilibrium and thereby shorten GAN training. We will use an image dataset consisting of human faces to build this alternate architecture.
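
As a rough illustration of combining these two losses, here is a minimal PyTorch sketch. The frozen VGG16 feature extractor, the generator_loss helper, and the feat_weight parameter are assumptions made for illustration, not the exact setup from the session's notebooks.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

# Frozen pretrained network used as a feature extractor (an assumption here;
# any pretrained backbone could play this role).
vgg = vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(fake_imgs, real_imgs, disc_logits_fake, feat_weight=10.0):
    # Feature loss: match intermediate activations of generated vs. real images.
    feature_loss = mse(vgg(fake_imgs), vgg(real_imgs))
    # Discriminator loss: the generator wants its fakes scored as real.
    adv_loss = bce(disc_logits_fake, torch.ones_like(disc_logits_fake))
    return feat_weight * feature_loss + adv_loss
```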

The second algorithm focuses on topic modeling with Latent Dirichlet Allocation (LDA). We will explore the Dirichlet distribution in depth and use it to find the latent topics present in text. This helps answer queries like which cuisines are similar to Mexican cuisine, or which restaurant offers the best chicken rice in town.
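
For concreteness, below is a minimal LDA sketch using gensim; the toy documents, the topic count, and the priors are illustrative stand-ins for the actual Yelp corpus and settings used in the session.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy tokenized restaurant reviews (placeholders for the real corpus).
docs = [
    ["taco", "salsa", "burrito", "mexican"],
    ["noodle", "soy", "rice", "chicken"],
    ["burrito", "guacamole", "mexican", "salsa"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# alpha is the Dirichlet prior over document-topic distributions;
# "auto" lets gensim learn an asymmetric prior from the data.
lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               alpha="auto", passes=10)
print(lda.print_topics())
```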

Flow models, autoregressive models, and Generative Adversarial Networks (GANs) are popular generative models and an active area of research.

The presentation takes a conceptual yet hands-on approach to generative models, discussing their trade-offs, why we need an ensemble cast of generative models to solve different NLP and computer vision use cases, and why one single solution may not fit them all.

Outline/Structure of the Tutorial

The focus of this session will be on generative models. Text data is generally messy and unlabeled. Generative models like LDA are well proven and used at scale to provide structure to unlabeled data. Labeling unlabeled text is key for Named Entity Recognition (NER), Named Entity Linking, and Named Entity Disambiguation.

The second algorithm focuses on GANs and attempts to generate new styles from fashion data. The emphasis will be on the styles that can be generated, with a detailed explanation of the roles of the generator network and the discriminator network.

We will then discuss the trade-offs between the two algorithms and the potential use cases for each. Overall, the talk will be structured as follows.

Part 1: Clustering and Topic Modeling on the Yelp Dataset

  • Understanding Flow Models
  • Importance of Machine Learning Model Interpretation
  • Criteria for Model Interpretation Methods
  • Scope of Model Interpretation
  • Visualization using t-SNE (see the sketch after this list)
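
A minimal t-SNE sketch with scikit-learn, assuming document-topic vectors like those an LDA model produces; the random Dirichlet draws here are placeholders for real model output.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder document-topic vectors; in the session these would come from
# the fitted LDA model rather than random Dirichlet draws.
rng = np.random.default_rng(0)
doc_topic = rng.dirichlet(alpha=[0.5] * 10, size=500)  # 500 docs, 10 topics

# Project to 2-D for plotting; perplexity is a tunable neighborhood size.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(doc_topic)

plt.scatter(embedding[:, 0], embedding[:, 1], s=5)
plt.title("t-SNE projection of document-topic vectors")
plt.show()
```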

Part 2: Generating Neural Styles on Fashion Data

  • Understanding Generative Adversarial Networks (see the training-step sketch after this list)
  • Generating new styles
  • Result interpretation and visualization
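
For reference, a minimal PyTorch sketch of one GAN training step; the tiny fully connected generator and discriminator are illustrative placeholders, not the architecture used on the fashion data in the session.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for flattened 28x28 images
# (e.g. Fashion-MNIST); a real setup would use convolutional networks.
latent_dim = 100
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) tensor of images
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)

    # Discriminator update: score real images as 1 and generated images as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to get fakes scored as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```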

Part 3: Discussing the trade-offs and potential use cases.

Learning Outcome

Key takeaways from this talk/tutorial:

- Understand what Generative Models are

- Learn the latest and best techniques for building Flow models and GANs

- Learn how to leverage state-of-the-art model interpretation frameworks in Python

- Understand how to interpret models on both structured and unstructured data, along with visualization techniques

Target Audience

Data Scientists, Engineers, Managers, AI Enthusiasts

Prerequisites for Attendees

Participants are expected to know what AI, machine learning, and deep learning are, along with some basics of the data science lifecycle, including data, features, modeling, and evaluation. It is a hands-on session with two Jupyter notebooks using Python, so basic knowledge of Python will help.
