Schedule: Aug 8th, 11:00 - 11:45 AM | Location: Grand Ball Room 1

In recent years, there has been a great deal of research on sequence-to-sequence learning with neural network models. These models are widely used in applications such as language modeling, translation, part-of-speech tagging, and automatic speech recognition. In this talk, we will give an overview of sequence-to-sequence learning, starting with a description of recurrent neural networks (RNNs) for language modeling. We will then explain some of the drawbacks of RNNs, such as their inability to map between input and output sequences of different lengths, and describe how encoder-decoder networks and attention mechanisms solve these problems. We will close with some real-world examples, including how encoder-decoder networks are used at LinkedIn.
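As a taste of the first part of the talk, the sketch below shows what an RNN language model looks like in practice: at each step the network reads one token and predicts the next. This is a minimal illustration in PyTorch; the class name, layer sizes, and toy batch are our own assumptions, not code from the talk or from LinkedIn.

    import torch
    import torch.nn as nn

    class RNNLanguageModel(nn.Module):
        """Minimal RNN language model: predict the next token at every step."""
        def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
            self.proj = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):                # tokens: (batch, seq_len) ids
            hidden, _ = self.rnn(self.embed(tokens))
            return self.proj(hidden)              # per-step logits over the vocab

    # Teacher forcing: predict token t+1 from the tokens up to t.
    vocab_size = 1000
    model = RNNLanguageModel(vocab_size)
    tokens = torch.randint(0, vocab_size, (8, 20))    # toy batch of token ids
    logits = model(tokens[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
    loss.backward()

Note that this setup ties the output sequence to the input sequence step for step, which is exactly the length limitation the talk turns to next.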


Outline/Structure of the Talk

  • An overview of sequence-to-sequence learning
  • Introduction to RNNs for language modeling, translation, part-of-speech tagging, and automatic speech recognition
  • Drawbacks of RNNs, such as their inability to map between input and output sequences of different lengths
  • How encoder-decoder networks and attention mechanisms solve these problems (a sketch follows this list)
  • Real-world examples, including how encoder-decoder networks are used at LinkedIn
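To make the encoder-decoder and attention bullets concrete, here is a minimal sketch, again in PyTorch with invented names and sizes; we use dot-product attention for brevity, though the talk may cover other variants. The encoder compresses the source into a sequence of hidden states, and at each decoding step the decoder attends over all of them, which decouples the input length from the output length.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Seq2SeqWithAttention(nn.Module):
        """Toy encoder-decoder: GRU encoder, GRU decoder with dot-product attention."""
        def __init__(self, src_vocab, tgt_vocab, dim=128):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.GRU(dim, dim, batch_first=True)
            self.decoder = nn.GRU(dim * 2, dim, batch_first=True)  # token + context
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src, tgt):
            enc_out, h = self.encoder(self.src_embed(src))   # enc_out: (B, S, D)
            logits = []
            for t in range(tgt.size(1)):                     # one target step at a time
                query = h[-1].unsqueeze(1)                   # current decoder state (B, 1, D)
                scores = torch.bmm(query, enc_out.transpose(1, 2))  # (B, 1, S)
                weights = F.softmax(scores, dim=-1)          # attention over source steps
                context = torch.bmm(weights, enc_out)        # weighted source summary
                step_in = torch.cat([self.tgt_embed(tgt[:, t:t+1]), context], dim=-1)
                dec_out, h = self.decoder(step_in, h)
                logits.append(self.out(dec_out))
            return torch.cat(logits, dim=1)                  # (B, T, tgt_vocab)

    # Input and output lengths differ freely: 12 source tokens, 7 target tokens.
    model = Seq2SeqWithAttention(src_vocab=500, tgt_vocab=600)
    src = torch.randint(0, 500, (4, 12))
    tgt = torch.randint(0, 600, (4, 7))
    print(model(src, tgt).shape)   # torch.Size([4, 7, 600])

The attention step is the key design choice: instead of forcing the encoder to squeeze the whole source into one fixed vector, the decoder re-reads the relevant source positions at every step.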

Learning Outcome

  • Understanding of RNNs for Language Modeling
  • Drawbacks of RNNs and how to overcome them
  • How LinkedIn is using encoder-decoder networks

Target Audience

Data Scientists, Data Engineers, Data Specialists, Machine Learning Engineers, Data Science Enthusiasts
