Kubeflow Explained: NLP Architectures on Kubernetes
There's more to a Natural Language Processing (NLP) application than an ensemble of models. Much more! As with any traditional application, an entire ecosystem of supporting tools enables the core ML functionality. How do you choose the best ones? Which ones can you do without? And how do you keep the whole system comprehensible when its complexity starts to get out of hand?
In this session, you will learn how to deploy a complete NLP application on Kubernetes with Kubeflow. The guiding principles of a robust and resilient system are explained and used as the foundation for defining a concrete architecture. Techniques for modifying and maintaining that architecture over time are described. A project contributor also covers what Kubeflow currently supports and the long-term vision for the project.
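To give a flavor of what the session covers, here is a minimal sketch of how an NLP workflow might be expressed with the Kubeflow Pipelines Python SDK (kfp v1). The pipeline name, container images, file paths, and arguments are hypothetical placeholders for illustration, not the specific architecture presented in the talk.

```python
# nlp_pipeline.py -- illustrative sketch only; images and paths are placeholders.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="nlp-app-sketch",
    description="Preprocess text, train an NLP model, and deploy it for serving.",
)
def nlp_pipeline(corpus_path: str = "gs://example-bucket/corpus"):
    # Step 1: tokenize and clean the raw corpus.
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="example.registry/nlp/preprocess:latest",  # hypothetical image
        arguments=["--input", corpus_path, "--output", "/data/tokens.txt"],
        file_outputs={"tokens": "/data/tokens.txt"},
    )

    # Step 2: train a model on the preprocessed tokens.
    train = dsl.ContainerOp(
        name="train",
        image="example.registry/nlp/train:latest",  # hypothetical image
        arguments=["--tokens", preprocess.outputs["tokens"], "--model-dir", "/model"],
        file_outputs={"model_uri": "/model/uri.txt"},
    )

    # Step 3: roll the trained model out behind a serving endpoint.
    dsl.ContainerOp(
        name="deploy",
        image="example.registry/nlp/deploy:latest",  # hypothetical image
        arguments=["--model-uri", train.outputs["model_uri"]],
    )


if __name__ == "__main__":
    # Compile the pipeline into a workflow spec that a Kubeflow cluster can run.
    kfp.compiler.Compiler().compile(nlp_pipeline, "nlp_pipeline.yaml")
```

Compiling produces a workflow specification that can be uploaded through the Kubeflow Pipelines UI or submitted programmatically with kfp.Client(); the supporting pieces around a pipeline like this are what the session explores.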
Learning Outcomes
- Principles of a robust, resilient system
- How to deploy a complete NLP application using Kubeflow on Kubernetes
- How to modify and maintain the application over time
- What's next for Kubeflow
Target Audience
ML engineers, DevOps engineers, data scientists, architects
Prerequisites for Attendees
Experience creating and deploying NLP applications is useful, but not required. Familiarity with Kubernetes is helpful.