Getting Started with Smart Speakers and Voice Interfaces
Conversational voice interfaces have the potential to be the most significant new user interaction mechanism since the rise of mobile touch devices. Prominent examples of these interfaces in action are smart speakers like Google Home and Amazon Echo. In this talk we’ll explore and compare how you develop for each of these voice platforms.
Outline/Structure of the Session
We’ll start by building an application for a Google Home. Along the way, we’ll explore the main components - the device itself, speech-to-text and text-to-speech conversion, natural language processing (NLP), and supporting services - and how they fit together. We’ll be using Google’s API.AI service for the NLP, and will dive into how it also handles more advanced requirements like maintaining conversational context.
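To make the fulfillment step concrete, here is a minimal sketch of a webhook handler in the shape API.AI’s v1 webhook format uses: the request carries a matched action and its parameters, and the response returns the speech to say plus `contextOut` entries that keep conversational state alive for follow-up questions. The `weather.lookup` action, its `city` parameter, and the `last-city` context are invented for illustration, not part of any real agent.

```python
# Minimal sketch of an API.AI (v1) webhook fulfillment handler.
# Request/response field names follow API.AI's v1 webhook format;
# the "weather.lookup" action and "city" parameter are hypothetical.

def handle_webhook(request_body: dict) -> dict:
    result = request_body.get("result", {})
    action = result.get("action")
    params = result.get("parameters", {})

    if action == "weather.lookup":  # hypothetical action name
        city = params.get("city", "your city")
        speech = f"It's sunny in {city} today."
        # contextOut carries conversational state (here, the last city
        # asked about) so follow-ups like "what about tomorrow?" work.
        contexts = [{"name": "last-city", "lifespan": 5,
                     "parameters": {"city": city}}]
    else:
        speech = "Sorry, I didn't catch that."
        contexts = []

    return {"speech": speech, "displayText": speech, "contextOut": contexts}
```

In a deployed agent this function would sit behind an HTTPS endpoint that API.AI calls after the NLP step has matched an intent; the handler itself never sees raw audio.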
Next we’ll look at the Amazon Echo, rebuilding our example app for that device. We’ll then draw on this experience to compare and contrast each platform’s capabilities and development experience. We’ll also look into the feasibility of importing API.AI configurations from Google into the Amazon platform, and any other strategies that might reduce duplication between the platforms.
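For comparison, a sketch of the same app as an Alexa custom-skill handler. The request/response envelope follows the Alexa Skills Kit JSON interface, which gives a first feel for the duplication question: the same intent-matching logic must be rewritten against a different schema. The `WeatherIntent` name and `City` slot are hypothetical stand-ins for whatever the skill’s interaction model defines.

```python
# Minimal sketch of the equivalent Alexa custom-skill handler.
# The envelope follows the Alexa Skills Kit request/response JSON;
# the "WeatherIntent" name and "City" slot are hypothetical.

def handle_alexa(event: dict) -> dict:
    req = event.get("request", {})
    speech = "Sorry, I didn't catch that."

    if req.get("type") == "IntentRequest":
        intent = req.get("intent", {})
        if intent.get("name") == "WeatherIntent":  # hypothetical intent
            slots = intent.get("slots", {})
            city = slots.get("City", {}).get("value", "your city")
            speech = f"It's sunny in {city} today."

    return {
        "version": "1.0",
        # sessionAttributes plays roughly the role API.AI's contexts do
        "sessionAttributes": {},
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

Note the structural parallel: both platforms hand the handler an already-matched intent with extracted parameters/slots, but the field names, nesting, and state-carrying mechanism all differ, which is exactly why cross-platform reuse is worth examining.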
Finally, we'll sum up with some thoughts on the future of conversational voice interfaces and smart speakers, especially as more players (e.g. Apple) enter the market.
Attendees will leave this talk with an overview of the key components of a conversational voice interface system, an insight into the development experience for the two leading platforms, and a sense of how they might be able to get started building their own app for this new frontier of user interaction.
Target Audience
Any developers with an interest in this new channel for user interaction.