Make that first leap into voice interfaces and AI
An ever-growing ecosystem of connected devices demands new methods of interaction and UX. While mobile applications are the common way to interface with the IoT, as devices multiply, managing a separate app for each becomes infeasible. Rather than opening a different app for every intended action, voice interfaces are becoming more powerful and commonplace, with the Amazon Echo, Google Home and others bringing an expectation of voice commands to the IoT.
With emerging voice interfaces comes another expectation — intelligent responses from them! This is where early artificial intelligence and chatbot creation come in handy. While these areas sound complex to get started with, there are platforms and technologies available today that enable you to do a whole lot out of the box, which you can then build upon.
In this talk, PatCat will give you a crash course in voice interfaces and AI — looking at how you can get started with existing services and APIs, and how you can take all of this and apply it to your own idea or connected device.
Outline/Structure of the Session
- A quick overview of voice assistants (Amazon Echo, Google Assistant, Siri, Cortana, Bixby)
- An overview of APIs for voice assistants and AI (Api.ai, Wit.ai, Clarifai)
- Basic AI terms explained (e.g. "machine learning", "deep learning", "neural networks")
- Live demo of a custom voice assistant with IoT integration, how it was created and how attendees can make their own
By the end of the talk, attendees should have a good overall understanding of voice interfaces, common AI terminology and how they can use existing APIs/frameworks to put together their own voice assistant with some basic AI concepts and IoT integration.
No particular knowledge is required — attendees can start from scratch for most of this.