AI and ML models are moving to the edge for inferencing. While the public cloud remains the preferred environment for training models, the edge is where most of them are deployed for inference. Unlike the cloud and the data center, edge computing environments are highly constrained: they are often deployed in remote, harsh locations that make them difficult to manage. To optimize deep learning models for the edge, both hardware and software vendors have built a variety of platforms that accelerate inferencing. This session introduces the current state of AI at the edge and discusses the choices available from Amazon, Google, Intel, Microsoft, NVIDIA, and Qualcomm.
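To make the optimization step concrete: the vendor toolchains discussed in this session (for example, TensorFlow Lite and OpenVINO) commonly shrink models for constrained edge hardware through quantization, mapping float32 weights to int8. The sketch below is a minimal, self-contained illustration of symmetric post-training quantization, not the API of any specific vendor SDK; the function names are ours.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 plus a scale factor."""
    scale = np.abs(weights).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights for accuracy checks."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the rounding error
# per weight is bounded by half the quantization step (scale / 2).
print("max abs error:", float(np.abs(w - w_hat).max()))
```

This is the trade the session's "tradeoffs and recommendations" point refers to: a 4x reduction in model size and faster integer arithmetic on edge accelerators, in exchange for a small, bounded loss in numerical precision.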


Outline/Structure of the Demonstration

  • Why edge has become THE destination for AI
  • Building blocks of an edge computing platform
  • Hardware AI accelerators
  • Software for optimizing and deploying the models at the edge
  • Tradeoffs and recommendations

Learning Outcome

  • State of edge computing for AI
  • Best practices for optimizing and deploying models at the edge
  • Vendor strategies

Target Audience

AI Engineers, Data Scientists, Software Developers

Submitted 3 years ago