Accelerating AI Camera Development with Xilinx Vitis

Fueled by a trifecta of rapid advances in network training, big data, and ML research, so-called "Deep Learning" is fast becoming mainstream. Nowhere is this more true than in embedded vision, where the end game for Deep Learning is teaching machines to "see". The range of embedded vision applications is seemingly endless: machine vision cameras that ensure zero defects on the production line, pole-mounted "Smart City" cameras that monitor traffic and detect theft and disasters, and robots that deliver your online purchases right to your doorstep. However, CNN inference is computationally expensive, requiring billions of operations per inference, and many critical applications also demand extremely low latency at high frame rates. Given these constraints, and given the need for sub-10W power consumption, high reliability, security, and product longevity, how do we design an integrated camera that delivers the required level of ML inference performance?
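To put "billions of operations per inference" in perspective, here is a back-of-the-envelope sketch. The function and the layer dimensions are illustrative (loosely modeled on a mid-network ResNet-style convolution), not taken from any specific camera design:

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate (MAC) count for one conv layer,
    assuming stride 1 and 'same' padding."""
    return h * w * c_in * c_out * k * k

# Illustrative example: a 3x3 convolution on a 56x56 feature map,
# 64 input channels -> 64 output channels.
macs = conv_macs(56, 56, 64, 64, 3)
print(f"{macs / 1e6:.0f} M MACs for this single layer")
```

A single layer of this size already costs roughly 116 million MACs; a modern CNN stacks dozens of such layers, which is how a single inference reaches the billions-of-operations range, and why per-frame inference at video rates needs hardware acceleration.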

During this webinar we will explore this topic from a variety of perspectives:

  • What are the critical challenges for CNN inference?
  • What level of performance is required, and what level is feasible?
  • How do we leverage Xilinx MPSoC devices with Vitis to accelerate CNN inference?
  • What are the capabilities and advantages of the embedded Video Codec Unit (VCU) in the MPSoC device family?
  • What approaches are available for Image Sensor Pipeline integration?