Simplifying and accelerating AI model development workflows is hugely valuable, whether you have an army of data scientists or just a few developers.
From adapting a model to fit your use case to optimizing it for production deployment, the process is complex and iterative. In this session, we'll show how easy it is to train and optimize an object detection model with NVIDIA TAO, a low-code AI toolkit, and deploy it for inference using NVIDIA Triton Inference Server on Azure ML.
Speakers
Principal Cloud Advocate and Lead for Data in Developer Relations - Microsoft
Developer Relations Manager - NVIDIA
Senior Product Manager - NVIDIA