This talk will introduce one of the latest innovations in embedded vision, AI agents (also known as agentic AI), with real-world examples and best practices for realizing their potential as agentic workflows in next-generation embedded systems.
We’ll start with an overview of these agents—what they are, how they operate, and how they can benefit embedded applications. This will include an example application illustrating a basic agentic AI workflow within a typical embedded vision system.
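The basic workflow described above can be sketched as a perceive → reason → act loop. The following is a minimal, illustrative sketch; the model calls are stand-in stubs (in a real system they would invoke an on-device vision model and a lightweight decision policy or language model), and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def perceive(frame: dict) -> list[Detection]:
    # Stub for an on-device object detector.
    return [Detection("person", 0.91)] if frame.get("person") else []

def reason(detections: list[Detection]) -> str:
    # Stub for a lightweight policy / language-model step that
    # turns raw detections into a decision.
    if any(d.label == "person" and d.confidence > 0.8 for d in detections):
        return "alert"
    return "idle"

def act(decision: str) -> str:
    # Stub actuator: raise an alert or do nothing.
    return {"alert": "operator notified", "idle": "no action"}[decision]

def agent_step(frame: dict) -> str:
    # One iteration of the agent loop for a single camera frame.
    return act(reason(perceive(frame)))
```

In an embedded deployment, this loop would run per frame (or per event) under the system's latency budget, with the reasoning step kept as small as the application allows.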
Next, we’ll showcase the full potential of embedded agentic AI, with multiple agents working together through chain-of-thought reasoning. We’ll explain how agentic frameworks can orchestrate multiple vision and language models to take the next step beyond the analysis that GenAI provides. In particular, we’ll show how AI agents can automate the resulting actions, either by alerting humans with specific, real-time, and immediately actionable information, or by acting independently.
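One way to picture this orchestration is a pipeline of agents that pass shared state along while each appends its intermediate reasoning to a trace. This is a hypothetical sketch, not tied to any specific agentic framework; the agent names and stub behaviors are illustrative.

```python
class Agent:
    """Wraps a model/tool call and records its step in a shared trace."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, state: dict, trace: list) -> dict:
        state = self.fn(state)
        trace.append(f"{self.name}: {state}")  # chain-of-thought-style record
        return state

def vision_agent(state):
    # Stub vision model: counts detected objects in the scene.
    state["count"] = len(state["objects"])
    return state

def language_agent(state):
    # Stub language model: interprets the count and proposes an action.
    state["action"] = "dispatch" if state["count"] > 2 else "monitor"
    return state

def action_agent(state):
    # Stub actuator: alerts a human or acts autonomously.
    state["result"] = f"{state['action']} executed"
    return state

def orchestrate(objects: list):
    state, trace = {"objects": objects}, []
    for agent in (Agent("vision", vision_agent),
                  Agent("language", language_agent),
                  Agent("action", action_agent)):
        state = agent.run(state, trace)
    return state["result"], trace
```

The trace serves a dual purpose: it gives downstream agents context for their decisions, and it gives humans the specific, actionable record referred to above.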
Finally, we’ll discuss how to achieve an agentic AI workflow within the latency and power requirements of embedded vision systems, including best practices for agent routing/deployment, such as blended language model support, dynamic model orchestration, memory management strategies, and real-time priority management. This will be illustrated through examples applicable to a broad range of industry verticals (e.g., industrial automation, robotics, and smart cities).
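As a concrete illustration of dynamic model routing under latency and power constraints, the sketch below picks the most capable model that still fits the remaining budgets and degrades gracefully to a compact on-device model otherwise. The model names, latencies, and power figures are invented for illustration only.

```python
# Candidate models, smallest to largest: (name, est. latency in ms,
# est. power in mW). Figures are illustrative placeholders.
MODELS = [
    ("tiny-vlm", 15, 300),
    ("mid-vlm", 60, 900),
    ("large-vlm", 250, 2500),
]

def route(latency_budget_ms: float, power_budget_mw: float) -> str:
    # Prefer the most capable model that fits both budgets.
    for name, lat, pwr in reversed(MODELS):
        if lat <= latency_budget_ms and pwr <= power_budget_mw:
            return name
    # Nothing fits: degrade gracefully to the smallest model.
    return MODELS[0][0]
```

In practice, the budgets would be refreshed per frame or per task from the system's real-time scheduler and power monitor, which is where the priority-management practices mentioned above come in.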