by Datadog
Observability Theater
Best Practices for Monitoring AI and LLM Technologies Embedded Across Your Tech Stack
Date & Location
August 03 | 4:30 PM PDT | Observability Theater
The rapid advancement of AI and large language models (LLMs) has begun to revolutionize the way people work across industries, from extracting valuable insights from data to automating complex tasks. As organizations continue to implement generative AI models to create operational efficiencies, they will inevitably need to make changes across their tech stacks. For example, new types of databases, such as vector databases, are already being used to generate predictions from unstructured data.

As organizations' tech stacks change to accommodate AI-powered capabilities, Datadog is evolving its integration offerings to cater to the AI stack as well. We are excited to announce a growing collection of integrations for AI technologies that consolidates the monitoring of LLM technologies into a single pane of glass. Specifically, these integrations give end users visibility into the performance, resource consumption, and reliability of their applications via out-of-the-box dashboards and monitors. With these comprehensive insights, organizations can more easily identify and resolve incidents, improve model performance, and strengthen the reliability and trustworthiness of their AI systems.

Datadog's newest AI integrations focus primarily on the deploy phase, as this is where application developers and engineers run LLMs in production. It is also the point at which observability is critical to providing high-quality service to customers, and where LLMs are integrated into the organization's larger technology infrastructure.

In this session, you'll learn how our AI integrations provide complete monitoring across the critical layers of the AI tech stack: the model layer, ML orchestration and libraries, data storage and management, and hardware.
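To make the "performance, resource consumption, and reliability" signals concrete, here is a minimal, purely illustrative Python sketch of the kinds of per-request LLM metrics a monitoring integration might aggregate for a dashboard. All names here (`LLMCallRecord`, `summarize`) are hypothetical and are not Datadog APIs; a real integration would collect these automatically.

```python
# Illustrative only: the kinds of LLM serving metrics a dashboard might chart.
# LLMCallRecord and summarize are hypothetical names, not Datadog APIs.
from dataclasses import dataclass
from statistics import mean, quantiles


@dataclass
class LLMCallRecord:
    model: str
    latency_s: float        # end-to-end inference latency in seconds
    prompt_tokens: int
    completion_tokens: int
    error: bool = False     # True if the call failed


def summarize(records: list[LLMCallRecord]) -> dict:
    """Aggregate request records into dashboard-style summary metrics."""
    latencies = [r.latency_s for r in records]
    ok = [r for r in records if not r.error]
    return {
        "requests": len(records),
        "error_rate": 1 - len(ok) / len(records),
        # 50th percentile latency (quantiles needs >= 2 data points)
        "p50_latency_s": quantiles(latencies, n=100)[49],
        # throughput: total tokens processed per second, averaged over
        # successful calls
        "avg_tokens_per_s": mean(
            (r.prompt_tokens + r.completion_tokens) / r.latency_s for r in ok
        ),
    }
```

In practice these aggregates would be computed by the integration and emitted as metrics, so they appear alongside the rest of the stack in the same dashboards and monitors described above.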