by Datadog
Confidently Scale Generative AI Applications in Production with Datadog LLM Observability
Date & Location
June 26 | 4:00 PM EDT | Observability Theater
The implementation and productionization of large language models (LLMs) present several challenges. LLM chains are complex and involve a series of steps, such as retrieving specific information from a database, which complicates troubleshooting the causes of latency, errors, and inaccurate or hallucinated responses. Evaluating the quality and accuracy of an LLM application's responses is also difficult. Finally, LLM applications are susceptible to security threats, such as prompt hacking and sensitive-data exposure, which compromise user privacy and damage your organization's reputation.

In this session, you will discover how Datadog LLM Observability enables you to confidently monitor LLM applications in production with end-to-end tracing, swift issue resolution, and robust privacy and security. You will learn how LLM Observability provides visibility into each step of the LLM chain so you can easily identify errors and monitor operational metrics such as latency and token usage. You will also see how to evaluate the quality of your LLM application's responses, such as topic relevance or toxicity, and use out-of-the-box quality and safety checks to get insights that mitigate security and privacy risks.
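To make the per-step visibility concrete, here is a minimal, generic sketch of timing each step of an LLM chain and collecting token usage. The step names, functions, and token counts are hypothetical illustrations; this is plain Python, not the Datadog LLM Observability SDK.

```python
import time

def traced_step(name, fn, metrics, *args, **kwargs):
    """Run one chain step and record its latency.
    (Generic illustration, not the Datadog SDK.)"""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    metrics.append({"step": name, "latency_s": time.perf_counter() - start})
    return result

# Hypothetical chain steps standing in for retrieval and generation.
def retrieve(query):
    # e.g., documents fetched from a database
    return ["doc-1", "doc-2"]

def generate(query, docs):
    answer = f"Answer to {query!r} using {len(docs)} docs"
    # Hypothetical token usage returned by an LLM provider
    usage = {"prompt_tokens": 42, "completion_tokens": 17}
    return answer, usage

metrics = []
query = "What causes checkout latency?"
docs = traced_step("retrieval", retrieve, metrics, query)
answer, usage = traced_step("generation", generate, metrics, query, docs)

total_tokens = usage["prompt_tokens"] + usage["completion_tokens"]
print(total_tokens)                     # 59
print([m["step"] for m in metrics])     # ['retrieval', 'generation']
```

In a real deployment, an observability tool attaches this kind of per-step timing and usage data to a trace automatically, so you can spot which step in the chain is slow or erroring.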