by Datadog
Platform Talks
Mastering Generative AI: Improve Your Model Performance with Datadog LLM Monitoring
Date & Location
August 03 | 1:30 PM PDT | Platform Talks Theater in Partner Expo
The recent advancements in generative AI have unlocked the potential of large language models (LLMs), making them more powerful and accessible than ever before. However, these unique software components add a new layer of complexity to the applications that use them, and monitoring their performance presents new types of challenges on top of traditional observability ones. Because an LLM's behavior changes over time with use, effectively monitoring its performance is a considerable challenge. Investigating the source of model deterioration and evaluating its effects can be difficult due to the lack of labeled datasets and of a clear correlation between user feedback and model versions. Left unchecked, this can lead to a degraded user experience and further decline in the model's performance.

In this session, you will discover how the new Datadog LLM monitoring offering can help you confidently address the challenges you might encounter in LLM-based applications deployed in production. You will learn how LLM monitoring correlates request prompts, responses, and model versions to provide complete visibility into the performance of your LLMs. Finally, you will understand how to elevate your end users' experience by tracking model performance, rapidly detecting and resolving deviations, and identifying opportunities for fine-tuning.
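To make the correlation idea concrete, here is a minimal, hypothetical sketch of the kind of telemetry such monitoring relies on: each LLM request is recorded together with its prompt, response, model version, and latency, so performance can later be compared across model versions. The `LLMRecord` and `LLMMonitor` names are illustrative assumptions, not Datadog's actual API; a real setup would ship these records to an observability backend rather than keep them in memory.

```python
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class LLMRecord:
    """One monitored LLM request: prompt, response, and model version
    are stored together so they can be correlated later. (Hypothetical
    record type for illustration only.)"""
    prompt: str
    response: str
    model_version: str
    latency_ms: float
    timestamp: float = field(default_factory=time.time)


class LLMMonitor:
    """Toy in-memory collector; real monitoring would forward records
    to a backend instead of appending to a list."""

    def __init__(self) -> None:
        self.records: List[LLMRecord] = []

    def track(self, prompt: str, response: str,
              model_version: str, latency_ms: float) -> None:
        self.records.append(
            LLMRecord(prompt, response, model_version, latency_ms))

    def avg_latency_ms(self, model_version: str) -> float:
        """Aggregate by model version to spot deterioration after an
        update, e.g. a latency regression between v1 and v2."""
        samples = [r.latency_ms for r in self.records
                   if r.model_version == model_version]
        return sum(samples) / len(samples) if samples else 0.0


# Usage: the same prompt served by two model versions.
monitor = LLMMonitor()
monitor.track("Summarize this doc", "Here is a summary...", "v1", 120.0)
monitor.track("Summarize this doc", "Summary: ...", "v2", 310.0)
print(monitor.avg_latency_ms("v1"))  # 120.0
print(monitor.avg_latency_ms("v2"))  # 310.0
```

Because every record carries the model version alongside the prompt and response, a regression surfaced by user feedback can be traced back to the specific version that introduced it, which is the visibility the session describes.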