by Datadog

Observability Theater

Datadog for Data Platforms: Optimize Spark Pipelines and Improve Data Quality

Date & Location

June 26 | 3:00 PM EDT | Observability Theater

As data becomes increasingly central to business value and competitive advantage, identifying data issues, whether missing, delayed, or incorrect, is critical. Unfortunately, teams often discover these issues too late, resulting in damaged customer trust and complicated troubleshooting across evolving data stacks and multiple teams. At the same time, data processing workloads are rapidly growing in cost, and it is difficult to tell whether they are optimized or where cost-saving opportunities lie.

In this session we'll introduce Data Jobs Monitoring (DJM) and show how it enables engineers to observe, troubleshoot, and cost-optimize their Spark and Databricks jobs across data pipelines. We'll also explore how Datadog helps teams monitor their data pipelines across technologies, allowing you to identify an issue, isolate the root cause, and understand the impact of Spark job failures as well as stale Snowflake data. Join us to learn how DJM and other Datadog offerings can help teams improve job performance and data quality across their data pipelines.