Datadog Log Receiver for OpenTelemetry Collector

Observability
Jul 30, 2025

We’re excited to introduce the datadoglogreceiver, a new component in the OpenTelemetry Collector that bridges Datadog’s log collection with OpenTelemetry’s vendor-neutral pipelines. This receiver lets observability engineers use the Datadog Agent’s robust log collection features in tandem with the OpenTelemetry Collector’s flexible processing and exporting.

In this post, we’ll explain how to set up log collection with the Datadog Agent and OTel Collector together, show how the new receiver fits into a log ingestion architecture, and walk through configuration step by step (with YAML examples). We’ll also cover how to orchestrate and monitor this log pipeline, along with strategies for handling data loss in distributed environments. This contribution expands the OpenTelemetry ecosystem and enables consistent, vendor-agnostic observability pipelines for your logs.

Why this matters and what you can do with it

Until now, if you were collecting logs with the Datadog Agent, you were locked into Datadog’s backend. The data pipeline was a one-way street: Agent to Datadog, full stop. But the datadoglogreceiver breaks that barrier. Suddenly, your logs are portable. You can route them into any OpenTelemetry-compatible backend, whether it’s another vendor, your own hosted stack, or a multi-destination setup that keeps Datadog in the mix but also sends a copy elsewhere. For companies looking to break out of vendor lock-in, this is your exit ramp. You keep using the same trusted Agent for collection while regaining control over where that data goes and how it’s handled.

Using Datadog Agent and OpenTelemetry Collector for Log Collection

Datadog Agent for log capture: The Datadog Agent is a battle-tested host agent that collects logs from a variety of sources (files, containers, network sockets, etc.) and applies processing such as multiline aggregation, filtering, and scrubbing. To tail an application’s log file:

# Example Datadog Agent integration config: <APP_NAME>.d/conf.yaml
logs:
  - type: file
    path: "/var/log/myapp/app.log"
    service: "myapp"
    source: "myapp"

OpenTelemetry Collector for log ingestion: The OpenTelemetry Collector is a vendor-neutral hub that can receive, process, and export telemetry data. With the new datadoglogreceiver, the Collector can now ingest logs directly from the Datadog Agent.

Integrating the two: Configure the Datadog Agent to forward logs to the Collector using logs_config.logs_dd_url. The Collector listens via datadoglogreceiver on the specified TCP port.

How the datadoglogreceiver Fits into the Log Ingestion Architecture

Architecture overview: The Datadog Agent collects logs and forwards them locally to the OTel Collector. The Collector ingests, processes, and exports logs to one or more destinations. This lets you combine Datadog’s powerful log collection with OpenTelemetry’s vendor-agnostic processing and routing.
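
At a glance, the flow looks like this:

Datadog Agent (tails files, containers, sockets)
        |  TCP (logs_dd_url)
        v
OTel Collector: datadoglogreceiver -> processors -> exporters
        |
        v
One or more backends (Datadog, other OTLP-compatible destinations)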

Vendor-agnostic flexibility: Because the Collector sits in the middle, you can build multi-destination pipelines (e.g., Datadog plus an open-source backend) while keeping the enrichment the Agent applies at collection time.
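
For example, a fan-out pipeline might look like the sketch below. The otlphttp endpoint is a placeholder; point it at whatever OTLP-compatible backend you run.

# Sketch: fan out logs to Datadog and an OTLP-compatible backend
receivers:
  datadoglog:
    endpoint: "0.0.0.0:8121"

exporters:
  datadog:
    api:
      key: "${DD_API_KEY}"
  otlphttp:
    endpoint: "http://logs-backend.example.internal:4318"  # placeholder address

service:
  pipelines:
    logs:
      receivers: [datadoglog]
      exporters: [datadog, otlphttp]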

Step-by-Step Configuration (with YAML Examples)

1. Configure the Datadog Agent:

# datadog.yaml
logs_enabled: true
logs_config:
  logs_dd_url: "localhost:8121"
  logs_no_ssl: true

Define log sources in conf.d/<app>.d/conf.yaml as shown above.
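
Note that the receiver listens on a plain TCP port. Depending on your Agent version, you may also need to pin the Agent’s log transport to TCP; the snippet below is a sketch, so verify the exact option names against your Agent’s documentation.

# datadog.yaml (sketch; option availability varies by Agent version)
logs_config:
  logs_dd_url: "localhost:8121"
  logs_no_ssl: true
  force_use_tcp: true  # keep the Agent on the TCP transport the receiver expects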

2. Configure the OTel Collector:

receivers:
  datadoglog:
    endpoint: "0.0.0.0:8121"

processors:
  memory_limiter:
    check_interval: 1s  # memory_limiter requires check_interval and a limit; an empty config will not start
    limit_mib: 512
  batch: {}

exporters:
  datadog:
    api:
      key: "${DD_API_KEY}"
      site: "datadoghq.com"

service:
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]  # memory_limiter should run first in the pipeline
      exporters: [datadog]
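
Before wiring this into production, one way to confirm logs are flowing end to end is to temporarily add the Collector’s debug exporter to the pipeline (a minimal sketch; remove it once verified):

# Sketch: temporary debug exporter for verification
exporters:
  debug:
    verbosity: detailed  # prints each received log record to the Collector's stdout

service:
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]
      exporters: [datadog, debug]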

Orchestrating and Monitoring the Log Collection Pipeline

  • Run Agent and Collector on the same host when possible.
  • Use startup health checks or supervision to manage startup order.
  • Monitor Agent status via datadog-agent status.
  • Monitor the Collector using its internal metrics (queue size, dropped logs); a telemetry sketch follows this list.
  • Export metrics for pipeline health and alert on failures.
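
A minimal sketch for exposing the Collector’s internal metrics follows; the exact syntax varies by Collector version (newer releases configure telemetry through readers), so check the docs for your release. Metrics such as otelcol_exporter_queue_size and otelcol_exporter_send_failed_log_records are good alerting candidates.

# Sketch: expose internal Collector metrics (syntax varies by version)
service:
  telemetry:
    metrics:
      level: detailed
      address: "0.0.0.0:8888"  # Prometheus-style scrape endpoint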

Strategies for Handling Data Loss in Distributed Environments

  • Agent buffering: The Agent retries sending logs but has finite buffers.
  • Collector queues and retries: Use sending_queue and retry_on_failure settings.
  • Persistent queues: Enable file-based queues with the file_storage extension.
  • Backpressure: Tune batch sizes, add queues, or introduce a buffer such as Kafka if necessary.
  • Dual shipping: Configure the Agent with multiple endpoints so logs reach both the Collector and Datadog directly (see the sketch after this list).
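
On the Agent side, dual shipping is configured with additional endpoints. The sketch below assumes the standard Datadog intake host; field names and capitalization vary by Agent version, so verify against your Agent’s documentation.

# datadog.yaml (sketch; schema varies by Agent version)
logs_config:
  logs_dd_url: "localhost:8121"  # primary destination: the local Collector
  logs_no_ssl: true
  additional_endpoints:
    - api_key: "${DD_API_KEY}"
      Host: "agent-http-intake.logs.datadoghq.com"
      Port: 443
      is_reliable: true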

Example persistent queue configuration:

extensions:
  file_storage:
    directory: /var/lib/otelcol/queue

exporters:
  datadog:
    sending_queue:
      storage: file_storage
      queue_size: 5000
    retry_on_failure:
      max_elapsed_time: 10m

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]
      exporters: [datadog]

Bridging the gap

The datadoglogreceiver bridges the gap between Datadog’s Agent and OpenTelemetry’s vendor-neutral processing. It enables flexible, reliable, and observable log pipelines with the ability to route logs anywhere. With proper configuration, retry mechanisms, and monitoring, this integration is a significant step toward more open and robust observability practices.

We’re excited to see how observability engineers use this new component to streamline their log pipelines and build resilient telemetry systems.
