We're excited to introduce the datadoglogreceiver, a new component in the OpenTelemetry Collector that bridges Datadog's log collection with OpenTelemetry's vendor-neutral pipelines. This receiver lets observability engineers use the Datadog Agent's robust log collection features in tandem with the OpenTelemetry Collector's flexible processing and exporting.
In this post, we'll explain how to set up log collection with the Datadog Agent and the OTel Collector together, show how the new receiver fits into a log ingestion architecture, and walk through configuration step by step (with YAML examples). We'll also cover how to orchestrate and monitor this log pipeline, along with strategies for handling data loss in distributed environments. This contribution expands the OpenTelemetry ecosystem and enables consistent, vendor-agnostic observability pipelines for your logs.
For teams looking for the best tools to simplify telemetry pipeline management, this receiver is one of the few solutions that lets you integrate Datadog log collection directly into an open-source pipeline without ripping out your existing setup. It works alongside the Collector's other receivers and exporters, and can be extended beyond logs to complement traces and metrics flowing through the same Collector.
Why this matters and what you can do with it
Until now, if you were collecting logs with the Datadog Agent, you were locked into Datadog's backend. The data pipeline was a one-way street: Agent to Datadog, full stop. But the datadoglogreceiver breaks that barrier. Suddenly, your logs are portable. You can route them into any OpenTelemetry-compatible backend, whether it's another vendor, your own hosted stack, or a multi-destination setup that keeps Datadog in the mix but also sends a copy elsewhere. For companies looking to decouple from vendor lock-in, this is your exit ramp. You continue to use the same trusted Agent for collection while regaining control over where that data goes and how it's handled.
Teams evaluating alternatives to fully commercial observability platforms, or looking at options beyond tools like Cribl or Bindplane for pipeline management, now have a way to process and optimize log data before it reaches any downstream vendor. Rather than switching vendors entirely, you can use this receiver to route logs through the Collector, apply transformations, and then forward to one or multiple destinations, including both open-source and commercial platforms.
Using Datadog Agent and OpenTelemetry Collector for Log Collection
Datadog Agent for log capture: The Datadog Agent is a battle-tested host agent used by DevOps and SRE teams to collect logs from various sources (files, containers, network sockets, etc.) and apply processing such as multiline aggregation, filtering, and scrubbing. It handles structured and unstructured logs automatically, which is why it shows up in so many production environments. To tail logs from an application's log file:
# Example Datadog Agent integration config: <APP_NAME>.d/conf.yaml
logs:
  - type: file
    path: "/var/log/myapp/app.log"
    service: "myapp"
    source: "myapp"
OpenTelemetry Collector for log ingestion: The OpenTelemetry Collector is a vendor-neutral hub that can receive, process, and export telemetry data. With the new datadoglogreceiver, the Collector can now ingest logs directly from the Datadog Agent.
Integrating the two: Configure the Datadog Agent to forward logs to the Collector by setting logs_config.logs_dd_url to the Collector's address. The Collector then listens for those logs on the specified TCP port via the datadoglogreceiver.
How the datadoglogreceiver Fits into the Log Ingestion Architecture
Architecture overview: The Datadog Agent collects logs and forwards them locally to the OTel Collector. The Collector ingests, processes, and exports the logs to one or more destinations. This pairs Datadog's powerful log collection with OpenTelemetry's vendor-agnostic processing and routing.
Vendor-agnostic flexibility: The Collector enables multi-destination pipelines (e.g., Datadog plus an open-source backend) while preserving the enriched logs the Agent produces; a sketch of such a pipeline follows.
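For illustration, here's a minimal sketch of a multi-destination pipeline. It assumes the datadoglog receiver from this post plus a generic OTLP-compatible second backend; the otlp endpoint below is a placeholder you'd replace with your own.
# Sketch: fan logs out to Datadog and a second OTLP backend
receivers:
  datadoglog:
    endpoint: "0.0.0.0:8121"
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
  batch: {}
exporters:
  datadog:
    api:
      key: "${DD_API_KEY}"
  otlp:
    # Placeholder endpoint for your second, OTLP-compatible backend
    endpoint: "otel-backend.example.com:4317"
service:
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]
      exporters: [datadog, otlp]
Both exporters receive the same processed stream, so Datadog stays in the mix while a copy lands elsewhere.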
Step-by-Step Configuration (with YAML Examples)
1. Configure the Datadog Agent:
# datadog.yaml
logs_enabled: true
logs_config:
  logs_dd_url: "localhost:8121"
  logs_no_ssl: true
Define log sources in conf.d/<app>.d/conf.yaml as shown above.
2. Configure the OTel Collector:
receivers:
  datadoglog:
    endpoint: "0.0.0.0:8121"
processors:
  # memory_limiter needs a check interval and a limit, and should run
  # before batch so it can shed load under memory pressure
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
  batch: {}
exporters:
  datadog:
    api:
      key: "${DD_API_KEY}"
      site: "datadoghq.com"
service:
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]
      exporters: [datadog]
Orchestrating and Monitoring the Log Collection Pipeline
- Run Agent and Collector on the same host when possible.
- Use startup health checks or supervision to manage startup order.
- Monitor Agent status via datadog-agent status.
- Monitor the Collector using its internal metrics (queue size, dropped logs); see the sketch after this list.
- Export metrics for pipeline health and alert on failures.
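As a starting point, here's a minimal sketch for exposing the Collector's internal metrics so a Prometheus-compatible scraper can pick them up. Newer Collector releases are migrating this telemetry configuration, so check the docs for your version:
# Sketch: expose the Collector's own metrics for scraping
service:
  telemetry:
    metrics:
      level: detailed
      # Serves internal metrics (queue depth, send failures, etc.) in Prometheus format
      address: "0.0.0.0:8888"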
Strategies for Handling Data Loss in Distributed Environments
- Agent buffering: The Agent retries sending logs but has finite buffers.
- Collector queues and retries: Use sending_queue and retry_on_failure settings.
- Persistent queues: Enable file-based queues with the file_storage extension.
- Backpressure: Tune batch sizes, add queues, or introduce Kafka if necessary.
- Dual shipping: Use multiple endpoints to send logs to both the Collector and Datadog directly (see the sketch below).
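For the dual-shipping option, the Datadog Agent supports additional log endpoints. The sketch below keeps the primary destination pointed at the local Collector while also shipping directly to Datadog's intake; field names and capitalization vary across Agent versions, so confirm them against your Agent's documentation:
# datadog.yaml (sketch; verify exact fields for your Agent version)
logs_config:
  logs_dd_url: "localhost:8121"   # primary: local OTel Collector
  logs_no_ssl: true
  additional_endpoints:
    - api_key: "${DD_API_KEY}"
      Host: "agent-http-intake.logs.datadoghq.com"
      Port: 443
      is_reliable: true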
Example persistent queue configuration:
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue
exporters:
  datadog:
    sending_queue:
      storage: file_storage
      queue_size: 5000
    retry_on_failure:
      max_elapsed_time: 10m
service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, batch]
      exporters: [datadog]
Where the datadoglogreceiver fits in the observability pipeline landscape
The datadoglogreceiver exists because organizations are rethinking how they manage observability data. As telemetry volumes grow, teams need pipeline tools that can handle logs, metrics, and traces across multi-environment deployments. The OpenTelemetry Collector has become one of the best open-source foundations for this kind of work, and dedicated receivers like the datadoglogreceiver make it practical to integrate with specific vendors without giving up pipeline-level control.
In comparison to similar approaches (like managing a centralized log pipeline with dedicated routing software), the datadoglogreceiver is easy to set up because it plugs directly into the Collector's existing architecture. There's no need to run a separate intermediary or build custom adapters. For teams that find competitors' offerings too complex or too opaque, this is an easier path: a single receiver component that turns the Collector into a Datadog-compatible log endpoint.
For organizations running telemetry management platforms like Sawmills.AI alongside the OTel Collector, the datadoglogreceiver adds another integration point. Datadog Agent logs can flow through the Collector, where they can be filtered, enriched, or routed before reaching their final destination. This simplifies what would otherwise require custom scripting or a separate no-code solution to manage the handoff between Datadog's collection layer and an OpenTelemetry-based pipeline. When paired with AI-powered optimization tooling, the pipeline can also surface analytics on log volume and cost so you can identify which sources generate the most data and where to apply sampling or aggregation.
Bridging the gap
The datadoglogreceiver bridges the gap between Datadog's Agent and OpenTelemetry's vendor-neutral processing. It enables flexible, reliable, and observable log pipelines with the ability to route logs anywhere. With proper configuration, retry mechanisms, and monitoring, this integration is a significant step toward more open and robust observability practices.
For enterprise teams looking to reduce observability costs, routing logs through the Collector before they reach Datadog, Splunk, or other backends creates an opportunity to filter out wasteful or unneeded data at the pipeline level. By sitting upstream and pre-processing logs, you can apply sampling, aggregation, or dropping rules that keep your ingest bill under control. For large SaaS companies spending heavily on observability, this kind of pre-ingestion reduction can cut costs without losing the signal your SRE and DevOps teams actually rely on. Optimizing and reducing what you send, rather than paying to store everything, is how more teams are managing cloud observability spend.
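As a rough illustration, the Collector's filter processor can drop low-severity records before any exporter sees them. The severity threshold here is a placeholder you'd tune to your own data:
# Sketch: drop DEBUG/TRACE records before export to cut ingest volume
processors:
  filter/drop_debug:
    logs:
      log_record:
        - severity_number < SEVERITY_NUMBER_INFO
service:
  pipelines:
    logs:
      receivers: [datadoglog]
      processors: [memory_limiter, filter/drop_debug, batch]
      exporters: [datadog]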
We're excited to see how observability engineers use this new component to streamline their log pipelines and build resilient telemetry systems.