OpenTelemetry 101: What is an Exporter?


by Austin Parker

10-23-2019
OpenTelemetry is an open-source observability framework for generating, capturing, and collecting telemetry data for cloud-native software. Prior posts in this series have covered the definition of observability, as it applies to OpenTelemetry, and a dive into the tracing and metrics APIs. There’s a third critical component, though, that you’ll want to understand before you get started using OpenTelemetry, and that’s how to actually get data out of it! In this post, we’ll talk about the OpenTelemetry exporter model and the OpenTelemetry Collector, along with several basic deployment strategies.

Note: The information in this post is subject to change as the specification for OpenTelemetry continues to mature.

The trace and metric data that your service or its dependencies emit are of limited use unless you can actually collect that data somewhere for analysis and alerting. The OpenTelemetry component responsible for batching and transporting telemetry data to a backend system is known as an exporter.
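To make that concrete, here's a minimal sketch of what wiring an exporter into the SDK looks like, using the opentelemetry-python packages as they exist today (the exact API may shift as the spec matures). The ConsoleSpanExporter stands in for a real backend exporter, and the service and span names are made up:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# The BatchSpanProcessor batches finished spans in memory, then hands each
# batch to the exporter, which transports it to a backend system.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-service")  # hypothetical service name
with tracer.start_as_current_span("do-work"):
    pass  # spans recorded here are batched and exported on completion
```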

To understand how OpenTelemetry's exporter model works, it helps to know a little about how instrumentation is typically integrated into service code. Generally, instrumentation happens at three different points: in the service itself, in its library dependencies, and in its platform dependencies. Integrating at the service level is fairly straightforward: you declare a dependency on the appropriate OpenTelemetry package and deploy it with your code. Library dependencies are similar, except that libraries will generally declare a dependency only on the OpenTelemetry API. Platform dependencies are a more unusual case. By 'platform dependency', I mean the software you run to provide services to your service, things like Envoy and Istio. These deploy their own copy of OpenTelemetry, independent of your actions, but also emit trace context that your service will find useful.
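In Python, for example, the split between the first two cases looks roughly like the sketch below: the service depends on the SDK and configures it once at startup, while a library depends only on the API package. The function and tracer names here are hypothetical:

```python
# Service code: depends on the opentelemetry-sdk package and wires up the
# telemetry pipeline once, at startup.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Library code: depends only on the opentelemetry-api package, and records
# spans against whatever provider the host service installed above. If the
# service never configures an SDK, these calls are cheap no-ops.
def library_function():
    tracer = trace.get_tracer("my-library")  # hypothetical library name
    with tracer.start_as_current_span("library-operation"):
        pass  # the library's actual work happens here
```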

The exporter interface is implemented by the OpenTelemetry SDKs and uses a simple plug-in model: an exporter translates telemetry data into the particular format a backend system requires and transmits it to that backend. Exporters can be composed and chained together, allowing common functionality (like tagging data before export, or providing a queue to ensure consistent performance) to be shared across multiple protocols.
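In the Python SDK, for instance, that plug-in model amounts to a small interface: an exporter receives batches of finished spans and reports whether transmission succeeded. The PrintExporter below is a hypothetical toy, not a real package, but it has the same shape as the real exporters:

```python
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult

class PrintExporter(SpanExporter):
    """Toy exporter: 'translates' each span into a line on stdout."""

    def export(self, spans):
        # Called with a batch of finished spans; translate them into the
        # backend's format, transmit them, and report the outcome.
        for span in spans:
            print(f"{span.name} trace_id={span.context.trace_id:032x}")
        return SpanExportResult.SUCCESS

    def shutdown(self):
        # Release any transport resources (connections, threads) here.
        pass
```

You'd register it exactly like a built-in exporter, e.g. `provider.add_span_processor(BatchSpanProcessor(PrintExporter()))`, which is what makes the plug-ins interchangeable.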

To put this in more concrete terms, let's compare OpenTelemetry to OpenTracing. In OpenTracing, if you wanted to switch which system you were reporting data to, you would need to replace the entire tracer component, for example by swapping out the Jaeger client library for the Lightstep client library. In OpenTelemetry, you simply change the exporter component, or even just add a new one and export to multiple backend systems simultaneously. This makes it much easier to test new analysis tools or send your telemetry data to multiple analysis tools in different environments.
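With the current Python SDK, exporting to two backends at once is just two add_span_processor calls. This sketch assumes the opentelemetry-exporter-otlp package is installed, and the ingest endpoint is a placeholder:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Send spans to an OTLP-compatible backend...
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="ingest.example.com:4317"))
)
# ...and, at the same time, to the console for local debugging. Each
# processor/exporter pair runs independently of the others.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```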

While the exporter model is very convenient, there are times when you can't actually redeploy a service in order to add a new exporter. In some organizations, there's a disconnect between the people writing the instrumented code and the people running the observability platform, which can slow the rollout of these kinds of changes. In addition, some teams may prefer to abstract the entire exporter model out of their code and into a separate service. This is where the OpenTelemetry Collector comes in.

The Collector is a separate process designed to be a 'sink' for telemetry data emitted by many processes, which it can then export to backend systems. The Collector has two deployment strategies: running as an agent alongside a service, or as a standalone remote application. In general, using both is recommended: the agent is deployed with your service, running as a separate process or in a sidecar, while the collector is deployed separately, as its own application running in a container or virtual machine. Each agent forwards telemetry data to the collector, which can then export it to a variety of backend systems such as Lightstep, Jaeger, or Prometheus.
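From the service's side, this whole topology collapses down to pointing a single exporter at the local agent. The sketch below assumes the OTLP gRPC exporter and the Collector's default port (4317); which backends ultimately receive the data is decided by the Collector's configuration, not by the service's code:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The service only knows about the agent running next to it; swapping or
# adding backends later is a Collector config change, not a redeploy.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
```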

Regardless of how you choose to instrument or deploy OpenTelemetry, exporters provide powerful options for reporting telemetry data. You can export directly from your service, proxy through an agent, or aggregate into standalone collectors, or even use a mix of these! Ultimately, what matters is that your teams get their telemetry data into an observability platform that can help you analyze and understand the behavior of your system.
