Lightstep from ServiceNow Logo
Auto-Instrumentation Is Magic: Using OpenTelemetry Python with Lightstep

In my last OpenTelemetry blog post, I talked about how to send OpenTelemetry (OTel) data to Lightstep using Golang. That’s all well and good if you’re a Golang developer, but what if you use Python? Well, my friend, you’re in luck, because today, I’ll be looking at how to send OpenTelemetry data to Lightstep using Python.

As with the OTel Golang post, we can send OTel data to Lightstep (or any other Observability tool that supports the OpenTelemetry Protocol (OTLP), for that matter) in one of three ways:

  1. Direct from application

  2. OpenTelemetry Collector

  3. Launchers

In this post, I will dig into each of these three approaches in detail, with code snippets that show how to get data into Lightstep Observability. Let’s do this!

OpenTelemetry & Lightstep

Lightstep Observability supports the native OpenTelemetry Protocol (OTLP). It can receive data in the OTLP format via either HTTP or gRPC. You will need to specify which method you wish to use in your code, as we’ll see in the upcoming code snippets.

If you're curious about using gRPC vs. HTTP for OpenTelemetry, check out these docs.

Note: Other Observability tools that support OTLP include Honeycomb, Grafana, and Jaeger.

Automatic Instrumentation & Python

One thing that’s super cool about using OTel to instrument your Python code is that Python offers automatic (auto) instrumentation. What does this mean? At a high level, it means that you can run a Python OpenTelemetry binary (called opentelemetry-instrument) that wraps around your Python application, to automagically instrument it. 🪄

More specifically, auto-instrumentation uses shims or bytecode instrumentation agents to intercept your code at runtime or at compile-time to add tracing and metrics instrumentation to the libraries and frameworks you depend on. The beauty of auto-instrumentation is that it requires a minimum amount of effort. Sit back, relax, and enjoy the show. A number of popular Python libraries are auto-instrumented, including Flask and Django. You can find the full list here.
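Under the hood, most of these auto-instrumentation libraries work by monkey-patching: they replace a framework’s entry points with wrappers that record telemetry around the original calls. Here’s a toy sketch of the idea (purely illustrative, not the actual OTel implementation — the real libraries are far more sophisticated):

```python
import functools
import time

def instrument(func):
    """Toy 'auto-instrumentation': wrap a function so every call is recorded."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            # In real auto-instrumentation, this would be an exported span.
            wrapper.calls.append({
                "name": func.__name__,
                "duration_s": time.time() - start,
            })
    wrapper.calls = []  # stands in for a span exporter
    return wrapper

@instrument
def handle_request(path):
    return f"handled {path}"

print(handle_request("/rolldice"))       # handled /rolldice
print(handle_request.calls[0]["name"])   # handle_request
```

The key point: your `handle_request` function didn’t change at all — the wrapper observed it from the outside, which is exactly why auto-instrumentation requires so little effort from you.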

Manual instrumentation requires adding spans, context propagation, attributes, etc. to your code. It’s akin to commenting your code or writing tests.

Does this mean that you shouldn’t manually instrument? Not at all! Start with auto-instrumentation if it’s available. If the auto-instrumentation isn’t sufficient for your use case (most often it’s not), then add in the manual instrumentation. For example, auto-instrumentation doesn’t know your business logic—it only knows about frameworks and languages—in which case you’ll want to manually instrument your business logic, so that you get that visibility.


Before we start our tutorial, here are some things that you’ll need:

If you’d like to run the full code examples, you’ll also need:

Direct from Application

If you’re getting started with instrumenting your application with OpenTelemetry, this is probably the most common route for beginners. As the name suggests, we send data to a given Observability back-end directly from our application code.


Our sample application is a Flask application. We will be leveraging both automatic and manual instrumentation.

Let’s look at this in greater detail below.

1- Set up your environment

Let’s set up our working directory and our Python virtual environment:

mkdir otel_python
cd otel_python

python3 -m venv .
source ./bin/activate

Create a new file for our Flask app (we’ll call it app.py), and paste the following:

from flask import Flask, request
from opentelemetry import trace
from random import randint

tracer = trace.get_tracer_provider().get_tracer(__name__)

app = Flask(__name__)

@app.route("/rolldice")
def roll_dice():
    return str(do_roll())

def do_roll():
    res = randint(1, 6)
    current_span = trace.get_current_span()
    current_span.set_attribute("roll.value", res)
    current_span.set_attribute("operation.name", "Saying hello!")
    current_span.set_attribute("operation.other-stuff", [1, 2, 3])
    return res

if __name__ == "__main__":
    app.run(port=8082, debug=True, use_reloader=False)

2- Install the required OTel libraries

These are the libraries that are required to send data to an Observability back-end (e.g. Lightstep).

# OTel-specific
pip install opentelemetry-distro
pip install opentelemetry-exporter-otlp

# App-specific
pip install flask
pip install requests

A few noteworthy items:

  • Installing opentelemetry-distro will install a number of other dependent packages for instrumenting code, including opentelemetry-api and opentelemetry-sdk, and our auto-instrumentation wrapper binary, opentelemetry-instrument.

  • The opentelemetry-exporter-otlp package is used to send OTel data to your Observability back-end (e.g. Lightstep). Installing it in turn installs opentelemetry-exporter-otlp-proto-grpc (send data via gRPC) and opentelemetry-exporter-otlp-proto-http (send data via HTTP).

3- Install auto-instrumentation

As you may recall from earlier in this post, Python auto-instrumentation includes a binary that wraps our Python application and automagically adds some high-level instrumentation for us. But that's only part of the picture. There are Python auto-instrumentation libraries available for a number of popular Python libraries (e.g. Flask, requests). Using these auto-instrumentation libraries, along with opentelemetry-instrument, gives us auto-instrumentation superpowers. 💪

So how do we install these auto-instrumentation libraries? Well, there's a handy little tool for that, called opentelemetry-bootstrap. It was installed as part of our installation of opentelemetry-distro.

Let's run it:

opentelemetry-bootstrap -a install

So what does this do? The above command will read through the packages installed in your active site-packages folder, and will install the applicable auto-instrumentation libraries. For example, if you already installed the flask and requests packages (as we did in Step 2), running opentelemetry-bootstrap -a install will install opentelemetry-instrumentation-flask and opentelemetry-instrumentation-requests for you. If you leave out -a install, it will simply list out the recommended auto-instrumentation packages to be installed.
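Conceptually, opentelemetry-bootstrap does something like the following: check which libraries are installed, and map them to their instrumentation packages. The mapping below is a tiny hand-written excerpt for illustration, not the tool’s actual table:

```python
import importlib.util

# Illustrative excerpt of the library -> instrumentation-package mapping.
DEFAULT_INSTRUMENTATIONS = {
    "flask": "opentelemetry-instrumentation-flask",
    "requests": "opentelemetry-instrumentation-requests",
    "django": "opentelemetry-instrumentation-django",
}

def detect_installed(candidates):
    """Check which candidate libraries are importable in this environment."""
    return {lib for lib in candidates if importlib.util.find_spec(lib) is not None}

def recommend_instrumentations(installed):
    """Return the auto-instrumentation packages matching installed libraries."""
    return [pkg for lib, pkg in DEFAULT_INSTRUMENTATIONS.items() if lib in installed]

# With flask and requests installed (as in Step 2), bootstrap would suggest:
print(recommend_instrumentations({"flask", "requests"}))
# ['opentelemetry-instrumentation-flask', 'opentelemetry-instrumentation-requests']
```

The `-a install` flag is then just the difference between printing this list and pip-installing it.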

For more information on opentelemetry-bootstrap, check out the official OpenTelemetry docs.

4- Run the app

Here’s where it gets interesting! Normally, we’d run this app like so:

python app.py
But if we did that, we wouldn’t be sending any OTel data to Lightstep. So we must instead do this:


export OTEL_EXPORTER_OTLP_TRACES_HEADERS="lightstep-access-token=<LS_ACCESS_TOKEN>"

opentelemetry-instrument \
    --traces_exporter console,otlp_proto_grpc \
    --metrics_exporter console,otlp_proto_grpc \
    --service_name test-py-auto-otlp-grpc-server \
    --exporter_otlp_endpoint "ingest.lightstep.com:443" \
    python app.py

Some noteworthy items:

  • Replace <LS_ACCESS_TOKEN> with your own Lightstep Access Token.

  • traces_exporter and metrics_exporter specify which trace exporter and which metrics exporter to use, respectively. In this case, traces and metrics are being exported both to the console (stdout) and via otlp_proto_grpc. The otlp_proto_grpc option tells opentelemetry-instrument to send them to an endpoint that accepts OTLP via gRPC. The full list of available options for traces_exporter can be found here.

  • service_name sets the name of the service. This is the value that will show up in the Lightstep service explorer. Feel free to replace it with your own service name.

  • exporter_otlp_endpoint tells opentelemetry-instrument which gRPC endpoint (i.e. Lightstep's ingest endpoint) to send the traces to.

Sample output:

Screen capture of Python server app sample output

Want to use HTTP instead of gRPC? First, you need to make sure that the pip package opentelemetry-exporter-otlp-proto-http is installed (should be automagically installed as part of installing opentelemetry-exporter-otlp).

Next, your opentelemetry-instrument command would look like this:

export OTEL_EXPORTER_OTLP_TRACES_HEADERS="lightstep-access-token=<LS_ACCESS_TOKEN>"

opentelemetry-instrument \
  --traces_exporter console,otlp_proto_http \
  --metrics_exporter console \
  --service_name test-py-auto-otlp-server \
  --exporter_otlp_traces_endpoint "https://ingest.lightstep.com/traces/otlp/v0.9" \
  python app.py

Some noteworthy items:

  • The traces_exporter uses otlp_proto_http instead of otlp_proto_grpc.

  • The exporter_otlp_traces_endpoint is https://ingest.lightstep.com/traces/otlp/v0.9 (see the docs), instead of ingest.lightstep.com:443.

  • There is currently no metrics support for otlp_proto_http and there is no exporter_otlp_metrics_endpoint option, which is why metrics are being sent to console only.

5- Call the /rolldice service

Open up a new terminal window, and run the following:

curl http://localhost:8082/rolldice

Running the above line will return a random number between 1 and 6. Nothing too remarkable there. But if you look over at the terminal window running the server, you’ll notice something in the output:

Screen capture of Python server output after client call

We see the trace from our server app! Why are we seeing this here? Because we set the --traces_exporter flag to console,otlp_proto_grpc, which exports both to Lightstep via OTLP and to the console.

6- See it in Lightstep

Screen capture of trace in Lightstep - OTLP direct

OpenTelemetry Collector

The next approach to sending data to an Observability back-end is by way of the OpenTelemetry Collector. For non-development setups, this is the recommended approach to send OpenTelemetry data to your Observability back-end.


Sending OTel data via the OTel Collector is almost identical to what we did in the Direct from Application example above. The only difference is that:

  • We need to run an OTel Collector

  • When we run opentelemetry-instrument, our options are slightly different

Let’s look at this in greater detail below.

1- Follow Steps 1-3 from the “Direct from Application” example

2- Run the Collector

First, we need to configure our Collector for sending data to Lightstep. We do this by grabbing collector.yaml from Lightstep’s opentelemetry-examples repo.

git clone https://github.com/lightstep/opentelemetry-examples.git

Open up a new terminal window. First, you'll need to edit the collector.yaml file. Be sure to replace ${LIGHTSTEP_ACCESS_TOKEN} with your own Lightstep Access Token.
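For reference, a minimal collector.yaml for this setup looks roughly like the following sketch (based on the shape of Lightstep's example config; check the repo's file for the authoritative version):

```yaml
receivers:
  otlp:
    protocols:
      grpc:   # listens on 0.0.0.0:4317
      http:   # listens on 0.0.0.0:4318

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [otlp]
```

The receivers are what your app talks to, and the otlp exporter is where the Collector forwards everything — which is why the app itself no longer needs a Lightstep endpoint or access token.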

Now you can start up the Collector:

cd opentelemetry-examples/collector/vanilla
docker run -it --rm -p 4317:4317 -p 4318:4318 \
    -v $(pwd)/collector.yaml:/otel-config.yaml \
    --name otelcol otel/opentelemetry-collector-contrib:0.53.0 \
    "/otelcol-contrib" \
    "--config" "/otel-config.yaml"

Sample output:

Screen capture of OTel Collector startup sequence

3- Run the app

opentelemetry-instrument \
   --traces_exporter console,otlp \
   --metrics_exporter console,otlp \
   --service_name test-py-auto-collector-server \
   python app.py
Notice that the endpoint isn't specified. That's because it assumes that you are using the default Collector gRPC endpoint, http://localhost:4317. The above command is the equivalent of saying:

opentelemetry-instrument \
  --traces_exporter console,otlp \
  --metrics_exporter console,otlp \
  --service_name test-py-auto-collector-server \
  --exporter_otlp_endpoint "http://localhost:4317" \
  --exporter_otlp_insecure true \
  python app.py

If you specify the endpoint, you must also specify --exporter_otlp_insecure true if a certificate isn't configured with your Collector.

Some additional noteworthy items:

  • otlp, used in configuring traces_exporter and metrics_exporter, is equivalent to using otlp_proto_grpc

  • To use a different Collector endpoint, simply replace it with your own. If you don't have a certificate configured with your Collector, remember to add --exporter_otlp_insecure true

  • You don't need to set OTEL_EXPORTER_OTLP_TRACES_HEADERS, because that's already configured in the Collector's collector.yaml file.

If you wish to use HTTP instead of gRPC, the command would then look like this:

opentelemetry-instrument \
  --traces_exporter console,otlp_proto_http \
  --metrics_exporter console,otlp_proto_http \
  --service_name test-py-auto-collector-server \
  python app.py

Which is the same as saying:

opentelemetry-instrument \
  --traces_exporter console,otlp_proto_http \
  --metrics_exporter console,otlp_proto_http \
  --service_name test-py-auto-collector-server \
  --exporter_otlp_endpoint "http://localhost:4318" \
  --exporter_otlp_insecure true \
  python app.py

Again, if you wish to use your own Collector endpoint, simply replace the value in exporter_otlp_endpoint, making sure that you prefix it with http:// or https://. Remember to add --exporter_otlp_insecure true if you don't have a certificate configured with your Collector.

Okay. Enough banter. Let's look at the sample output:

Screen capture of Python server app sample output

4- Call the /rolldice service

Open up a new terminal window, and run the following:

curl http://localhost:8082/rolldice

Sample output:

Screen capture of Python server output after client call

Again, we see the trace in the console because we set the --traces_exporter flag to console,otlp, which exports both to the Collector via OTLP and to the console.

5- See it in Lightstep

Screen capture of trace in Lightstep - Collector


Launcher

If you thought it was easy-peasy to send OTel data to Lightstep à la auto-instrumentation binary, then it’s even easier to do it via the OTel Python Launcher! Think of it as an OTel wrapper that makes it extra easy to send data to Lightstep by pre-configuring a bunch of things for you, lowering that barrier to entry.

Sending OTel data via the Launcher is almost identical to what we did in the Direct from Application example above, with a few minor differences:

  • We have fewer packages to install (yay!)

  • When we run opentelemetry-instrument, our options are slightly different

Let’s see it in action, shall we?

1- Follow Steps 1-3 from the “Direct from Application” example

Minor change: replace the libraries from Step 2 with these:

# OTel-specific
pip install opentelemetry-launcher
pip install protobuf==3.20.1

# App-specific
pip install requests
pip install flask

We need to pin a specific version of protobuf because of Launcher compatibility issues with newer versions. This has already been fixed in opentelemetry-python.

When we install the opentelemetry-launcher package, it also does double duty by installing the auto-instrumentation libraries for us, so we don’t need to run opentelemetry-bootstrap -a install.

2- Run the app

Be sure to replace <LS_ACCESS_TOKEN> with your own Lightstep Access Token.


export LS_ACCESS_TOKEN="<LS_ACCESS_TOKEN>"

opentelemetry-instrument \
    --service_name test-py-auto-launcher-server \
    python app.py

Looks like we have fewer options, don’t we? Let's dig in a bit to some noteworthy items:

  • We don’t need to specify an --exporter_otlp_traces_endpoint, because that’s already implicitly done for us, and set to ingest.lightstep.com:443.

  • Instead of setting a messy-looking environment var for our Lightstep Access Token (export OTEL_EXPORTER_OTLP_TRACES_HEADERS="lightstep-access-token=<LS_ACCESS_TOKEN>"), we just have to do this: export LS_ACCESS_TOKEN="<LS_ACCESS_TOKEN>", which looks way cleaner.

If you wish to send your OTel data via a Collector instance first, rather than direct from your application, you would do this instead:

opentelemetry-instrument \
    --service_name test-py-auto-launcher-server \
    --exporter_otlp_traces_endpoint "localhost:4317" \
    --exporter_otlp_traces_insecure true \
    python app.py

Noteworthy items:

  • Do not set LS_ACCESS_TOKEN, since that's already configured in the Collector's collector.yaml file.

  • If you attempt to override exporter_otlp_endpoint to send traces to a Collector, the traces will be sent directly to Lightstep instead of via the Collector. Instead, you need to override exporter_otlp_traces_endpoint.

  • exporter_otlp_traces_endpoint sends traces to a Collector running on localhost:4317 (gRPC). If you wish to use a different Collector address, simply set exporter_otlp_traces_endpoint to your own Collector's endpoint.

  • exporter_otlp_traces_insecure is set to true. This is required if you are using a Collector and if a certificate isn't configured in the Collector.

  • There is currently no HTTP support for Python Launchers.

Sample output:

Screen capture of Python server app sample output

3- Call the /rolldice service

Open up a new terminal window, and run the following:

curl http://localhost:8082/rolldice

Sample output:

Screen capture of Python server app sample output for Launcher

Notice that since our opentelemetry-instrument call didn't specify a --traces_exporter, it's the equivalent of saying --traces_exporter otlp_proto_grpc. It also means that there's no trace output to the console (stdout).

4- See it in Lightstep

Screen capture of trace in Lightstep - Launcher

Should I always use the auto-instrumentation binary?

Is opentelemetry-instrument still helpful even if the libraries you’re using aren’t auto-instrumented? Personally, I think so! Consider this client program (saved as, say, client.py):

from sys import argv

from requests import get

from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer_provider().get_tracer(__name__)

assert len(argv) == 2

with tracer.start_as_current_span("client"):

    with tracer.start_as_current_span("client-server"):
        headers = {}
        inject(headers)
        requested = get(
            "http://localhost:8082/rolldice",
            headers=headers,
            params={"param": argv[1]},
        )

        assert requested.status_code == 200

Let’s run the above program with the auto-instrumentation binary. Be sure to replace <LS_ACCESS_TOKEN> with your own Lightstep Access Token.


export OTEL_EXPORTER_OTLP_TRACES_HEADERS="lightstep-access-token=<LS_ACCESS_TOKEN>"

opentelemetry-instrument \
    --traces_exporter console,otlp \
    --service_name test-py-auto-client \
    --exporter_otlp_endpoint "ingest.lightstep.com:443" \
    python client.py test

Notice that aside from creating spans in the client code, there’s no OTel configuration in there. You don’t configure the service name, the exporter, or the endpoint. That’s all taken care of when you run opentelemetry-instrument. Plus, if your code happens to use a library that is auto-instrumented, you don’t have to do anything else.

Note: If you’re wondering why the command ends with test, it’s because the client script takes a single parameter, which in this case is called test.

gRPC Debugging

Do you ever wonder if your gRPC calls are going into a black hole? I definitely do! When I was mucking around with gRPC for the Golang OTel libraries, I learned about some gRPC debug flags that would make my life easier for troubleshooting gRPC connectivity issues. Which of course got me wondering if there was a Python equivalent. Turns out there is. Set these environment variables before running your app, and you’re golden:

export GRPC_VERBOSITY=debug
export GRPC_TRACE=http,call_error,connectivity_state

This means that when we start up our server app, we get something like this:

Screen capture of Python server app sample output with gRPC debug

And then when we call our endpoint via curl, we get this:

Screen capture of Python server app sample output with gRPC debug showing successful gRPC call

The part highlighted above tells me that our gRPC call was successful!

Final Thoughts

Auto-instrumentation in Python is pretty freaking awesome, and it really lowers the barrier to entry for OpenTelemetry. As we saw with the Direct from Application and Collector examples, the code stays pretty much the same. The only difference is that you need to change up some flags so that the auto-instrumentation binary knows where to send your traces to. Nice and easy!

In case you’re wondering, there is a totally pure OTel Python manual instrumentation approach, which I will cover in a future blog post, so stay tuned! For now, bask in the fact that you learned something super cool today about OTel Python auto-instrumentation!

And now, I will reward you with a picture of my rat Phoebe getting some cuddles.

Phoebe the rat gets cuddles

Peace, love, and code. 🦄 🌈 💫

Got questions about OTel instrumentation with Python? Talk to me! Feel free to connect through e-mail, or hit me up on Twitter or LinkedIn. Hope to hear from y’all!

August 26, 2022
12 min read


About the author

Adriana Villela

