
Python Auto-instrumentation with OpenTelemetry

We've all been there. You want to get started with distributed tracing, but don't have the time to revisit the codebase of dozens, if not hundreds, of services in your system. Don't worry, OpenTelemetry's got you covered. Thanks to a great community effort from members of many organizations, the OpenTelemetry project has been able to quickly ramp up its ability to auto-instrument code in a variety of languages for many widely used third-party libraries.

Why auto-instrument OpenTelemetry Python?

For many organizations' engineering teams, the biggest barrier to using distributed tracing, and therefore making operators' lives at least 10,000 times better, is the time it takes to instrument a system. Auto-instrumentation helps alleviate this burden by hooking directly into existing code. With auto-instrumentation, engineers can

  • enable instrumentation with few, if any, changes to their code

  • gain visibility into what libraries are doing without having to understand all the details first

  • expect consistent instrumentation across applications leveraging those libraries

Instrumentation is the first step in developing observability in your applications, and gives developers the superpower to ask meaningful questions about their code.

How do I use Python auto-instrumentation?

I could go on and on about how auto-instrumentation makes life better and easier, but what better way to demonstrate it than by trying it out? The following example will walk through instrumenting a Python application with OpenTelemetry. If you already have an application that uses any of the supported libraries, feel free to skip this step and go straight to the section on configuring OpenTelemetry.

Otherwise, we'll create a small application that receives web requests and makes a request upstream.

Requirements

You'll need Python 3 and pip installed.

OpenTelemetry Python Example

First, we'll install the Python packages that our application will use:

pip3 install flask requests

Save the code below in a new file, server.py. You can find all the code for this example in the lightstep/opentelemetry-examples repo.

NOTE: This is potentially the worst proxy server ever

# server.py
from collections import defaultdict

import requests

from flask import Flask, request

app = Flask(__name__)
# Tracks how many times each URL has been requested.
CACHE = defaultdict(int)

@app.route("/")
def fetch():
  # Fetch the URL passed as a query parameter and return its content.
  url = request.args.get("url")
  CACHE[url] += 1
  resp = requests.get(url)
  return resp.content

@app.route("/cache")
def cache():
  # Return the set of URLs requested so far.
  keys = CACHE.keys()
  return "{}".format(keys)

if __name__ == "__main__":
  app.run()

Configuring OpenTelemetry

To allow our application to externalize its telemetry, we need to configure OpenTelemetry to use an exporter, a span processor and a tracer provider. To simplify the process, we'll use Launcher, the Lightstep Distro for Python, which allows us to configure everything we need via environment variables:

NOTE: You'll need a Lightstep access token, which can be obtained from your free-forever Lightstep account
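For reference, here's a minimal sketch of what a distro like the Launcher wires up for you under the hood, using the OpenTelemetry SDK directly (ConsoleSpanExporter stands in for a real backend exporter):

```python
# Sketch of the pieces the Launcher configures for you via env vars:
# a tracer provider, a span processor, and an exporter.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
# ConsoleSpanExporter prints spans to stdout; a real setup would export
# to a backend instead.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```

The Launcher does this wiring (plus exporter and resource configuration) for you, which is why the environment variables below are all we need.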

Let's install the Launcher and auto-instrumentation packages via pip.

pip3 install opentelemetry-launcher opentelemetry-instrumentation-flask opentelemetry-instrumentation-requests

We're now ready to run our application! Open a terminal and run the opentelemetry-instrument executable with our application as an argument:

export LS_SERVICE_NAME="auto-instrument-example"
export LS_SERVICE_VERSION="0.7.0"
export LS_ACCESS_TOKEN="<your token here>"
opentelemetry-instrument python3 ./server.py

In another terminal, we'll make a few requests to our app. One of these requests goes through a service that introduces a delay in the response.

curl -s "localhost:5000/?url=https://en.wikipedia.org/wiki/Mars" > /dev/null
curl -s "localhost:5000/?url=http://slowwly.robertomurray.co.uk/delay/3000/url/http://www.google.com" > /dev/null

Looking at the traces

Here comes the exciting part. Let's go and search for traces in our Lightstep dashboard, available at https://app.lightstep.com. Right away, we can see traces for all the requests that we made to the app and we wrote ZERO instrumentation code.

Don’t have a Lightstep account yet? Sign up here for a free trial.

[Screenshot: auto-instrumented traces in Lightstep]

A valuable piece of information available through auto-instrumentation is the set of tags attached to the spans for each library. In the case below, they make identifying the root cause of a slow response in the application trivial.

[Screenshot: identifying the root cause of a slow response via span tags in Lightstep]

Limits of auto-instrumentation

Auto-instrumentation is tightly coupled with the libraries it instruments. If an application is reliant on libraries that are not yet supported by auto-instrumentation, no additional insights will be gained. Thankfully, there are a lot of people in the OpenTelemetry project working to increase the number of supported libraries every day. Take some time to read through the registry to see supported languages and frameworks. Got a framework you're using, but don't see it in the registry? Create an issue in the project and let's collaborate on it! Follow the project at OpenTelemetry.io to stay up-to-date on the latest news.

What's next? Start tracing! Auto-instrumentation may never be quite as thorough as manual instrumentation, but it's a great starting point in your observability journey.


May 11, 2020
4 min read
OpenTelemetry


About the author

Alex Boten
