We all know that in a modern distributed software system, your application is only as fast as its slowest dependency. With the proliferation of microservices and containerization, any given application request can potentially traverse multiple services to complete its work. Understanding the performance of a particular request as it crosses service boundaries is becoming more and more important, because it’s useless to optimize one part of the stack when the bottleneck could be somewhere further downstream.
Lightstep offers a unique approach to understanding your operation dependencies and system performance using our Change Intelligence workflow and Operation diagrams. This allows even the most inexperienced engineers to quickly understand their system dependencies and performance bottlenecks.
How do Operation diagrams work?
Lightstep captures and analyzes 100% of your distributed tracing requests. In Lightstep, the requests that are important to your service (such as API endpoints and other ingress points) are marked as Key Operations. For root cause analysis purposes, each key operation is tracked for a 10-day period, during which tens of thousands of traces are stored. These traces are intelligently chosen to represent the entirety of your latency distribution, including outliers and errors. When a performance regression occurs, we have the historical data stored within our system to perform a simple and powerful root cause analysis.
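The idea of "intelligently chosen" traces can be pictured as biased sampling across the latency distribution. Below is a minimal, hypothetical sketch (this is not Lightstep’s actual algorithm; the function, bucketing rule, and data are invented for illustration) that always retains errored traces and otherwise keeps a few traces per latency band, so outliers survive alongside typical requests:

```python
import random
from collections import defaultdict

def sample_traces(traces, per_bucket=3, seed=7):
    """Keep every errored trace, plus up to `per_bucket` traces per
    latency band, so the retained set spans the whole distribution.
    (Illustrative only -- not Lightstep's real retention logic.)"""
    rng = random.Random(seed)
    kept, buckets = [], defaultdict(list)
    for t in traces:
        if t["error"]:
            kept.append(t)  # errors are always retained
        else:
            # bucket by order of magnitude of the latency in ms
            band = len(str(int(t["latency_ms"])))
            buckets[band].append(t)
    for band_traces in buckets.values():
        rng.shuffle(band_traces)
        kept.extend(band_traces[:per_bucket])  # spread across bands
    return kept

# Synthetic traces spanning ~1ms to ~1s, plus one errored outlier
traces = (
    [{"latency_ms": 10 ** (i % 4) + i, "error": False} for i in range(40)]
    + [{"latency_ms": 5000, "error": True}]
)
sampled = sample_traces(traces)
```

The point of the sketch is the retention *bias*: a uniform random sample of 13 traces would almost never include the rare slow or errored requests that matter most during a regression.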
Let’s take a look at one of these scenarios:
Here is a service dependency map of our application in question. There are several web/mobile services that communicate with a set of backend services by going through an API gateway.
In this scenario, we are investigating a problem with the iOS service. As you can see in the image below, an operation called update-catalog is performing slowly: its P99 response time went from 266ms to 1.29s.
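A P99 regression like this one simply means the 99th percentile of the operation’s latency samples has moved. As a self-contained sketch (the sample data is invented for illustration), here is a nearest-rank P99 calculation showing how a heavier tail shifts the number:

```python
import math

def p99(latencies_ms):
    """Nearest-rank 99th percentile: the latency at or below which
    99% of the samples fall."""
    s = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(s))  # 1-based nearest rank
    return s[rank - 1]

baseline = [200 + i for i in range(100)]      # 200..299 ms, healthy
regressed = baseline[:-2] + [1290, 1295]      # a slow tail appears
```

With the healthy samples, p99(baseline) sits near the top of the normal range; once even 2% of requests slow down dramatically, p99(regressed) jumps to the outlier latency, which is why P99 is a useful early signal for tail regressions.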
As you saw in the original service dependency map above, the iOS service potentially hits 15 different downstream services. How do we understand which dependencies are hit by this single operation? How do we understand where the bottleneck actually originated? Do we wake up all the teams and put them into one big war room? Do we go ask the senior engineer who has been around forever (and knows where all the bodies are buried) for their expert opinion? Lightstep’s Operation diagrams can help even the most junior engineers understand and pinpoint performance bottlenecks and behaviors. Let’s see how.
Above is an image of Lightstep’s Operation diagram. We are able to analyze a large number of traces in aggregate and show not only the service and operation dependencies but also the aggregate time spent in each operation as well as any errors. Using this data, the system automatically shows you where the bottlenecks and errors are originating.
In the above scenario, we can quickly see that the iOS service’s update-catalog operation calls downstream to the Krakend API gateway, which in turn calls the Authentication service as well as the Warehouse service. Within the Warehouse service, we see two large yellow circles on the database-update operations, indicating a high overall latency contribution. At a glance, we can see all the downstream services and operations that this one type of request hits, as well as where the bottlenecks are at a systemic level.
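The "size of the yellow circle" idea can be sketched as aggregating each operation’s *exclusive* time (its own duration minus time spent in its children) across traces. This is a hypothetical illustration, not Lightstep’s implementation; the span shape and the durations below are invented to mirror the request chain described above:

```python
from collections import defaultdict

def latency_contribution(spans):
    """Aggregate each (service, operation)'s exclusive time: its span
    duration minus the time spent in its direct child spans."""
    children = defaultdict(list)
    for s in spans:
        if s["parent"] is not None:
            children[s["parent"]].append(s)
    totals = defaultdict(float)
    for s in spans:
        child_time = sum(c["duration_ms"] for c in children[s["id"]])
        totals[(s["service"], s["operation"])] += s["duration_ms"] - child_time
    return dict(totals)

# One simplified trace of the update-catalog request chain
spans = [
    {"id": 1, "parent": None, "service": "ios",       "operation": "update-catalog",  "duration_ms": 1290},
    {"id": 2, "parent": 1,    "service": "krakend",   "operation": "proxy",           "duration_ms": 1250},
    {"id": 3, "parent": 2,    "service": "auth",      "operation": "check-token",     "duration_ms": 30},
    {"id": 4, "parent": 2,    "service": "warehouse", "operation": "database-update", "duration_ms": 1100},
]
totals = latency_contribution(spans)
```

Ranking `totals` surfaces the Warehouse service’s database-update operation as the dominant contributor, even though every span in the chain reports a long wall-clock duration: exclusive time is what separates "slow because it waited on a child" from "slow because of its own work."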
How does this help my team?
Using this information, we can easily identify the offending service (the Warehouse service) and quickly eliminate the need to notify every service team in the stack. In fact, we can rule out all but one of the service teams within this request chain, because we have pinpointed the problem to one service and two operations within that service.
Traditionally, this type of root cause analysis exercise would involve a large number of people, taking time away from other important tasks. But with Lightstep’s Operation diagrams and root cause analysis capabilities, we can easily pinpoint the cause and remove the need to involve so many resources unnecessarily.
Having the ability to visualize the operation and service flow of your important requests in aggregate allows you to quickly identify systemic problems. This type of analysis has traditionally been impossible because tracing systems did not analyze your entire traffic volume and could therefore not provide an aggregated, systemic view of your requests. The best they could do was show you the performance of a single request. With Lightstep’s Operation diagrams and root cause analysis capabilities, this type of analysis becomes quick and simple for anyone on your team.
If you would like to find out more about this subject or learn more about Lightstep, please feel free to reach out to us here or on social @LightstepHQ.
Interested in joining our team? See our open positions here.
April 7, 2021
About the author
Andrew Chee