
Monitoring MySQL with OpenTelemetry and Lightstep

Databases are essential components in most business applications. Among database servers commonly used today, MySQL is a widely adopted solution. To ensure that your MySQL-dependent applications have reliable and optimized databases supporting them, it's essential to monitor performance and usage metrics.

If a database like MySQL exhibits degraded performance, dependent applications—along with the end users of those applications—will suffer.

In MySQL, metrics can be found through the show global status command. You can use the OpenTelemetry Collector to gather these metrics and ingest them into an observability platform for automated, continual monitoring and alerting.

In this guide, I’ll cover:

  • A brief overview of metrics available from MySQL

  • How to set up the OpenTelemetry Collector to ingest MySQL metrics and then send them to Lightstep

  • How to use Lightstep to create charts and alerts to help with MySQL monitoring

MySQL Metrics

From the MySQL client command line, execute show global status to retrieve current metrics for your MySQL server. Here's some of the command output (truncated for readability):

mysql> show global status;
+-----------------------------------+-----------------------------------+
| Variable_name                     | Value                             |
+-----------------------------------+-----------------------------------+
| Aborted_clients                   | 0                                 |
| Aborted_connects                  | 0                                 |
| Acl_cache_items_count             | 0                                 |
| Binlog_cache_disk_use             | 0                                 |
| Binlog_cache_use                  | 0                                 |
| Binlog_stmt_cache_disk_use        | 0                                 |
| Binlog_stmt_cache_use             | 0                                 |
| Bytes_received                    | 387                               |
| Bytes_sent                        | 15728                             |
| Connection_errors_accept          | 0                                 |
| Connection_errors_internal        | 0                                 |
| Connection_errors_max_connections | 0                                 |
| Connection_errors_peer_address    | 0                                 |
| Connection_errors_select          | 0                                 |
| Connection_errors_tcpwrap         | 0                                 |
| Connections                       | 8                                 |
…
| Handler_commit                    | 588                               |
| Handler_delete                    | 8                                 |
| Handler_discover                  | 0                                 |
| Handler_external_lock             | 6295                              |
| Handler_mrr_init                  | 0                                 |
| Handler_prepare                   | 0                                 |
| Handler_read_first                | 41                                |
| Handler_read_key                  | 1726                              |
| Handler_read_last                 | 0                                 |
| Handler_read_next                 | 4038                              |
| Handler_read_prev                 | 0                                 |
| Handler_read_rnd                  | 0                                 |
| Handler_read_rnd_next             | 1172                              |
| Handler_rollback                  | 0                                 |
| Handler_savepoint                 | 0                                 |
| Handler_savepoint_rollback        | 0                                 |
| Handler_update                    | 332                               |
| Handler_write                     | 499                               |
…
| Prepared_stmt_count               | 0                                 |
| Queries                           | 5                                 |
| Questions                         | 4                                 |
…
| Table_locks_immediate             | 2                                 |
| Table_locks_waited                | 0                                 |
…
| Threads_cached                    | 0                                 |
| Threads_connected                 | 1                                 |
| Threads_created                   | 1                                 |
| Threads_running                   | 2                                 |
…
+-----------------------------------+-----------------------------------+


The list of MySQL metrics is extensive, and the MySQL documentation does an excellent job of explaining them and their significance. A few metrics are worth highlighting (you can spot-check them with the example queries after this list):

  • Threads_connected is the number of currently open connections, while Threads_running is the number of threads that aren't sleeping. By monitoring the number of active connections to your database server, you can spot excessive connections and assess how they might impact performance.

  • Connections is the number of connection attempts (successful or not) to the MySQL server.

  • Innodb_buffer_pool_reads is the number of read operations that MySQL’s storage engine, InnoDB, couldn't satisfy from the buffer pool and required a read from the disk. If this value begins to increase unexpectedly, then it might indicate caching or querying issues.

  • Innodb_buffer_pool_size is the total available size (in bytes) of the buffer pool, which is used by InnoDB to cache table and index data. Innodb_buffer_pool_bytes_data, on the other hand, gives the total number of bytes in the buffer pool currently containing data.
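
If you want to spot-check these counters without scrolling through the full show global status output, you can filter for them directly from the MySQL client. Here's a quick sketch (note that the configured buffer pool size is reported by SHOW VARIABLES rather than by SHOW GLOBAL STATUS):

mysql> SHOW GLOBAL STATUS LIKE 'Threads_%';
mysql> SHOW GLOBAL STATUS LIKE 'Connections';
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';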

Because the list of available metrics is so long, it's worth becoming familiar with what's there so you can fine-tune your monitoring for the best results. Now, let's look at how to use the OpenTelemetry Collector to gather metrics from MySQL and export them to Lightstep.


Configure OpenTelemetry Collector

The MySQL receiver integration for the OpenTelemetry Collector gathers metrics from the global status and InnoDB tables. The specific metrics captured can be found here.

Install the Collector

Because the MySQL receiver is part of the contrib distribution of the OpenTelemetry Collector (GitHub), you need to install that distribution. Follow the installation instructions for your environment. For this demo, running on a Debian-based Linux host, install the Collector like this:

$ wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.72.0/otelcol-contrib_0.72.0_linux_amd64.deb
$ sudo dpkg -i otelcol-contrib_0.72.0_linux_amd64.deb
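
If you'd rather run the Collector in a container than install a package, the contrib distribution is also published as a Docker image (otel/opentelemetry-collector-contrib). Here's a rough sketch, assuming you mount a configuration file like the one built later in this guide and use host networking so the container can reach MySQL on localhost:3306:

$ docker run --rm --network host \
    -v $(pwd)/config.yaml:/cfg/config.yaml \
    otel/opentelemetry-collector-contrib:0.72.0 \
    --config /cfg/config.yaml

The rest of this guide assumes the .deb installation managed by systemd.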


Next, verify that the Collector is running:

$ sudo systemctl status otelcol-contrib
● otelcol-contrib.service - OpenTelemetry Collector Contrib
     Loaded: loaded (/lib/systemd/system/otelcol-contrib.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-03-30 08:56:30 MST; 4s ago
   Main PID: 380502 (otelcol-contrib)
      Tasks: 14 (limit: 18864)
     Memory: 137.9M
     CGroup: /system.slice/otelcol-contrib.service
             ├─380502 /usr/bin/otelcol-contrib --config=/etc/otelcol-contrib/config.yaml
             └─380519 /usr/bin/dbus-daemon --syslog --fork --print-pid 4 --print-address 6 --session


Configure the Collector

The configuration file for the Collector can be found at /etc/otelcol-contrib/config.yaml. You’ll modify this file, adding the MySQL receiver. For this configuration, you need to specify a MySQL endpoint (localhost:3306), user credentials, and how often the receiver should collect metrics (every 30 seconds).

receivers:
  mysql:
    endpoint: localhost:3306
    username: root
    password: [ENTER YOUR MYSQL ROOT PASSWORD]
    collection_interval: 30s

processors:
  batch:

exporters:
  logging:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [mysql]
      processors: [batch]
      exporters: [logging]


Make sure that the MySQL user you configure has permission to run show global status and read the InnoDB tables.
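
For example, here's a minimal sketch of creating a dedicated monitoring user instead of using root (the otelcol name is illustrative, and the exact privileges you need can vary by MySQL version and receiver options):

mysql> CREATE USER 'otelcol'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
mysql> GRANT PROCESS, SELECT ON *.* TO 'otelcol'@'localhost';

You would then reference this user in the receiver's username and password settings.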

Along with the MySQL receiver, this configuration uses the batch processor and the logging exporter. For now, the logging exporter lets you verify that the Collector is working properly; later, you'll swap it out to export your metrics to Lightstep.

Next, save your configuration file, then restart the Collector.

$ sudo systemctl restart otelcol-contrib


Then, verify that the Collector is gathering metrics from MySQL:

$ journalctl -u otelcol-contrib -f
…
Everything is ready. Begin running and processing data.
Mar 30 08:57:01 demo otelcol-contrib[380502]: 2023-03-30T08:57:01.377-0700        info        MetricsExporter        {"kind": "exporter", "data_type": "metrics", "name": "logging", "#metrics": 22}
Mar 30 08:57:01 demo otelcol-contrib[380502]: 2023-03-30T08:57:01.377-0700        info        ResourceMetrics #0
Mar 30 08:57:01 demo otelcol-contrib[380502]: Resource SchemaURL:
Mar 30 08:57:01 demo otelcol-contrib[380502]: Resource attributes:
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> mysql.instance.endpoint: Str(localhost:3306)
Mar 30 08:57:01 demo otelcol-contrib[380502]: ScopeMetrics #0
Mar 30 08:57:01 demo otelcol-contrib[380502]: ScopeMetrics SchemaURL:
Mar 30 08:57:01 demo otelcol-contrib[380502]: InstrumentationScope otelcol/mysqlreceiver 0.68.0
Mar 30 08:57:01 demo otelcol-contrib[380502]: Metric #0
Mar 30 08:57:01 demo otelcol-contrib[380502]: Descriptor:
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Name: mysql.buffer_pool.data_pages
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Description: The number of data pages in the InnoDB buffer pool.
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Unit: 1
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> DataType: Sum
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> IsMonotonic: false
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> AggregationTemporality: Cumulative
Mar 30 08:57:01 demo otelcol-contrib[380502]: NumberDataPoints #0
Mar 30 08:57:01 demo otelcol-contrib[380502]: Data point attributes:
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> status: Str(dirty)
Mar 30 08:57:01 demo otelcol-contrib[380502]: StartTimestamp: 2023-03-30 15:56:31.100878568 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Timestamp: 2023-03-30 15:57:01.121721216 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Value: 0
Mar 30 08:57:01 demo otelcol-contrib[380502]: NumberDataPoints #1
Mar 30 08:57:01 demo otelcol-contrib[380502]: Data point attributes:
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> status: Str(clean)
Mar 30 08:57:01 demo otelcol-contrib[380502]: StartTimestamp: 2023-03-30 15:56:31.100878568 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Timestamp: 2023-03-30 15:57:01.121721216 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Value: 971
Mar 30 08:57:01 demo otelcol-contrib[380502]: Metric #1
Mar 30 08:57:01 demo otelcol-contrib[380502]: Descriptor:
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Name: mysql.buffer_pool.limit
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Description: The configured size of the InnoDB buffer pool.
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> Unit: By
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> DataType: Sum
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> IsMonotonic: false
Mar 30 08:57:01 demo otelcol-contrib[380502]:      -> AggregationTemporality: Cumulative
Mar 30 08:57:01 demo otelcol-contrib[380502]: NumberDataPoints #0
Mar 30 08:57:01 demo otelcol-contrib[380502]: StartTimestamp: 2023-03-30 15:56:31.100878568 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Timestamp: 2023-03-30 15:57:01.121721216 +0000 UTC
Mar 30 08:57:01 demo otelcol-contrib[380502]: Value: 134217728
…

Once you've confirmed that metrics are flowing, you're ready to export them to Lightstep.


Send metrics from OpenTelemetry Collector to Lightstep

To send your MySQL metrics to Lightstep, you’ll first need to set up a new account.

01-SQL-lightstep-sign-up


Next, create an access token. The OpenTelemetry Collector will use this token to authenticate requests when exporting metrics to your Lightstep account.

Navigate to Project Settings > Access Tokens. Click on Create New to create a new access token. Copy the token value.

02-SQL-access-token

Configure the Collector to export to Lightstep

Go back to your OpenTelemetry Collector configuration file (/etc/otelcol-contrib/config.yaml). Remove the logging exporter, and then set up an exporter to export data to Lightstep with OTLP. The resulting file looks like this:

receivers:
  mysql:
    endpoint: localhost:3306
    username: root
    password: [ENTER YOUR MYSQL ROOT PASSWORD]
    collection_interval: 30s

processors:
  batch:

exporters:
  otlp/lightstep:
    endpoint: ingest.lightstep.com:443
    headers: {"lightstep-access-token": "INSERT YOUR TOKEN HERE"}

service:
  pipelines:
    metrics:
      receivers: [mysql]
      processors: [batch]
      exporters: [otlp/lightstep]

Note: Don’t forget to insert the access token.
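
If you'd rather not keep the token in plaintext in the configuration file, the Collector supports environment-variable substitution in its configuration, so you could reference a variable instead. This assumes LIGHTSTEP_ACCESS_TOKEN is exported in the environment the Collector runs under (for example, via a systemd drop-in or the unit's environment file):

exporters:
  otlp/lightstep:
    endpoint: ingest.lightstep.com:443
    # LIGHTSTEP_ACCESS_TOKEN is an environment variable you define; name it however you like
    headers: {"lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"}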


After saving the configuration file, restart the Collector.

$ sudo systemctl restart otelcol-contrib


Now, as the Collector gathers metrics from MySQL, it will send them in batches to Lightstep.
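
Because the logging exporter is gone, the journal will no longer print full metric payloads. One quick sanity check is to watch the Collector's logs for exporter errors (for example, authentication failures caused by a bad token):

$ journalctl -u otelcol-contrib -f | grep -i error

If nothing shows up and data appears in Lightstep, the export is working.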

Work with metrics in Lightstep

To get started, create a dashboard to house all of your charts and visualizations for MySQL-related metrics. (You can follow along as this guide creates some basic charts, but there are additional examples here.)

Create a dashboard

Create a dashboard and assign a name and a description.

03-SQL-name-dashboard-image


Add a chart

For the first chart, you'll graph the buffer pool usage against your buffer pool limit. In MySQL, InnoDB uses the buffer pool to cache table and index data. You want to make sure that your buffer pool usage doesn't get too close to the limit, or you'll encounter performance issues.

First, click Add a chart. Then, assign a name for the chart (“Buffer Pool Usage vs. Limit”).

04-name-chart-buffer-pool-usage-vs-limit-image


Lightstep offers two ways to build queries. You can use the Query Builder, with its helpful UI, or you can write your queries directly in the Query Editor if you’re familiar with Lightstep’s Unified Querying Language (UQL). For illustration purposes, we’ll use the Query Builder.

To build the chart, start by selecting the telemetry type that you want (Metric).

05-select-telemetry-type-image


Because the metrics listed will include those from all of your components and receivers, you can narrow the list by typing mysql in the search box. From the list of metrics displayed, select mysql.buffer_pool.usage.

06-choose-mysql-buffer-pool-usage-image

Immediately, you'll see a line chart that shows your buffer pool usage over the last 60 minutes.

07-usage-60-minutes-image

If you're interested in a specific time window, you can adjust the time range displayed. For example, you can set the time range to the last 10 minutes.

08-adjust-time-scale-image


After you apply the time range, the chart is updated to show buffer pool usage for the last 10 minutes.

09-usage-10-minutes-image

If you're also interested in how this compares to the buffer pool limit, click on Plot another metric to show both lines.

10-plot-another-metric-image

For this metric, query for mysql.buffer_pool.limit. Now, the chart shows two lines—a purple line for the buffer pool usage, and a blue line for your buffer pool limit.

11-buffer-pool-both-lines-shown-image

Now, save the chart and add it to your dashboard.

12-dashboard-one-chart-image



Add a chart with a formula

Seeing the usage and limit lines together is helpful, but if you want to see the actual percentage of your buffer pool usage relative to the limit, you'll need to create another chart.

For this chart, which you’ll call “InnoDB Buffer Pool Usage”, your first two metrics will be identical to the two metrics from the previous chart. Next, click Add a formula.

13-add-a-formula-image

The formula that you'll use is ((a/b)*100). This formula divides the usage by the limit and expresses the quotient as a percentage.
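
For example, if the buffer pool usage (a) were 33,554,432 bytes against the 134,217,728-byte limit (b) reported in the Collector log earlier, the formula would evaluate to (33554432 / 134217728) * 100 = 25, meaning the buffer pool is 25% full. (The usage value here is made up purely to illustrate the math.)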

14-show-formula-image

You don’t want a line chart for this chart; you only want to see the final percentage value. To do this, modify what is displayed by unchecking metrics a and b.

15-display-query-uncheck-a-b-image

Next, change the chart type to view as Big Number.

16-view-as-big-number-image



You're left with a single big number display of the buffer pool usage percentage (relative to the limit). Add “%” as the chart subtitle so that the units are clear.

17-usage-as-percentage-image



Add other charts

You can apply these same techniques to show other MySQL metrics side by side.

For example, you can add a chart called “Buffer Pool Operations (per second): Reads vs. Write Requests”. This chart uses the mysql.buffer_pool.operations metric, but it filters the first metric to capture the operation=reads data point and the second metric to capture the operation=write_requests data point. Because this metric comes in as a cumulative counter, you're interested in how it changes over time, so you'll aggregate the data points to show the rate of change.

18-buffer-pool-operations-metrics-image

The resulting line chart looks like this:

19-buffer-pool-operations-chart-image

You can create another chart called “Threads: Connected vs. Running”. This chart looks at the mysql.threads metric, filtering down to the kind=connected and kind=running data points and showing the latest values.

20-threads-metrics-image

The chart ends up looking like this:

21-threads-chart-image


Saved charts are added to your dashboard. Over time, the dashboard starts to take on a useful shape.

22-dashboard-four-charts-image


Work with alerts and notifications

While it’s helpful to monitor your metrics, you can’t be certain that somebody will be watching this dashboard every moment of the day. That’s where alerts and notifications come in handy.

Let’s say you want to alert your team when the buffer pool usage percentage exceeds a critical threshold of 80%. Start by opening your “InnoDB Buffer Pool Usage” chart. From there, click Create an alert.

23-create-alert-button-image

Because of how you've set up the big number chart (displaying the percentage, but not displaying the a or b metrics), Lightstep determines what you're probably interested in and configures the alert to track the ((a/b)*100) formula.

From there, you can set a critical threshold of 80. You could even set a warning threshold of 50.

24-alert-configuration-image

Then, under Notification rules, you can specify how you want Lightstep to notify you whenever the threshold condition is met. You can set up a notification to PagerDuty, Slack, BigPanda, or even a generic Webhook.

25-notification-rule-image


After you complete the configuration, save the alert. From here, Lightstep will track your buffer pool usage percentage and notify you if that number creeps too high.


Conclusion

As organizations look to trim costs and increase operational efficiency, observability is key. With Lightstep, you can capture helpful metrics that not only optimize the health of your tech estate but also help minimize risk and overhead while improving productivity.


When you’re ready to learn more about how the OpenTelemetry Collector and Lightstep can help you on your observability journey, schedule a demo!

April 11, 2023
9 min read
Technical

About the author

Andrew Gardner