Using MySQL Shell to Get MySQL Diagnostic Information
Collecting comprehensive diagnostic information can be daunting. Knowing what queries to run to get the needed data is more of an art form than anything else. Fortunately, for those not that artistically inclined, MySQL Shell makes it easier to get this information. Let’s take a look.
The Setup
Before we can get started, we need to connect to a MySQL instance. For this demo, I am using a production MySQL database for a web application I wrote to help me manage a golf league.
When I first tried to get diagnostic information, I received a message that I needed to change two global variables: slow_query_log needed to be set to ON, and log_output needed to be set to TABLE. Your mileage may vary.
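If you see the same message, both settings can be changed at runtime with standard SQL (the values below are the ones the utility asked for in my case):

```sql
-- Log slow queries, and write them to the mysql.slow_log table
-- so the diagnostics utility can collect them.
SET GLOBAL slow_query_log = ON;
SET GLOBAL log_output = 'TABLE';
```

Note that SET GLOBAL does not survive a server restart; use SET PERSIST instead if you want the change to stick.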
The Command
The MySQL Shell command we run to gather the diagnostic information is util.debug.collectDiagnostics(). This method accepts two arguments:
- The path to where the data file will be saved.
  - This argument is required.
  - This is the path on the machine running MySQL Shell, not on the server we are connected to.
  - If you provide a path, a file will be created with a name similar to: mysql-diagnostics-<timestamp info>.zip.
  - You will need to use an absolute path here. You will receive an error if you use a path similar to ~/path/to/folder. This is a known issue and has been reported.
- An options JSON object.
  - This argument is optional.
The command I ran against my MySQL instance was:
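Based on the description that follows, the call looked something like this (run from MySQL Shell in JavaScript mode; the home-directory path is illustrative, so substitute your own absolute path):

```js
// Absolute path on the machine running MySQL Shell, not the server.
// The slowQueries option adds slow query data to the capture and
// requires slow_query_log = ON and log_output = TABLE (see above).
util.debug.collectDiagnostics("/home/myuser/diag/", {slowQueries: true})
```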
When completed, this command will create a file named mysql-diagnostics-<timestamp info>.zip in the diag folder under my user's home directory. The data collected will also include information about slow-running queries.
After the command finished, the console showed the following output:
The Information
After running this command, this is the file I saw in the diag folder:
I have to admit, I was surprised at the size of the zipped file (just over 10MB). Even though the database has been used for over a decade, it does not contain a lot of data.
The Files
Here is the list of files included in the .zip file.
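If you want to browse the archive without extracting it, any zip tool will do; for example (the filename matches whatever timestamped name the utility generated for you):

```bash
# List the contents of the diagnostics archive
unzip -l mysql-diagnostics-*.zip
```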
Even when you consider that each file is duplicated (one in yaml format and one in tab-separated format), that is a lot of information gathered with just a single command. We can see files for global variables, replication information, memory usage, binary log status, slow query information (because we asked for it), and a slew of other data.
Taking a Look at the Data
Let’s take a look at the contents of some of these files.
Global Variables
I am going to start with 0.global_variables.tsv. Here is the top of that file (I am not going to show the entire file because it is pretty long).
The complete file lists all the global variables for my server.
When I first opened this file, I was pleased to see that the query run to get this information is included at the top of the file. This way, if I need to make any changes to my system, I can check those changes by running the query rather than rerunning the entire diagnostic collection.
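For example, after changing one of the variables from the setup section, a quick check like this confirms the new value without rerunning the whole collection (standard MySQL syntax):

```sql
-- Verify the current value of a single global variable
SHOW GLOBAL VARIABLES LIKE 'slow_query_log';
```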
I think the yaml files are easier to read. Here are the same variables as above, but in yaml format.
Binary Log Status
Some files may contain little data, but that does not mean the data is less critical. Let's take a look at 0.SHOW_BINARY_LOG_STATUS.tsv.
And here is the yaml version.
These files both show us the name of the current bin log and the position in the file that references the last committed transaction.
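The file name tells us the statement that produced it. On MySQL 8.2 and later, that statement is:

```sql
-- Shows the current binary log file name and position
-- (on older versions, the equivalent is SHOW MASTER STATUS)
SHOW BINARY LOG STATUS;
```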
Memory Usage
Several files relate to memory usage. Let's look at 0.sys.memory_by_user_by_current_bytes.tsv, which shows memory usage by each database user.
We can see that the user named db_user utilizes the most memory. This is likely because it is the only defined user that does any read or write operations on the database. Because I don't use root unless absolutely necessary (I can't even recall the last time I logged in as root), that user is not currently using any resources.
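This file mirrors a sys schema view of the same name, so the same data is available interactively with a simple query:

```sql
-- Currently allocated memory per user, ordered highest first
SELECT * FROM sys.memory_by_user_by_current_bytes;
```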
Once again, I find the yaml file much easier to read. Here is what the contents of that file look like.
Slow Queries
Since we specifically asked for slow query data, let's examine one of the slow query files. Here are the contents of 0.slow_queries_in_95_pctile.tsv.
For longer queries, we will not see the entire query, but we get enough that we should be able to identify the query in our code. Here is the yaml version so you can see the data more easily.
What stands out to me is how many rows are examined in some of these queries compared to how many are returned. I need to investigate further and tweak the queries or the schema to make them more performant.
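A good first step for that investigation is running EXPLAIN on one of the offending statements; the table and column names below are hypothetical stand-ins for one of my league queries:

```sql
-- Check the execution plan: a type of ALL signals a full table
-- scan, and an empty key column suggests a missing index.
EXPLAIN SELECT player_id, score FROM scores WHERE league_id = 42;
```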
The Wrap-Up
With MySQL, we can gather copious amounts of information about our database server and our schemas. We would typically run multiple queries to collect this information. Using MySQL Shell, we can get more diagnostic information than we likely need with a single command. This information is provided in a tab-separated format and in yaml format. To learn more about gathering diagnostic information or the options available when running util.debug.collectDiagnostics(), check out the documentation.
Related Entries
- Running External Scripts in MySQL Shell
- Executing System Commands in MySQL Shell
- Getting Help in MySQL Shell
- Sandboxes in MySQL Shell
- Server Upgrade Checks with MySQL Shell
- MySQL Connection Status with MySQL Shell
- Managing MySQL Shell Configuration Options
- Advanced Data Dump with MySQL Shell
- Advanced Data Load with MySQL Shell
- MySQL Shell CLI Integration
- MySQL Shell Table Export Utility
- Copy Data Directly To A MySQL Instance With MySQL Shell
- Create an InnoDB Replica Set With MySQL Shell
- Customizing The MySQL Shell Prompt
- Extending MySQL Shell with Startup Scripts
- Extending MySQL Shell with Plugins