Planet MySQL

  1. MySQL 8.0.22 was released roughly a week ago. It includes some nice additions to replication that we would like to call out. Here they are: Automatic Asynchronous Replication Connection Failover (WL#12649). This work, by Hemant Dangi, implements a mechanism in asynchronous replication that makes the replica automatically try to re-establish an asynchronous replication connection to another replication source in case the current connection gets interrupted; New terminology for replica-related statements (WL#14171). …
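The failover source list introduced by WL#12649 is managed through SQL functions. A minimal sketch, where the host name, weight and channel name 'ch1' are illustrative assumptions:

```sql
-- Add a candidate source to the failover list for channel 'ch1'
-- (arguments: channel, host, port, network_namespace, weight).
SELECT asynchronous_connection_failover_add_source(
         'ch1', 'source2.example.com', 3306, '', 50);

-- Inspect the configured failover sources.
SELECT * FROM performance_schema.replication_asynchronous_connection_failover;
```

Note that the channel itself must also be configured with SOURCE_CONNECTION_AUTO_FAILOVER = 1 in its CHANGE MASTER TO statement for the failover mechanism to engage.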
  2. Software development is always moving forward, and the latest version is always the best... until the next one arrives. When you visit the MySQL Ndb Cluster downloads page, you are naturally recommended to pull the latest MySQL Cluster 8.0.22 (announcement), but sometimes it is good to look back at the journey taken to reach this point.

7.x release series

Prior to the 8.0 releases, MySQL Ndb Cluster had a sequence of 7.x (x=0..6) releases based on MySQL Server versions 5.1, 5.6 and 5.7. In each of the 7.x release series, MySQL Ndb Cluster was under feature development for some time, producing a number of minor releases, until eventually one minor release was validated as acceptable as the first generally available (GA) version. Once a release series 7.x was generally available, further changes in that series were confined to bug fixes and small improvements, with new feature development only becoming available in the next release series 7.x+1.

Each of the 7.x releases significantly improved and extended MySQL Ndb Cluster, improving SQL execution, threading, performance, async replication, checkpointing, recovery, functionality etc., as well as picking up a number of fixes and minor improvements discovered by customers, users and our own QA. MySQL Ndb Cluster includes all of the functionality of MySQL Server, plus the unique Ndb Cluster software. As a result, each MySQL Ndb Cluster release series has also benefited from improvements in the related MySQL Server version: new features, optimisations and bug fixes.

7.5 release series

The 7.5 release series was based on the MySQL Server 5.7 release, becoming GA (announcement) with version 7.5.4 in October 2016. The new features included:
- Support for JSON columns.
- Stored or virtual generated columns.
- Read + join performance optimisations: read from any replica, read from any node.
- Improved performance, memory management, usability.
- Use of InnoDB for replication state management.
Since then we have released a further sixteen minor versions of the 7.5 series, one every quarter. Each minor version includes small bug fixes and improvements to both the Cluster and MySQL Server software, with the current latest version being 7.5.20, released in October 2020.

7.6 release series

The 7.6 release series was also based on the MySQL Server 5.7 release series, becoming GA (announcement) with version 7.6.6 in May 2018. The new features included:
- Optimised checkpoints giving reduced disk space usage, reduced write amplification and reduced recovery time, for increased availability and increased practical data limits.
- Improved distributed SQL join support.
- Unified memory management.
- Optimised node-local communication.
- Optimised distribution awareness.
- New parallel import utility.
- Many performance improvements.

Since the 7.6.6 release of MySQL Cluster 7.6, we have released a further ten minor versions, with small bug fixes and improvements. The current latest version is 7.6.16, released in October 2020.

8.0 release series

The MySQL Cluster 8.0 series is based on the MySQL Server 8.0 series, which first became generally available (announcement) as release 8.0.11 in April 2018. The MySQL Server 8.0 release included many features in this first version, and has been continuously extended in each of the eleven releases since. MySQL Ndb Cluster is following this release model with its 8.0 releases, with new features arriving in each minor release. The first generally available release of MySQL Cluster 8.0 was 8.0.19 (announcement), released in January 2020.

MySQL Cluster 8.0.19 (January 2020)

Features included:
- Simplified configuration, elastic memory management.
- Better support for 3 and 4 replica clusters.
- Support for up to 144 data nodes in a cluster.
- Improved multithreading for parallel backup and restore.
- Increased row size limit.
- Improved distributed SQL execution performance and efficiency.
- Disk stored columns performance and stability improvements.
Integration with MySQL Server 8.0.19 (transactional MySQL data dictionary, many other MySQL Server features). Usability improvements.

Since 8.0.19 there have been three further releases of MySQL Ndb Cluster 8.0:

MySQL Cluster 8.0.20 (April 2020)

Features included:
- Further backup restoration process parallelism.
- Parallel intra-nodegroup links for high write volume applications.
- Improved distributed SQL processing of outer and semi joins.
- Performance improvements.
- Data integrity tooling improvements.

MySQL Cluster 8.0.21 (July 2020)

Features included:
- Support for up to 16TB of in-memory data per data node.
- Read any replica tables by default.
- Further disk stored column performance and monitoring improvements.
- Improved distributed schema synchronisation.

MySQL Cluster 8.0.22 (October 2020)

Features included:
- Support for generating and restoring encrypted backups.
- Support for IPv6 addressing between all cluster node types.
- Improved performance for Blob column insert, update and delete.
- Improved replication applier performance for Blob columns.

As you can see, MySQL Ndb Cluster is constantly evolving and improving, with several new releases every quarter. The 8.0 releases continue to bring new capabilities on a regular basis, and there is no shortage of ideas in the pipeline. For more information about MySQL Ndb Cluster, you can check the official documentation, see the getting started guide, follow a tutorial, read blogs, watch a video, check out recent Ndb 8.0 101, architecture or performance slides, or even read a book or two.
  3. MySQL on ARM is gaining consistent momentum, and the community is excited about it. Beyond performance, users also tend to explore other aspects like feature set, ecosystem, support, etc. Let's explore what users would gain or lose by moving to MySQL on ARM.

Evaluation aspects

There are four main aspects users tend to consider while migrating a database environment:
- Feature set
- Performance
- Ecosystem
- Community support

Let's analyze MySQL on ARM from these perspectives.

Feature set

MySQL on ARM supports all the features that MySQL has to offer. We are not aware of any feature that doesn't work or has been marked as beta on ARM. This means you don't lose on the feature front if you decide to run MySQL on ARM. Beyond the mainline features, binlog replication, group replication, built-in plugins and authentication/security plugins all work fine. MySQL has also been actively fixing bugs found on ARM with the same priority as on other supported platforms.

Performance

MySQL's recent performance work is tuned for the increasing number of cores, and as a result we see increasing use of distributed counters, locks, etc. All these improvements suit ARM, as ARM is all about more cores. Lately, we also saw MySQL fold in a few performance patches from the community that were ARM specific. There are still 30+ patches pending in the queue; hopefully all of them will get attention now. Even without these patches, MySQL continues to perform and scale consistently on ARM, so the patches are just going to make it better and better. Of course, we will see some real performance numbers along with some analysis in part 2 of this blog.

Ecosystem

When users think of migrating to a new database environment, they care about the surrounding ecosystem components too. Fortunately, a lot of ecosystem components for MySQL are already present on ARM. Let's look at them.
- Backup: Percona XtraBackup (community evaluated)
- High availability: binlog replication, InnoDB Cluster (inherent to server)
- Load balancer: MySQL Router, ProxySQL (work in progress)
- Tools: MySQL Shell, Percona Toolkit (community evaluated)
- Monitoring: Percona Monitoring and Management (PMM) (community evaluated)
- Connector: available from MySQL.

This helps mark the stack complete, as there is at least one tool from each category available on ARM, and more tools will be added in the near future.

Community support

One of the reasons, I believe, MySQL is so successful is the community support it enjoys. #dbonarm as a wider initiative is gaining a lot of traction, and MySQL is no exception. The #mysqlonarm community is expanding: lots of users and developers are getting interested in evaluating and contributing to help improve MySQL on ARM. There is a dedicated mysql-community-slack channel (look for #mysqlonarm). Developers from varied organizations are contributing patches and providing feedback.

Conclusion

#mysqlonarm is going to be the next big thing in the MySQL space. During this pandemic time, all organizations have become cost conscious, and if you get a solution that helps you save on cost and increase performance, then that is the solution to go with. Stay tuned for part 2 of this blog series, where we will see the performance numbers and analyze the results. If you have more questions or queries, do let me know. I will try to answer them.
  4. Google Cloud SQL for MySQL is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud Platform. However, there are differences between Cloud SQL and standard MySQL functionality, like limited control, restricted resources, data locality, budget and security, which may influence your final decision to move out of the Google Cloud SQL instances and host the database service in an on-premises infrastructure instead. This blog post will walk you through how to perform an online migration from Google Cloud SQL to an on-premises server. Our target database on the on-premises side is a Debian server, but the steps and procedures should apply to other Linux distributions as well, as long as packages are properly installed.

Our Google Cloud MySQL instance is running on MySQL 5.7, and what we need is:
- A replication slave user created on the master.
- The slave must be installed with the same major version as the master.
- SSL must be enabled for geographical replication, for security reasons.

Since Google Cloud enables GTID replication for MySQL by default, we are going to do a migration based on this replication scheme. Hence, the instructions described in this post should also work for MySQL 8.0 instances.

Creating a Replication Slave User

First of all, we have to create a replication slave user on our Google Cloud SQL instance. Log in to the Google Cloud Platform -> Databases -> SQL -> pick the MySQL instance -> Users -> Add User Account and enter the required details. The host value is the slave's public IP address, located on-premises, that is going to replicate from this instance. As you can see, there is no privileges configuration, since users created from this interface will have the highest privileges Google Cloud SQL can offer (almost everything except SUPER and FILE). To verify the privileges, we can use the following command:

mysql> SHOW GRANTS FOR slave@\G
*************************** 1.
row ***************************
Grants for slave@ GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE ON *.* TO 'slave'@'' WITH GRANT OPTION

It looks like our slave user has the required permission to run as a slave (REPLICATION SLAVE).

Taking a mysqldump Backup

Before we create an external mysqldump backup, we need to configure the client's SSL certificates because of the risk of connecting to the instance via a public network. To do this, go to Connections -> Configure SSL client certificates -> Create a client certificate. Download the above files (server-ca.pem, client-cert.pem and client-key.pem) and store them inside the slave server. We are going to use these certificates to connect to the master securely from the slave server. To simplify the process, all of the above certificates and the key file will be put under a directory called "gcloud-certs":

$ mkdir -p /root/gcloud-certs # put the certs/key here

Make sure the permissions are correct, especially for the private key file, client-key.pem:

$ chmod 600 /root/gcloud-certs/client-key.pem

Now we are ready to take a mysqldump backup from our Google Cloud SQL MySQL 5.7 instance securely:

$ mysqldump -uroot -p \
    -h \
    --ssl-ca=/root/gcloud-certs/server-ca.pem \
    --ssl-cert=/root/gcloud-certs/client-cert.pem \
    --ssl-key=/root/gcloud-certs/client-key.pem \
    --single-transaction \
    --all-databases \
    --triggers \
    --routines > fullbackup.sql

You should get the following warning:

"Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF.
To make a complete dump, pass --all-databases --triggers --routines --events."

The above warning occurs because we skipped the --events flag, which requires the SUPER privilege. The root user created for every Google Cloud SQL instance does not come with the FILE and SUPER privileges. This is one of the drawbacks of this method: MySQL events can't be migrated from Google Cloud SQL.

Configuring the Slave Server

On the slave server, install MySQL 5.7 for Debian 10:

$ echo 'deb buster mysql-5.7' > /etc/apt/sources.list.d/mysql.list
$ apt-key adv --keyserver --recv-keys 5072E1F5
$ apt update
$ apt -y install mysql-community-server

Then, add the following lines under the [mysqld] section inside /etc/mysql/my.cnf (or any other relevant MySQL configuration file):

server-id = 1111 # different value than the master
log_bin = binlog
log_slave_updates = 1
expire_logs_days = 7
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = 1
sync_binlog = 1
report_host = # IP address of this slave

Restart the MySQL server to apply the above changes:

$ systemctl restart mysql

Restore the mysqldump backup on this server:

$ mysql -uroot -p < fullbackup.sql

At this point, the MySQL root password of the slave server has become identical to the one in Google Cloud SQL, so you should log in with that root password from now on. Take note that the root user in Google Cloud doesn't have full privileges. We need to make some modifications on the slave side by allowing the root user to have all the privileges inside MySQL, since we have more control over this server. To do this, we need to update MySQL's user table.
Log in to the slave's MySQL server as the MySQL root user and run the following statement:

mysql> UPDATE mysql.user SET Super_priv = 'Y', File_priv = 'Y' WHERE User = 'root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

Flush the privileges table:

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Exit the current terminal and log in again. Run the following command to verify that the root user now has the highest level of privileges:

mysql> SHOW GRANTS FOR root@localhost;
+---------------------------------------------------------------------+
| Grants for root@localhost                                           |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+

Setting up the Replication Link

For security reasons, the replication slave user has to connect to the master host (the Google Cloud instance) via an SSL-encrypted channel. Therefore, we have to prepare the SSL key and certificate with the correct permissions, accessible by the mysql user. Copy the gcloud-certs directory into /etc/mysql and assign the correct permissions and ownership:

$ mkdir -p /etc/mysql
$ cp -r /root/gcloud-certs /etc/mysql
$ chown -Rf mysql:mysql /etc/mysql/gcloud-certs

On the slave server, configure the replication link as below:

mysql> CHANGE MASTER TO
    MASTER_HOST = '',
    MASTER_USER = 'slave',
    MASTER_PASSWORD = 'slavepassword',
    MASTER_AUTO_POSITION = 1,
    MASTER_SSL = 1,
    MASTER_SSL_CERT = '/etc/mysql/gcloud-certs/client-cert.pem',
    MASTER_SSL_CA = '/etc/mysql/gcloud-certs/server-ca.pem',
    MASTER_SSL_KEY = '/etc/mysql/gcloud-certs/client-key.pem';

Then, start the replication slave:

mysql> START SLAVE;

Verify the output as follows:

mysql> SHOW SLAVE STATUS\G
*************************** 1.
row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host:
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 1120160
Relay_Log_File: puppet-master-relay-bin.000002
Relay_Log_Pos: 15900
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1120160
Relay_Log_Space: 16115
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: Yes
Master_SSL_CA_File: /etc/mysql/gcloud-certs/server-ca.pem
Master_SSL_CA_Path:
Master_SSL_Cert: /etc/mysql/gcloud-certs/client-cert.pem
Master_SSL_Cipher:
Master_SSL_Key: /etc/mysql/gcloud-certs/client-key.pem
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2272712871
Master_UUID: 8539637e-14d1-11eb-ae3c-42010a94001a
Master_Info_File: /var/lib/mysql/
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: 8539637e-14d1-11eb-ae3c-42010a94001a:5611-5664
Executed_Gtid_Set: 8539637e-14d1-11eb-ae3c-42010a94001a:1-5664,
b1dabe58-14e6-11eb-840f-0800278dc04d:1-2
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:

Make sure the Slave_IO_Running and Slave_SQL_Running values are 'Yes', and that Seconds_Behind_Master is 0, which means the slave has caught up with the master.
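This health check can also be scripted. A minimal sketch, where the helper name and the sample text are illustrative; in practice you would feed the function from `mysql -e "SHOW SLAVE STATUS\G"`:

```shell
# Decide replication health from `SHOW SLAVE STATUS\G` text:
# succeed only when both replication threads report 'Yes'.
replication_healthy() {
  printf '%s\n' "$1" | grep -q 'Slave_IO_Running: Yes' &&
  printf '%s\n' "$1" | grep -q 'Slave_SQL_Running: Yes'
}

# Illustrative sample of the relevant status fields.
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

if replication_healthy "$sample"; then
  echo "replication healthy"
else
  echo "replication broken"
fi
```

A cron job or monitoring agent could call such a helper to alert on broken replication during the migration window.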
Notice that Executed_Gtid_Set has two GTID sets:

8539637e-14d1-11eb-ae3c-42010a94001a:1-5664
b1dabe58-14e6-11eb-840f-0800278dc04d:1-2

The first GTID set represents the changes coming from the current master (the Google Cloud SQL instance), while the second represents the changes we made when we modified the privileges for the MySQL root user on the slave host. Pay attention to the first GTID set to see if the database is replicating correctly (the integer part should be incrementing while replicating).

Verify that our slave host is part of the replication from the master's point of view. Log in to the Cloud SQL instance as root:

$ mysql -uroot -p \
    -h \
    --ssl-ca=/root/gcloud-certs/server-ca.pem \
    --ssl-cert=/root/gcloud-certs/client-cert.pem \
    --ssl-key=/root/gcloud-certs/client-key.pem

And run the following statement:

mysql> SHOW SLAVE HOSTS;
*************************** 1. row ***************************
Server_id: 1111
Host:
Port: 3306
Master_id: 2272712871
Slave_UUID: b1dabe58-14e6-11eb-840f-0800278dc04d

At this point, you may plan your next move: redirect the database workload from the applications to this slave server as the new master, and decommission the old master in Google Cloud.

Final Thoughts

You can perform an online migration from Google Cloud SQL for MySQL to an on-premises server without much hassle. This gives you the possibility to move your database outside of the cloud vendors for privacy and control when the right time has come.
  5. In a perfect world, we would expect all software to run flawlessly and never have problems such as bugs and crashes. We also know that this perfect world doesn't exist, and we had better be as prepared as possible to troubleshoot those types of situations. Historically, generating core dumps has been a task delegated to the kernel. If you are curious about how to enable it via the Linux kernel, you can check out Getting MySQL Core file on Linux. There are a few drawbacks that pose either a limitation or a huge strain to get it working, such as:
- System-wide configuration is required. This is not something a DBA always has access to.
- It is impossible or very difficult to enable it for a specific binary only. The standard ways enable it for every piece of software running on the box.

Nowadays, with cloud and containers, this task has become even more difficult, because it sometimes requires containers to be running in privileged mode and the host OS to be properly configured by the provider. The above issues have driven the exploration of alternative ways to create a core dump to help troubleshoot bugs and crashes. More details can be found at PS-7114.

The Libcoredumper

The libcoredumper is a fork of the archived project google-coredumper. Percona has forked it under Percona-Lab Coredumper, cherry-picked a few improvements from other forks of the same project, and enhanced it to compile and work on newer versions of Linux as well as newer versions of GCC and CLANG. This project is a Tech Preview, as you may infer from the repository name (Percona Lab). We might not maintain compatibility with future kernel versions and/or patches. One should test the core dumper in their environment before putting this tool into production. We have tested on kernel versions up to 5.4. This functionality is present in all versions of Percona Server for MySQL and Percona XtraDB Cluster starting from 5.7.31 and 8.0.21.
If you compile PS/PXC from source, you can control whether the library is compiled by switching -DWITHCOREDUMPER to ON/OFF (the default is ON).

How To Configure It

A new variable named coredumper has been introduced. One should include it under the [mysqld] section of my.cnf, and it works independently of the older core-file configuration. This new variable can be set either as a bare option (no value specified) or with a value, following a few rules:
- No value: the core dump will be saved under the MySQL datadir and will be named core.
- A path ending with /: the core dump will be saved under the specified directory and will be named core.
- A full path with a filename: the core dump will be saved under the specified directory and will use the specified name.

Every core file will end with the timestamp of the crash instead of the PID, for two main reasons:
- It makes it easier to correlate a core dump with a crash, as MySQL always prints a Zulu/UTC timestamp in the logs when it crashes: 10:02:09 UTC - mysqld got signal 11 ;
- Operators/containers will often be running MySQL (or whatever application they run) as PID 1. If MySQL has crashed multiple times, we don't want a core dump to be overwritten by the last crash.

How To Know If I Am Using libcoredumper

When MySQL attempts to write a core file, it stamps the log saying it will write one. When it does so by delegating the action to the Linux kernel, you always see a message like the one below:

. . . Writing a core file

The above behavior remains the same; however, when MySQL is using libcoredumper to generate the core file, one should instead see a message informing that the library will be responsible for the action:

. . . Writing a core file using lib coredumper

Other Improvements

Apart from libcoredumper, starting from the same 5.7 and 8.0 releases, a stack trace will also:

Print the binary BuildID: this information is very useful for support/development people in case the MySQL binary that crashed is a stripped binary.
Stripped binaries are a technique to remove parts of the binary that are not essential for it to run, making the binary occupy less space on disk and in memory. When computers had memory restrictions, this technique was widely used. Nowadays this no longer poses a limitation on most hardware; however, it is becoming popular once again with containers, where image size matters. Stripping the binary removes the binary's symbol table, which is required to resolve a stack trace and to read the core dump. The BuildID is how we can link things together again.

Print the server version: this information is also useful to have at a glance. Recent versions of MySQL/Percona Server for MySQL have fixes for many known issues, and having this information helps to establish the starting point of an investigation. MySQL only prints the server version when it starts, and by the moment a server crashes, its log may have grown significantly or even been rotated/truncated.

Here is an example of what a crash with a stack trace looks like:

14:23:52 UTC - mysqld got signal 11 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Build ID: 55b4b87f230554777d28c6715557ee9538d80115
Server Version: 8.0.21-12-debug Source distribution
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x46000
/usr/local/mysql/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x55) [0x55943894c280]
/usr/local/mysql/bin/mysqld(handle_fatal_signal+0x2e0) [0x559437790768]
/lib/x86_64-linux-gnu/ [0x7f9d413bcf40]
/lib/x86_64-linux-gnu/ [0x7f9d40858729]
/usr/local/mysql/bin/mysqld(Mysqld_socket_listener::listen_for_connection_event()+0x64) [0x55943777db6a]
/usr/local/mysql/bin/mysqld(Connection_acceptor<Mysqld_socket_listener>::connection_event_loop()+0x30) [0x55943737266e]
/usr/local/mysql/bin/mysqld(mysqld_main(int, char**)+0x30c6) [0x559437365de1]
/usr/local/mysql/bin/mysqld(main+0x20) [0x559437114005]
/lib/x86_64-linux-gnu/ [0x7f9d4076db6b]
/usr/local/mysql/bin/mysqld(_start+0x2a) [0x559437113f2a]

Please help us make Percona Server better by reporting any bugs at
You may download the Percona Server operations manual by visiting
You may find information in the manual which will help you identify the cause of the crash.
Writing a core file using lib coredumper

TL;DR

Libcoredumper serves as an alternative to the current --core-file functionality for generating memory dumps. In case of any crash of MySQL, a core dump is written and can later be processed/read via GDB to understand the circumstances of the crash. Users can enable it by adding the below variable to the [mysqld] section of my.cnf:

[mysqld]
coredumper

Percona Server for MySQL versions starting from 5.7.31 and 8.0.21 include the library by default. Refer to the below documentation pages for more details:

Summary

If you have faced any issue or limitation enabling core dumps before, feel free to test the new versions of Percona Server for MySQL/Percona XtraDB Cluster and use libcoredumper. Also, any feedback is very welcome on how we can improve the troubleshooting of bugs/crashes even further.
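For completeness, a sketch of the path-based forms of the variable described above; the directory used here is illustrative, not a default:

```
[mysqld]
# Bare form: core files land in the datadir, named core.<timestamp>
# coredumper

# Directory form (note the trailing slash): core files go to this
# directory, also named core.<timestamp>
coredumper = /var/log/mysql/
```

The resulting core file can then be opened with GDB against the matching (unstripped, or symbol-resolved via the BuildID) mysqld binary to inspect the backtrace.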