Backup and Data Recovery Service for MySQL

MySQL Backup Service

Data is vital to business operations, and the database is therefore a core part of enterprise infrastructure. We believe the database must be protected at all times. As data grows, so does the task of managing it. Data loss has substantial consequences and can lead to large-scale interruptions of your business.

There are many great backup tools on the market, but most of them leave the user alone with problems like scheduling, retention policy, encryption, verification, and monitoring. Many users solve these problems with home-baked, untested scripts and hope for the best. That's not good enough!

TwinDB Backup Service is an end-to-end solution that takes care of all aspects of a successful disaster recovery strategy. Do you know how long a restore will take in case of an incident? How much data will be lost? TwinDB Backup Service not only continuously verifies backup copies, but also notifies our engineers if the RTO (Recovery Time Objective) exceeds the SLA (Service Level Agreement). We monitor your backups 24 hours a day, seven days a week, 365 days a year (and the leap day, too).

TwinDB Backup Tool is installed on the database server and supports MySQL versions 5.5 and above on 64-bit Linux operating systems. It is an efficient and robust tool with powerful capabilities like encryption, compression, monitoring, verification, and restore.

Backups are taken regularly, as per the configured schedule. The tool takes online, non-blocking backups with Percona XtraBackup, then compresses, encrypts, and streams each backup copy to the pre-configured storage.
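Conceptually, the compress-and-stream step is a chunked pipeline, so the whole copy never has to fit in memory. Here is a minimal Python sketch of that idea using the stdlib gzip module; the encryption and upload stages are placeholders, and this is not the tool's actual implementation:

```python
import gzip
import io


def stream_backup(source, chunk_size=1 << 20):
    """Compress a backup stream chunk by chunk.

    In a real pipeline each compressed chunk would also be
    encrypted and then uploaded to the destination storage.
    """
    out = io.BytesIO()
    with gzip.GzipFile(fileobj=out, mode="wb") as gz:
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            gz.write(chunk)  # encrypt-and-upload would hook in here
    return out.getvalue()


# Round-trip check with dummy bytes standing in for an XtraBackup stream
raw = b"ibdata1-page" * 10000
compressed = stream_backup(io.BytesIO(raw))
assert gzip.decompress(compressed) == raw
```

Streaming in fixed-size chunks is what lets the tool handle databases far larger than available RAM.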

Within TwinDB Backup Service, we treat each problem seriously and solve it following industry best practices.

Differential backups

It is quite impractical to take a full backup copy every time: full backups require more storage and network bandwidth, and take more time. Incremental backup support in Percona XtraBackup is a compelling feature; however, it scares off some users because incremental backups are harder to manage. Besides, incremental backups are less reliable: to recover the database, not only the full copy must be valid, but also all subsequent incremental ones. We decided to go with differential backups: a good balance between storage and network requirements, and a more reliable option. A Wikipedia article explains differential backups in great detail and compares them with incremental backups.
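The reliability difference is easy to see if you count which copies must all be intact to restore to the latest point. A small illustrative sketch (the copy naming is hypothetical, not the tool's actual catalog format):

```python
def restore_chain(copies, scheme):
    """Return the backup copies that must ALL be valid to restore
    to the latest point. `copies` is a chronological list starting
    with the full copy, e.g. ["full", "day1", "day2", ...].
    """
    full = copies[0]
    if scheme == "incremental":
        # every increment since the full copy is required
        return copies
    if scheme == "differential":
        # each differential is taken against the full copy,
        # so only the full and the latest differential matter
        return [full, copies[-1]] if len(copies) > 1 else [full]
    raise ValueError(scheme)


week = ["full", "day1", "day2", "day3"]
assert restore_chain(week, "incremental") == week                # 4 copies must be valid
assert restore_chain(week, "differential") == ["full", "day3"]   # only 2 must be valid
```

With incremental backups, one corrupt copy anywhere in the chain loses everything after it; with differentials, only two copies ever need to be valid.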

Flexible Backup Storage

We support several remote storage options. Amazon S3 is the most popular one: it is reliable and inexpensive. If, for whatever reason, you would like to store backup copies on your own premises, we support a remote SSH server as well.

There is ongoing work to implement more storage options like Azure.

Retention policy

We tag backup copies by the interval at which they are taken: hourly, daily, weekly, monthly, and yearly. The retention policy defines how many copies of each type to keep. That provides a long historical view with controlled storage requirements. At the same time, it is easier to comply with existing regulations that may dictate how long a company is allowed to retain user data.
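As an illustration, a pruning pass under such a policy boils down to keeping the newest N copies of each interval tag and deleting the rest. A hypothetical sketch (the data layout is invented for illustration, not the tool's actual format):

```python
def prune(copies, policy):
    """Return the backup copies to delete under a retention policy.

    `copies` is a list of (timestamp, interval) tuples and
    `policy` maps an interval tag to how many copies to keep.
    """
    keep = []
    for interval, limit in policy.items():
        tagged = sorted((c for c in copies if c[1] == interval), reverse=True)
        keep.extend(tagged[:limit])  # newest `limit` copies survive
    return sorted(set(copies) - set(keep))


copies = [(h, "hourly") for h in range(1, 7)] + [(d, "daily") for d in (10, 20, 30)]
policy = {"hourly": 3, "daily": 2}
assert prune(copies, policy) == [
    (1, "hourly"), (2, "hourly"), (3, "hourly"), (10, "daily"),
]
```

Because each interval is pruned independently, you can keep, say, 24 hourly copies, 7 daily, 4 weekly, and 12 monthly without the storage footprint growing without bound.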

Encryption at rest

Whether you use the cloud or your own storage, it is absolutely necessary that the data remains secure at all times. Unencrypted backup copies are a huge problem. Who can access your data in the cloud? If it is local storage, how do you make sure the data is safe when a server is decommissioned or sent for repair? Just imagine what a huge risk non-encrypted data poses!

That's why we implemented backup encryption with GPG. The backup copies on any storage are encrypted with a strong cipher suite.

Included restore command

Do you remember how to restore an XtraBackup incremental copy? Fortunately, you don't have to, because TwinDB Backup Tool can correctly restore a backup copy itself. You just pick a backup copy and tell the tool to restore it. The tool is smart enough to figure out whether it is a full or incremental copy and restore either of them correctly. That greatly simplifies the restore process and reduces restore time under the immense pressure of a situation where the backups are actually needed.
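In spirit, the decision comes down to inspecting the copy's metadata and choosing the right sequence of steps. A hypothetical sketch (the metadata fields and step names are invented for illustration; the actual tool works against XtraBackup directly):

```python
def restore_plan(copy):
    """Build the ordered steps to restore a copy based on its type.

    `copy` is a dict with hypothetical metadata; a non-full copy
    needs its base full copy fetched and prepared first.
    """
    steps = ["download", "decrypt", "decompress"]
    if copy["type"] == "full":
        steps += ["prepare full copy"]
    else:
        steps += ["fetch base full copy", "prepare full copy", "apply increment"]
    return steps + ["copy back to datadir"]


assert restore_plan({"type": "full"}) == [
    "download", "decrypt", "decompress", "prepare full copy", "copy back to datadir",
]
assert "apply increment" in restore_plan({"type": "differential"})
```

The point is that the operator never has to remember this sequence, let alone execute it by hand at 3 a.m.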

Recent local copy

Sometimes a local backup copy helps to reduce restore time: for example, when the database is dropped by human mistake but the server itself is fine and operational. Restoring from a local copy saves the time of the network transfer.
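The savings are easy to estimate: downloading an N-gigabyte copy takes roughly N × 8 / bandwidth seconds, and a local copy skips that entirely. A back-of-the-envelope calculation (ignoring protocol overhead; the numbers are illustrative):

```python
def transfer_seconds(size_gb, bandwidth_gbit_s):
    """Time to pull a backup copy of `size_gb` gigabytes
    over a link of `bandwidth_gbit_s` gigabits per second."""
    return size_gb * 8 / bandwidth_gbit_s


# A 100 GB copy over a 1 Gbit/s link takes ~13 minutes just to download
assert round(transfer_seconds(100, 1) / 60) == 13
```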

Backup verification

If you don't verify backups, you don't have them. Many of our data recovery customers believed their backup jobs ran regularly, but due to a silent error the backups were either missing or unusable. That is a big frustration; to prevent unpleasant surprises, you must always verify backups.


Backup monitoring

Last but definitely not least is the problem of backup monitoring. We integrated TwinDB Backup Tool with Datadog and PagerDuty. Every successful backup job reports its backup time to Datadog, and every successful verification job reports its restore time. Using these metrics we define a Disaster Recovery SLA and alert if any of them exceeds it: backup time, restore time, or recovery point objective. Our on-call engineer responds to alerts and fixes the problem ASAP, minimizing the risk of unprotected data.
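The alerting logic itself is simple: compare each reported metric against its SLA threshold. A sketch of that check (metric names and values are hypothetical; the real integration goes through Datadog monitors and PagerDuty):

```python
def sla_violations(metrics, sla):
    """Return the names of metrics that exceed their SLA threshold
    and would therefore trigger an alert."""
    return [name for name, value in metrics.items()
            if name in sla and value > sla[name]]


sla = {"backup_seconds": 3600, "restore_seconds": 1800, "rpo_seconds": 7200}
metrics = {"backup_seconds": 1200, "restore_seconds": 2400, "rpo_seconds": 3600}
assert sla_violations(metrics, sla) == ["restore_seconds"]
```

Reporting the metric on every successful run also means a *missing* data point is itself a signal: a backup job that silently stops running shows up as an absent metric.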

Ready to protect your data?

Contact Support

MySQL Data Recovery Service

You understand the importance of backups. Maybe you even configured a regular backup job, or maybe you were a victim of bad luck. If things can go wrong, they will go wrong.

Worry not, we can help.

We have developed tools to salvage a database after it has been dropped, deleted, or corrupted. If you act quickly, it is possible to recover the database after those incidents. We possess a decade of unique experience and skills in data recovery. Aleksandr Kuzminsky is well known as the data recovery man in the MySQL community.

We include the Data Recovery Service free of charge in our Backup Service annual plan.

Supported Failure Scenarios for MySQL Data Recovery Service

Even if you don't have backups, we can salvage the database after the following accidents:

  • Corrupt InnoDB tablespace
  • Unrecoverable XtraBackup copies
  • Wrong UPDATE
  • Corrupt file system
  • Deleted ibdata1 or *.ibd files

In short, if the data is still on media we can get it back.

Immediate Actions for MySQL Data Recovery Service

Time is precious when it comes to data recovery. At any moment, MySQL or the operating system can overwrite your data. The first steps to take depend on the failure scenario.

Failure: DROP DATABASE or DROP TABLE, innodb_file_per_table is OFF

Immediate actions:


  • Kill mysqld_safe
  • Kill mysqld process
  • Kill MySQL as advised above
  • Check where MySQL datadir is mounted
  • Remount the partition read-only
Prerequisites for MySQL Data Recovery Service

For a successful data recovery we need:
1. Table structure. It can be either CREATE TABLE statements or *.frm files. In some cases we can recover the structure from the InnoDB dictionary.
2. Media with the data. The file, disk image, etc. where the data was stored.

MySQL Data Recovery Service Warranty

Although we usually achieve a pretty high recovery rate, we cannot guarantee successful recovery in a contract. However, we will do our best to get your data back.


