Recover after DROP TABLE, innodb_file_per_table is OFF

UPDATE (01/01/2017): We stopped further development of undrop-for-innodb and do not support its open source versions.

Introduction

Human mistakes are inevitable. A wrong “DROP DATABASE” or “DROP TABLE” can destroy critical data on a MySQL server. Backups would help, but they are not always available. The situation is frightening but not hopeless. In many cases it is possible to recover almost all the data that was in the database or table.
Let’s look at how we can do it. The recovery plan depends on whether InnoDB kept all data in a single ibdata1 file or each table had its own tablespace. In this post we consider the case innodb_file_per_table=OFF. This option means that all tables are stored in a common file, usually located at /var/lib/mysql/ibdata1.

Wrong action – table deletion

For our scenario we will use the test database sakila that is shipped together with the tool.
Suppose we drop the table actor by mistake:

Recover after DROP TABLE from ibdata1

Now the table is gone, but the information contained in the table may still be in the database file. The data remains untouched until InnoDB reuses the free pages. Hurry up and stop MySQL ASAP!
For the recovery we’ll use the TwinDB recovery toolkit. Check out our recent post “Recover InnoDB dictionary” for details on how to download and compile it.
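As a first-aid sketch, assuming a systemd-based server and the default /var/lib/mysql datadir (both are assumptions; adjust to your setup):

```shell
# Stop MySQL so InnoDB cannot reuse the pages freed by the DROP,
# then take a copy of the shared tablespace and work on the copy.
sudo systemctl stop mysql
sudo cp /var/lib/mysql/ibdata1 /recovery/ibdata1
```

The sooner MySQL is stopped, the better the chances that the dropped table's pages have not been overwritten.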

Parse InnoDB tablespace

InnoDB stores all data in B+tree indexes. A table has one clustered index called PRIMARY, where all fields are stored. If the table has secondary keys, each key has its own index. Each index is identified by an index_id.

If we want to recover a table we have to find all pages that belong to particular index_id.

stream_parser reads an InnoDB tablespace and sorts the InnoDB pages by type and by index_id.
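Assuming the toolkit binaries are in the current directory, the invocation is a single command (point it at your copy of ibdata1 if you made one):

```shell
# Parse the shared tablespace; sorted pages land in pages-ibdata1/ .
./stream_parser -f /var/lib/mysql/ibdata1
```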

Data from the database pages is saved by stream_parser to the folder pages-ibdata1:

Now each index_id from the InnoDB tablespace is saved in a separate file. We can use c_parser to fetch records from the pages, but we need to know which index_id corresponds to the table sakila/actor. We can get that information from the dictionary tables SYS_TABLES and SYS_INDEXES.

SYS_TABLES is always stored in the index with index_id 1, which is the file pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page.
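As an aside, the page file name is just the index_id zero-padded to 16 digits, which is easy to reproduce in shell:

```shell
# Derive the page file name from an index_id (1 = SYS_TABLES).
index_id=1
page_file=$(printf 'pages-ibdata1/FIL_PAGE_INDEX/%016d.page' "$index_id")
echo "$page_file"   # pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page
```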
Let’s find the table_id of sakila/actor. If MySQL had enough time to flush the changes to disk, add the -D option, which means “find deleted records”. The dictionary is always in REDUNDANT format, so we specify the -4 option:
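A hedged sketch of this step; the SYS_TABLES definition is assumed to ship in the toolkit’s dictionary/ directory:

```shell
# -4: dictionary is in REDUNDANT format; -D: include deleted records,
# since the DROP deleted the dictionary entry.
./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page \
    -t dictionary/SYS_TABLES.sql | grep 'sakila/actor'
```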

Note the number 158 right after the table name. This is the table_id.

The next thing to do is to find the index_id of the PRIMARY index of the table actor. For this purpose we will fetch records of SYS_INDEXES from the file 0000000000000003.page (this table contains the mapping between table_id and index_id). The structure of SYS_INDEXES is passed with the -t option.
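A sketch of the lookup, filtering by the table_id 158 found above:

```shell
# SYS_INDEXES lives in index_id 3; filter its records by table_id 158.
./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page \
    -t dictionary/SYS_INDEXES.sql | grep 158
```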

As you can see from the output, the necessary index_id is 376. Therefore we will look for the actor data in the file 0000000000000376.page.

The resulting output looks correct, so let’s save the dump to a file. To make loading simpler, c_parser outputs a LOAD DATA INFILE command to stderr.

We will use the default location for these files: dumps/default
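A sketch of the dump step. The row-format option (-4/-5/-6) must match your server, and sakila/actor.sql is assumed to hold the table’s CREATE TABLE statement:

```shell
mkdir -p dumps/default
# Records go to stdout, the matching LOAD DATA INFILE command to stderr.
./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000376.page \
    -t sakila/actor.sql > dumps/default/actor 2> dumps/default/actor_load.sql
```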

And here’s a command to load the table.

Load data back to the database

Now it’s time to recover the data into the database. But before loading the dump, we need to create an empty structure for the table actor:
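For example, assuming the table definition is in sakila/actor.sql:

```shell
# Recreate the empty table from its original CREATE TABLE statement.
mysql sakila < sakila/actor.sql
```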

Now the table actor is created, and we can load the recovered data.
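A sketch of the load, assuming c_parser wrote the LOAD DATA statement to dumps/default/actor_load.sql; whether --local-infile is needed depends on how the statement was generated and on server settings:

```shell
# Replay the generated LOAD DATA INFILE statement.
mysql --local-infile=1 sakila < dumps/default/actor_load.sql
```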

Checking recovered data

And the final step: check the data quality. We will look at the total number of records, preview several records, and calculate a checksum.
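A minimal verification sketch:

```shell
# Count rows, preview a few records, and compare the checksum with the
# value taken before the accidental DROP.
mysql sakila -e "SELECT COUNT(*) FROM actor;
                 SELECT * FROM actor LIMIT 5;
                 CHECKSUM TABLE actor;"
```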

As you can see, the checksum after recovery is 3596356558, which is equal to the checksum taken before the accidental drop of the table. Therefore we can be sure that all the data was recovered correctly.
In the next posts we will look at other recovery cases.

Have a question? Ask the experts!

  • Pierre

    Thanks a lot. It seems to work great except for BLOBs/LONGTEXTs where it complains a lot “#####CannotOpen_./0000000001058183.page;” and those files indeed do exist. I have only 2 files in pages/ibdata1/FIL_PAGE_TYPE_BLOB and FIL_PAGE_INDEX has the exact size of my ibdata file ..
    Is there anything special to do for BLOBs ?

    • Pierre

      These files indeed do not exist (Sorry)

    • BLOBs are special indeed. The InnoDB page size is 16kB while a BLOB value may be larger. If a record is smaller than ~7kB, InnoDB stores all fields of the record in-place. However, if the record is larger than ~7kB, InnoDB stores only the first 768 bytes in-place and the rest goes to external pages.

      c_parser needs to know where it can find external pages. For that use -b option:

      ./c_parser -b pages-ibdata1/FIL_PAGE_TYPE_BLOB <all other options>

    • The problem was a wrong page type in the external pages. It should be FIL_PAGE_TYPE_BLOB, but for some reason it was FIL_PAGE_INDEX. Because of that, stream_parser skipped the external pages.
      The workaround is to use the -i option with c_parser.
      https://twindb.com/how-to-handle-wrong-page-type-in-external-pages/

  • Ederick Colmenares

    Thank you and God for this incredible tool and tutorial. Everything it works perfectly, thank you thank you thank you

  • Ilya Shishkin

    Thank you so much! Just a few comments. Not supported in sql files: CR (r) and spaces in index name (parse error).

  • Raghuram

    Hi, I have accidentally dropped a table in a MySQL database. I have tried the approach proposed by you and it worked fine for me. But columns with datetime formats are showing as “0000-00-00 00:00:00”. Please help me to recover the datetime values.

  • Raghuram

    Hi, I quickly ran the c_parser application through gdb and found that the actual data is available in all of the fields, but it looks like there is a small screw-up in the offset calculation, due to which six bytes of data are merged with the previous field.

    For example in the below snippet “raghur” belongs to Field#5, but it is showing up in Field#4.

    Found a table u_det record: 0x3ef956a5 (offset = 693)

    Field #0 @ 0x3ef956a5: length 6, value: 800000000000

    Field #1 @ 0x3ef956ab: length 6, value: 029300000046

    Field #2 @ 0x3ef956b1: length 7, value: 8C019C00000202

    Processing record 0x3ef956a5 from table ‘u_det’

    PHYSICAL RECORD: n_fields 8; compact format; info bits 0

    Field #3 @ 0x3ef956b8: length 8, value: -9146683981072668306

    Field #4 @ 0x3ef956c0: length 19, value: “ssk@gmail.comraghur”

    Field #5 @ 0x3ef956d3: length 20, value: “amdb@yahoo.comV�V”

    Field #6 @ 0x3ef956e7: length 4, value: “2054-07-18 08:21:41”

    Field #7 @ 0x3ef956eb: length 4, value: “1970-01-01 05:41:20”

    I am not much familiar with internal db format hence I am not able to progress further. Any sort of help is highly appreciated.

    NOTE : I am using mysql version 5.5

  • Raghuram

    Hi I am successfully able to tweek the c parser provided by you and get the records in correct format. Thanks a lot for the tool .I have some more table with the MyISAM type. Please guide me the right way to recover it. Thanks a lot again.

    • The tool supports InnoDB tables only.
      If you dropped a MyISAM table you need to find a tool that can undelete the *.MYD and *.frm files. There’s a high chance the files will be corrupted; try myisamchk to repair them.

  • Michael Holm

    Hi, will this by any chance work with MySQL 5.1?

  • tina louise sadian

    Hi can somebody help me do this on cmd in windows? thank you!

    • The tool supports Linux only.

      On Windows you can use it in a virtual machine.

      Marc T. created an image with the tool and all its prerequisites.
      http://mysql.on.windows.free.fr/

      • tina louise sadian

        Hi Aleksandr, thank you for your quick response on this. 😀

      • tina louise sadian

        Hi Aleksandr, my problem was solved. I’m glad I’d setup another local server to my other pc. After calming myself, I realized it. But I will try your solution sometime and also learn from foolishness. Thank you!

  • anil joshi

    Hi,

    Can i install c_parser & stream_parser on my centos machine & what the steps for
    installing it.

    Its a free tool or paid

    Plz guide me.

    • Anil,

      We’ve automated the recovery for most popular data loss scenarios on https://recovery.twindb.com/ . If your case is not implemented yet, please file a bug with detailed description of what happened to the database, how you lost your data.

      The toolkit is behind the web interface, but we don’t sell it.

  • Ade Indra Saputra

    What if the table was dropped and re-imported several times? Can the previous table be restored as of the date that we want?

    • That happens quite often indeed. Typical case is when wrong SQL dump is executed. So it DROPs, CREATEs and then does a bunch of INSERTs.

      InnoDB is free to reuse pages freed after the DROP, so you might think the data is gone. However sometimes the data from the original table still can be found.

      The best call in cases like that is to open ibdata1 in a hex viewer (or less, whatever) and look for old table.

      If you see records from the original table – it’s recoverable. If you don’t – then not.

  • chengdh

    My database encoding is utf8_unicode_ci. I followed the steps to restore one table, but the data in some fields is garbled. Where is the problem? The original table data contains some Chinese.

  • chengdh

    when I run “./c_parser -V -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000014345.page -t il_yanzhao_db/users.sql | head -200”
    the utf8_unicode_ci column outputs “all_fields[1].name = email, type: 2, size: 0”,
    is it the size problem?

  • Evgeniy Pol

    HI. PLEASE!! HELP ME!
    What does this mean?
    root@ecoweb:/backups/undrop-for-innodb-master# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000001204.page -t MIBASE/LISTADOS_CONTENIDOS2 | head -5
    Line 12: syntax error at ‘(‘
    11: primary key (ID),
    12: unique (LISTADO, FECHA_CREACION))
    Failed to parse table structure

  • Evgeniy Pol

    Here is the CREATE statement that I wrote:
    cat LISTADOS_CONTENDOS2
    create table LISTADOS_CONTENIDOS2
    (ID bigint not null auto_increment,
    CONTENIDO LONGTEXT not null,
    FECHA_ALTA datetime not null,
    FECHA_CREACION datetime not null,
    FECHA_MODIF datetime not null,
    NOMBRE varchar(255) not null,
    USUARIO_ALTA varchar(255) not null,
    USUARIO_MODIF varchar(255) not null,
    LISTADO bigint not null,
    primary key (ID),
    unique (LISTADO, FECHA_CREACION))
    engine=InnoDB

  • Leo Cavalcante

    You guys just saved my life… and my job 😛

  • Holger Kral (Kombinat)

    Hi there, This is a great tool. Thank you very much for it! It really saved my day (and the day of a bunch of other people)!

  • Ahmad Nabrih

    this is the command I use:
    ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000300.page -t sakila/tablename.sql > dumps/default/actor 2> dumps/default/tablename_load.sql

    I got lines like this:
    — STATUS {“records_expected”: 1911, “records_dumped”: 1911, “records_lost”: false} STATUS END
    — Page id: 30869, Found records: 4, Lost records: NO, Leaf page: YES

    But when I look in tablename_load.sql, there is nothing there..
    what is wrong with my command?

    • is it full output? Also, is it stdout or stderr?

      • Ahmad Nabrih

        I created my table .sql inside the sakila directory, so the result is 1911.
        And I found that my problem was in “gt;”..
        My command above was the second command; I forgot to check the result in dumps/default after running it, because in the first command I had not yet changed “gt;” to “>”. After I commented here, I checked the directory dumps/default and found it..
        And now I have got my data back 🙂 Thanks and sorry for my fault