2017-03-12

Dump database

Command

DUMP DATABASE myDB TO myDumpDevice

is the simplest form of creating a backup; however, a lot of options can be added to that simple command. Let's take a look at some examples:

DUMP DATABASE ALFA TO '/home/dumps/01_normal.dmp'

Here a database named ALFA is being backed up into the file named 01_normal.dmp. What will happen if there is already such a file on disk? Well - it will simply be overwritten. So it is one backup per file, forever.
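If you want to keep more than one generation of the backup, a simple workaround is to put a date (or timestamp) in the file name, so every run writes a new file instead of replacing the previous one. Just a sketch, with an example naming scheme:

DUMP DATABASE ALFA TO '/home/dumps/ALFA_20170312.dmp'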

DUMP DATABASE ALFA TO '/home/dumps/01_stripped_01.dmp'
STRIPE ON '/home/dumps/01_stripped_02.dmp'

Here the DBA expected the backup to be a really big file, so decided to dump the database into two files. Please note that this is still a single backup: part of the database's pages is saved in the first file and the rest in the second. This is not a 'mirroring' of the backup file - there is no way to restore the database if you lose one of the files. The purpose of this syntax is simply to have smaller files, and if those files are created on different physical disks, performance may also be better. Smaller files are easier to manage, and the file systems where the files are written can be smaller. Each stripe can also be created on a separate backup server (so also on a separate machine).
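The same syntax extends to more stripes, and the load command must then list all of them. A sketch with example paths on three different disks:

DUMP DATABASE ALFA TO '/disk1/dumps/01_striped_01.dmp'
STRIPE ON '/disk2/dumps/01_striped_02.dmp'
STRIPE ON '/disk3/dumps/01_striped_03.dmp'

LOAD DATABASE ALFA FROM '/disk1/dumps/01_striped_01.dmp'
STRIPE ON '/disk2/dumps/01_striped_02.dmp'
STRIPE ON '/disk3/dumps/01_striped_03.dmp'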

DUMP DATABASE ALFA TO '/home/dumps/01_normal.dmp' WITH COMPRESSION=9

WITH COMPRESSION=x lets you set the compression level. Higher compression may cause higher CPU utilization, but the file created will be smaller and disk operations will take less time. 9 is the highest compression level and 0 is the lowest (no compression). Apart from values in the range 0-9, it is also possible to use 100 or 101. 100 means the compression level is chosen automatically with CPU cost as the deciding factor, so the backup file is compressed in the fastest way. 101 means the most space-efficient compression should be used.
There is also a similar compress:: option, but it is deprecated now and should not be used. This syntax is still accepted in newer versions of Sybase as a backward compatibility option. It is also less convenient because, while loading the backup back from the device, the compress:: prefix must be specified again, whereas if the backup was created with the COMPRESSION= option, no information about compression has to be given when the database is restored. If you need more specific information, see the System Administration Guide on page 316.
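A short sketch of the difference (the paths are only examples): with the deprecated prefix syntax the compression level is embedded in the device name and the prefix must be repeated on load, while with COMPRESSION= the load stays plain.

-- deprecated prefix syntax: level embedded in the device name
DUMP DATABASE ALFA TO 'compress::9::/home/dumps/01_old_style.dmp'
LOAD DATABASE ALFA FROM 'compress::/home/dumps/01_old_style.dmp'

-- newer syntax: nothing extra is needed on load
DUMP DATABASE ALFA TO '/home/dumps/01_new_style.dmp' WITH COMPRESSION=9
LOAD DATABASE ALFA FROM '/home/dumps/01_new_style.dmp'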

DUMP DATABASE ALFA TO '/home/dumps/01_normal.dmp' WITH PASSWD='secret'

Not everyone is aware that data can be stolen not only by hackers cracking the application. An easy way to steal data is to get access to a backup file. Adding the PASSWD='...' parameter makes it much harder to restore the database on an unauthorized system. On the other hand, password-protected backups are also riskier if the administrator forgets the secret phrase.
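The same password then has to be supplied when the dump is loaded back. A minimal sketch, reusing the example file and password from above:

LOAD DATABASE ALFA FROM '/home/dumps/01_normal.dmp' WITH PASSWD='secret'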

DUMP DATABASE ALFA TO '/home/dumps/01_normal.dmp' WITH COMPRESSION=8, VERIFY=crc, VERIFY=read_after_write

If the database dump is compressed, one or two VERIFY options can be used:
  • read_after_write - causes each page that has been written to disk to be read back and verified that its content is the same. This way you can make sure there are no bad pages on disk that would return different data than was written a moment before
  • crc - causes an additional checksum to be computed as the pages are dumped and saved on disk together with the data. Later, during the restore, the LOAD command can also use the VERIFY parameter, and if any inconsistencies are found, an error is raised. It is also possible to run the load with the VERIFYONLY parameter, which does not restore the database but simply validates that the content of the backup can be read from the file and is correct (see the sketch after this list)
There can be problems when a newer system creates a backup with CRC and you try to restore the database on an older system that does not support it. VERIFY also works only when COMPRESSION was used. If you mix up those parameters, the engine will silently ignore the incorrect options and will not report an error. Be careful!
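A sketch of such a verification-only load for the dump created above. I am assuming the VERIFY ONLY = crc form here; the exact spelling of the option may depend on the ASE version, so check the reference manual for your release:

LOAD DATABASE ALFA FROM '/home/dumps/01_normal.dmp' WITH VERIFY ONLY = crc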

DUMP DATABASE ALFA TO '/home/dumps/02_remote.dmp' AT SAP01_BS

This command allows you to redirect the backup to a remote backup server (here SAP01_BS). Remember that each stripe can be sent to a different remote backup server.
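For example, the stripes could be split across two backup servers. A sketch assuming a second, hypothetical backup server named SAP02_BS and example paths:

DUMP DATABASE ALFA TO '/backup1/02_remote_01.dmp' AT SAP01_BS
STRIPE ON '/backup2/02_remote_02.dmp' AT SAP02_BS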

Tape backups

There are also some parameters specific to backups written to tape (a sketch of a tape dump follows the list):
  • dumpvolume - allows you to specify the name of the tape label. Tape libraries allow tapes to be marked with labels, sometimes even physically with barcodes. It is not a bad idea to have backups on labeled tapes
  • dismount/nodismount - determines whether the tape is dismounted after the backup operation
  • unload/nounload - determines whether the tape is rewound after the backup. If multiple databases are dumped onto the same tape, only the last dump command should specify unload
  • init/noinit - init causes the tape to be initialized (all previously created backups are removed). If you wish to append the next backup to the existing ones, specify the noinit option, which is the default
  • notify - if the tape needs to be changed, someone has to be notified; by default it is the operator console, but this can be changed to the client who initiated the operation. Acceptable values for this parameter are client or operator_console
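A minimal sketch combining a few of these options; the tape device path and the volume name are only examples:

DUMP DATABASE ALFA TO '/dev/nst0'
WITH DUMPVOLUME=WEEK01, NOINIT, NOUNLOAD, NOTIFY=operator_console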
