
2017-09-03

Making external backups: QUIESCE DATABASE

One of the more advanced backup options in Sybase is support for making database dumps with external tools. In principle this should not be complicated: Sybase databases are, at the end of the day, just files on disk! However:

  • When you copy the database files, all of them need to come from exactly the same point in time, and it is very complicated, if not impossible, to copy hundreds of gigabytes in one moment
  • All transactions should be held while the external backup is running
That is why Sybase has a command that freezes all transactions, so that the files can easily be copied with an external tool. While the freeze is active, data can still be read; however, if a session starts a transaction, that transaction hangs until the freeze is released. Unfortunately, from that moment on, all read operations also hang, waiting for the freeze to end.

Take a look at this example:

QUIESCE DATABASE Maintenance1 HOLD MyDb
GO
USE MyDb
GO
SELECT * FROM TEST
GO
INSERT TEST VALUES('A new value')

Here the administrator plans to make a copy of the MyDb files, so he initiates quiescing of that database. Such an operation needs a name, here Maintenance1. After this operation, the INSERT stops working: the first SELECT executes with no additional delay, but the INSERT hangs.

At the same time, similar activity can be initiated in a second session:

USE MyDb
GO
SELECT * FROM TEST
GO

In this example the SELECT hangs as well! That is caused by the still pending INSERT from the first session.


However, when the maintenance is finished with a command like:

QUIESCE DATABASE Maintenance1 RELEASE
GO

all the other frozen INSERTs and SELECTs are executed in the correct order.



The copied files can later be taken to another server and put into the correct locations. When the dataserver is then started with the -q option, all quiesced databases are treated as "in recovery", allowing you to:

  • restore them, or
  • leave them "in recovery" and apply additional log dumps over time, keeping the second server up to date.
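The whole cycle can be sketched as below. This is only a sketch under assumptions: the device file names and the target directory are hypothetical, and the FOR EXTERNAL DUMP clause is the variant intended for copies that will later be used on another server.

QUIESCE DATABASE Maintenance1 HOLD MyDb FOR EXTERNAL DUMP
GO
-- at the operating system level, outside of Sybase (hypothetical paths):
--   cp /opt/sap/databases/MyDbDataDev.dat /backup/MyDbDataDev.dat
--   cp /opt/sap/databases/MyDbLogDev.dat  /backup/MyDbLogDev.dat
QUIESCE DATABASE Maintenance1 RELEASE
GO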




2017-08-28

Sybase cumulative/incremental dump and load

There is an option named 'allow incremental dumps', not set by default, that allows performing incremental backups of a database.

An incremental backup contains only the data pages modified since the last full database backup. This allows building more compact recovery scenarios: for a huge database where users usually don't perform large updates, it may be easier to perform a full backup once a week and an incremental backup every day. If required, transaction dumps can be taken as usual.

Let's examine an example of an incremental backup. In the code below we create the devices and the database, enable the option allowing incremental dumps, and perform one full backup, two incremental backups and finally one log dump:

USE master
GO

DISK INIT NAME ='myDevice_A', 
          PHYSNAME = '/opt/sap/databases/myDevice_A.dat', 
          SIZE=10240
GO

DISK INIT NAME ='myDevice_B', 
          PHYSNAME = '/opt/sap/databases/myDevice_B.dat', 
          SIZE=2560, 
          SKIP_ALLOC = true
GO

CREATE DATABASE myDB
ON myDevice_A = '20M'
LOG ON myDevice_B = '5M'
GO

sp_dboption 'myDB','allow incremental dumps', 'true'
GO

USE myDB
GO

CREATE TABLE T1(id INT)
GO

INSERT T1 VALUES (1)
GO

DUMP DATABASE myDB TO '/home/dumps/01_myDB_full.bak' 
GO

INSERT T1 VALUES(2)
GO

DUMP DATABASE myDB CUMULATIVE TO '/home/dumps/02_myDB_cumulative.bak'
GO

INSERT T1 VALUES(3)
GO

DUMP DATABASE myDB CUMULATIVE TO '/home/dumps/03_myDB_cumulative.bak'
GO

INSERT T1 VALUES(4)
GO

DUMP TRAN myDB TO '/home/dumps/04_myDB_tran.bak'
GO

USE master
GO

Now we will try different restore scenarios. The first one just restores the full backup; no surprise, only one record is visible in the restored table.
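The scenarios can be sketched as follows. This is a sketch, assuming the dump files created above and a server where myDB is not in use during the load (note that the CUMULATIVE keyword is also used on the LOAD side):

-- Scenario 1: full backup only - the restored table contains just the value 1
LOAD DATABASE myDB FROM '/home/dumps/01_myDB_full.bak'
GO
ONLINE DATABASE myDB
GO

-- Scenario 2: full backup plus the latest cumulative dump - values 1 to 3
LOAD DATABASE myDB FROM '/home/dumps/01_myDB_full.bak'
GO
LOAD DATABASE myDB CUMULATIVE FROM '/home/dumps/03_myDB_cumulative.bak'
GO
ONLINE DATABASE myDB
GO

Only the latest cumulative dump is needed, since each cumulative dump already contains all changes made since the full backup.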

2017-07-31

Log backup after catastrophic failure

The theory says that a database admin should always have up-to-date backups... unfortunately, this is not always the case. You may have a full database backup, you may perform transaction backups every 3 hours, you may even have a backup and recovery plan, but... Sybase stops working due to some hardware failure, you cannot start it, and the business waits and waits...

There is a utility called sybdumptran that allows taking a transaction log backup despite the non-working Sybase server.

Take a look at the following SQL code:

DISK INIT NAME ='MyDbDataDev', 
     PHYSNAME = '/opt/sap/databases/MyDbDataDev.dat', 
     SIZE=10240
GO
DISK INIT NAME ='MyDbLogDev', 
     PHYSNAME = '/opt/sap/databases/MyDbLogDev.dat', 
     SIZE=2560
GO
CREATE DATABASE MyDb
ON MyDbDataDev = '20M'
LOG ON MyDbLogDev = '5M'
GO
DUMP DATABASE MyDb TO '/home/dumps/MyDb_full.dmp' WITH INIT
GO
USE MyDb
GO
CREATE TABLE TEST(info char(30))
GO
INSERT INTO TEST VALUES('Before the 1st log dump')
GO
DUMP TRAN MyDb TO '/home/dumps/MyDb_log_1.dmp'
GO
INSERT INTO TEST VALUES('Never dumped')
GO
SHUTDOWN

As you can see, a new database on new devices has been created. Next we took a full backup, then a user inserted one record, then the transaction log was dumped, then the user inserted another record, and then something went wrong and the server stopped! Now you will see how to take a log dump despite the stopped server:

2017-04-23

Recovering failed master database

As always, to recover a server, you need to have a backup!

USE master
GO
DUMP DATABASE master to '/home/dumps/master.dmp'
GO
SHUTDOWN

Here we have additionally stopped the server, so we can damage it a little... Below you can check where the master device is located and how to change its name:

[sap@SAPSybase install]$ ls
core.jsa_20161016_210528.1147  core.jsa_20170219_172003.1157  core.jsa_20170309_224222.1180  RUN_SAP01_BS       SAP01.log   veritas
core.jsa_20170219_121910.1154  core.jsa_20170219_173024.1170  core.jsa_20170327_022202.1180  SAP01_BS.log       showserver
core.jsa_20170219_170522.1147  core.jsa_20170219_180321.1181  RUN_SAP01                      SAP01_JSAGENT.log  sybhauser
[sap@SAPSybase install]$ more RUN_SAP01
#!/bin/sh
#
# SAP ASE page size (KB):       4k
# Master device path:   /opt/sap/data/master.dat
# Error log path:       /opt/sap/ASE-16_0/install/SAP01.log
# Configuration file path:      /opt/sap/ASE-16_0/SAP01.cfg
# Directory for shared memory files:    /opt/sap/ASE-16_0
# License properties file:      /opt/sap/ASE-16_0/sysam/SAP01.properties
# Interfaces file directory:    /opt/sap
# SAP Adaptive Server Enterprise name:  SAP01
#
/opt/sap/ASE-16_0/bin/dataserver \
-d/opt/sap/data/master.dat \
-e/opt/sap/ASE-16_0/install/SAP01.log \
-c/opt/sap/ASE-16_0/SAP01.cfg \
-M/opt/sap/ASE-16_0 \
-N/opt/sap/ASE-16_0/sysam/SAP01.properties \
-i/opt/sap \
-sSAP01 \

[sap@SAPSybase install]$ ls /opt/sap/data
empty  master.dat  sybmgmtdb.dat  sybsysdb.dat  sysprocs.dat  tempdbdev.dat
[sap@SAPSybase install]$ mv /opt/sap/data/master.dat /opt/sap/data/master.dat.org
[sap@SAPSybase install]$ ls /opt/sap/data
empty  master.dat.org  sybmgmtdb.dat  sybsysdb.dat  sysprocs.dat  tempdbdev.dat

Of course now Sybase cannot start:

Restore a database from damaged media

In the following example you will see how to recover from a media failure. Let's assume we have a database named A1 with one data file and one log file. The files are located on different devices (in our case, two separate disk files).

USE A1
GO

SELECT * FROM TestTran
GO

DUMP DATABASE A1 TO '/home/dumps/A1_Full.dmp'
GO

INSERT INTO TestTran (SomeText) VALUES ('The transaction that was not backed up')
GO

SHUTDOWN

The table TestTran already contained 3 rows. Here an additional row has been added, but it was never dumped, neither to a full database dump nor to a log dump. After that the server was stopped. This allows us to perform some destructive activities on the database file; in a normal situation this could be a disk failure.

Now, in the operating system, let's rename the data file of the database. Afterwards the database device files look like this:

[root@SAPSybase]# ll /opt/sap/databases/a1
total 67584
-rw-r-----. 1 sap sap 62914560 Apr 23 16:49 a1.old.mdf
-rw-r-----. 1 sap sap  6291456 Apr 23 16:48 a1.trn

After this damage Sybase can be started again:

cd /opt/sap/ASE-16_0/install
../bin/startserver -f ./RUN_SAP01

Already during startup, and also in the error log, the following error is displayed:

Sybase ASE dump history

It is good to have backups, but it is also good to know which backups you have! By default the dump history is not enabled in Sybase, so immediately execute:

sp_configure 'enable dump history', 1

This is an online operation, so no restart is required. With the option on, after taking some backups check the following query:

sp_dump_history @operation = 'list', @database_name = 'ALFA'

The output can be similar to:


After enabling the history, consider also having a job that purges old backup history records; for this, @operation should equal 'purge'. The history is saved in a file. The path to the file is returned by @operation = 'listfiles'. It is just a text file that can also be displayed using operating system commands:
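A purge could be sketched like this; the @until_time value is hypothetical, and the exact parameter names should be checked against your ASE version:

-- list the file(s) where the dump history is stored
sp_dump_history @operation = 'listfiles'
GO
-- purge the history records older than the given date
sp_dump_history @operation = 'purge', @until_time = 'Jan 1, 2017'
GO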

[root@SAPSybase ASE-16_0]# cd /opt/sap/ASE-16_0
[root@SAPSybase ASE-16_0]# more dumphist

Result is like:




Loading transaction dumps. Point in time restore


The load operation for log files is not as boring as the dump operation, because:

  • during the load operation, the dump files need to be loaded in the correct sequence
  • the database to be loaded needs to be created first, and its size must be big enough to accommodate the backups being restored
  • the restore can stop at a date and time chosen by you
  • the last command during the restore is to ONLINE the database

Let's take a look at the following example:

USE ALFA
GO

CREATE TABLE TestTran
(
   id INT IDENTITY PRIMARY KEY,
   SomeText NVARCHAR(300),
   Date DATETIME DEFAULT GETDATE()
)
GO

DUMP DATABASE ALFA TO '/home/dumps/alfa_full.dmp' WITH INIT
GO

INSERT INTO TestTran (SomeText) VALUES ('This is the first transaction')
GO
DUMP TRAN ALFA TO '/home/dumps/alfa_log1.dmp'
GO
SELECT * FROM TestTran
GO

INSERT INTO TestTran (SomeText) VALUES ('This is the second transaction')
GO
--wait a moment, remember the current time (in between the inserts)
SELECT GETDATE()
GO
INSERT INTO TestTran (SomeText) VALUES ('This is the third transaction')
GO
DUMP TRAN ALFA to '/home/dumps/alfa_log2.dmp'
GO

SELECT * FROM TestTran
GO
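Using the timestamp noted between the inserts, a point-in-time restore could be sketched like this. The UNTIL_TIME value below is hypothetical; substitute the one returned by SELECT GETDATE() above:

LOAD DATABASE ALFA FROM '/home/dumps/alfa_full.dmp'
GO
LOAD TRAN ALFA FROM '/home/dumps/alfa_log1.dmp'
GO
-- stop between the second and the third INSERT
LOAD TRAN ALFA FROM '/home/dumps/alfa_log2.dmp'
WITH UNTIL_TIME = 'Apr 17, 2017 10:32:15:000PM'
GO
ONLINE DATABASE ALFA
GO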


2017-04-17

DUMP TRANSACTION - command and options

All transactions, before being saved in the database, first need to be written to the transaction log. Of course, with time the transaction log would fill up, so old, closed transactions need to be removed from it. To do this you can choose one of the following solutions:

  • Set the database option 'trunc log on chkpt'. This is good on test systems, but not in production. Why? Because on test systems we don't care about the data itself, while in production we want the possibility to restore the database to any point in time, so all transactions need to be kept somewhere. With 'trunc log on chkpt' set, when Sybase saves data in the database (a checkpoint finishes), all transactions already saved and closed by users can be, and are, removed from the log. The log can then fill up only if someone starts a really long or huge transaction.
  • From time to time perform a log dump, and monitor the space in the log.

The syntax for log backup is:

dump tran <db_name> to <device_name>
[, stripe on <device_name>...]
[with <options>]
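For example, a log dump split into two stripes with compression might look like this (the file names are hypothetical):

DUMP TRAN ALFA TO '/home/dumps/alfa_log_s1.dmp'
STRIPE ON '/home/dumps/alfa_log_s2.dmp'
WITH COMPRESSION = 1
GO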


This command usually takes all the transactions already committed by users and saves them on the device. As the data is then stored elsewhere, it can be removed from the transaction log, making space for new transactions. If the transaction log dump is not executed in time, the log can become full and all transactions may be suspended or rolled back, depending on the database option named 'abort tran on log full', which was already described in Sybase log space exhausted, transactions frozen - what to do?

Let's take a closer look at the options that can be added to the DUMP TRANSACTION command.

dump tran db_name with truncate_only


This command causes the log to be truncated without actually being backed up! Use it only if you don't care about the data. You can also try this command when you have a problem with a full log in the database. After truncating the log, a checkpoint record is written to the log, so if the log is completely full, even this command may fail. You don't need to specify a device name, as no backup will be written. Remember to perform a new full backup of the database after this operation, as the backup chain is broken at this moment.
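An even more drastic variant, mentioned here only as a last resort, is WITH no_log, which truncates the log without logging the truncation itself. A sketch, with the database name and backup path purely illustrative:

-- last resort when even truncate_only fails on a completely full log
DUMP TRAN MyDb WITH no_log
GO
-- the backup chain is broken now - take a new full backup immediately
DUMP DATABASE MyDb TO '/home/dumps/MyDb_full_after_no_log.dmp'
GO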

2017-03-26

Sybase ASE database option "abort tran on log full" example

There is a database option named "abort tran on log full". When this option is set to true and the transaction log segment fills up, the transaction that cannot write to the transaction log is aborted. When the option is not set, the transaction is suspended and waits until space in the log becomes available again.

This setting can be displayed using the following command:

sp_helpdb 'Beta'

The result may be similar to:


To change it, issue:

master..sp_dboption 'Beta', "abort tran on log full", false

Now the options should change to:



The error message recorded in the error log when the log fills up looks like this:

00:0006:00000:00033:2017/03/26 09:51:53.20 server  Space available in segment 'logsegment' has fallen critically low in database 'Beta'. All future modifications to this database will be aborted until the transaction log is successfully dumped and space becomes available.

That differs from the message previously recorded in response to the same event when "abort tran on log full" was set to false:

00:0006:00000:00007:2017/03/19 12:49:44.92 server  1 task(s) are sleeping waiting for space to become available in the log segment for database Beta.

Last-chance threshold

As discussed in the previous article, a long transaction may cause the log to fill up. Then, depending on the database settings, the transaction causing the problem can be aborted (terminated and rolled back), or the entire database can be frozen, with all transactions waiting until some free space is available in the log again. With this in mind, Sybase provides a mechanism called the Last-Chance Threshold. Shortly before the log fills up entirely, at the moment when transactions are aborted or suspended, Sybase runs the procedure sp_thresholdaction.

This procedure is not present on the system by default; it needs to be created manually by the administrator. Usually the procedure should create an additional TRANSACTION backup, so that some space in the log is freed again and the transactions can continue. Generally, though, the body of the procedure can contain any actions.

A DBA may decide to create more thresholds that fire depending on conditions in the log, but there is only one threshold known as the "Last Chance" threshold. An example of such a procedure is below:

USE sybsystemprocs
GO

CREATE PROCEDURE sp_thresholdaction
        @dbname VARCHAR(30),
        @segmentname VARCHAR(30),
        @space_left INT,
        @status INT
AS
    IF @status=1 --last chance threshold
    BEGIN  
        PRINT 'Last threshold condition met...'
        DUMP TRANSACTION @dbname TO dumpsdev
    END
GO

2017-03-19

Sybase log space exhausted, transactions frozen -what to do?

The main reason for the log file's existence is recording all user transactions coming into the database. Every transaction needs to go through the transaction log. The main source of log-related trouble is a full-log condition. Indeed, if the log is full, we cannot execute any transaction and all activity has to be halted. The log should be big enough to accommodate all transactions. How long a transaction's log records are kept depends on the database settings:

  • if the database has the option 'trunc log on chkpt' set, then whenever a transaction has finished and a CHECKPOINT has saved all the pages modified by it, its log entries are no longer required
  • if the option is not set, a backup of the log needs to be taken first; after a transaction has finished and the dbo has backed up the log, the space in the log can be released.
Let's consider the following example:


CREATE DATABASE Beta ON
BetaDataDev='10M'
LOG ON BetaLogDev='5M'
GO

USE Beta
GO

CREATE TABLE TestTran (ID INT IDENTITY PRIMARY KEY, SomeText NVARCHAR(1000))
GO

BEGIN TRAN
    DECLARE @info VARCHAR(100)
    WHILE(1=1)
    BEGIN
        -- endless transaction: each iteration adds a row until the log fills up
        SET @info = CONVERT(VARCHAR(100), GETDATE())
        INSERT TestTran VALUES(@info)
        PRINT @info
    END

2017-03-12

Damaged database - how to restore the database

A database can be damaged in many different ways, so it is hard to give one solution for all cases:

First, try to bring the database online:

ONLINE db_name

If that does not work, try to force it online:

sp_forceonline_db db_name, sa_on

This procedure tries to bring the database online without checking for consistency errors or suspect pages. These operations should allow the administrator to see the content of the database, perform a last backup, maybe do some exports, and so on.

The next step is usually dropping the database, recreating it as an empty database, and then loading the dump. To drop the database use:

DROP DATABASE db_name

If there are problems with this command, the following command can be used:

dbcc dbrepair (db_name, dropdb)

So the next step is recreating the database. A command displaying information about the backup can be very helpful here, as it shows which devices have been used by the database:
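One such command is LOAD DATABASE with the HEADERONLY or LISTONLY options, which read the dump without loading it; a sketch, with the file name borrowed from the media-failure example above:

-- display the dump header, including the device layout of the dumped database
LOAD DATABASE A1 FROM '/home/dumps/A1_Full.dmp' WITH HEADERONLY
GO
-- LISTONLY=FULL prints even more details about the dump
LOAD DATABASE A1 FROM '/home/dumps/A1_Full.dmp' WITH LISTONLY=FULL
GO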

Load database - restoring a database

The command used to restore a database is LOAD DATABASE. Not every user can issue it; in fact, only a very narrow group can: sa (the system administrator) and dbo (the database owner). LOAD DATABASE overwrites the content of an existing database and is considered a very destructive command: if the wrong dump is loaded, a real problem appears...

Generally, the options used in the LOAD command are the same as in the DUMP command described here; however, just after the load the database is in the offline state. To bring it back to the online state, the ONLINE DATABASE command needs to be issued:

LOAD DATABASE ALFA  from '/home/dumps/01_normal.dmp'
GO
ONLINE DATABASE ALFA
GO

To verify the status of the database, issue the command:

sp_helpdb

If the database is offline, you will see such a description in the status column.

Only a database that is not in use by any user may be restored. If any user session is still active in the database, the following error is recorded:

Dump database

Command

DUMP DATABASE myDB TO myDumpDevice

is the simplest form of creating a backup; however, many options can be added to this simple command. Let's take a look at some examples:

DUMP DATABASE ALFA TO '/home/dumps/01_normal.dmp'

Here a database named ALFA is backed up into the file 01_normal.dmp. What happens if such a file already exists on disk? It is simply overwritten. So: one backup, one file, forever.

DUMP DATABASE ALFA TO '/home/dumps/01_stripped_01.dmp'
STRIPE ON '/home/dumps/01_stripped_02.dmp'

Here the DBA expected the backup to be a really big file, so he decided to dump the database into 2 files. Note that this is still only one backup: part of the database's pages is saved in the first file and the rest in the other. This is not 'mirroring' of the backup file; there is no way to restore the database if you lose one of the files. The purpose of this syntax is just to have smaller files, and if those files are created on different physical disks, performance may also be better. Smaller files are easier to manage, and the file systems where the files are written can be smaller. Each stripe can be created on a separate Backup Server (so also on a separate machine).

Backups to devices

As described in the Dump Devices article, dump devices can make administration much easier; however, they can be used only for local dumps...

Let's assume you have some devices defined. To display the list of devices you can use the following command:

sp_helpdevice


Knowing that dumpsdev is a valid dump device name, you try to execute the following dump command against the Backup Server. Note that you try to use the name of the device:

DUMP DATABASE ALFA TO dumpsdev AT SAP01_BS WITH COMPRESSION=9


Unfortunately, the dump command ends with an error message:

Could not execute statement.
Error encountered by Backup Server.  Please refer to Backup Server
messages for details.
SQLCODE=8009
Server=SAP01, Severity Level=16, State=1, Transaction State=0, Line=16
Line 31
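A usual workaround, assuming the limitation is only in how device names are resolved, is to give the remote Backup Server an explicit file path instead of the device name (the path is hypothetical):

DUMP DATABASE ALFA TO '/home/dumps/alfa_remote.dmp' AT SAP01_BS WITH COMPRESSION=9
GO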

2017-02-19

Dump devices

To create a backup you work with the DUMP DATABASE command. When such a command is executed manually by the administrator, it is quite common to use just a file in the backup command as the place where the backup should be stored:

DUMP DATABASE master TO '/home/dumps/master_2017_02_19.dmp'

However, when backups should always be created in the same location, with the same script and methodology, it is quite convenient to use dump devices. A dump device is a definition that points to some location in the system. This can be a file, but it can also be a tape device. The easiest way to create a device is with the sp_addumpdevice command:

EXECUTE  sp_addumpdevice 'disk', 'dumpsdev','/home/dumps/sap01.dmp'

The first parameter designates the device as a file on disk or a tape device. The second parameter is called the logical name, and it is used in the commands performing the dump. The last one is the physical name of the device; in the case of a file, this is just the file name.

With such a device, the following command can be executed:

DUMP DATABASE master TO dumpsdev

This dump of the master database will be saved in dumpsdev, that is, in the file /home/dumps/sap01.dmp. Should you forget the mapping between logical and physical names, the procedure sp_helpdevice can always be used to display how a device has been defined:

Backup: Backup Server

There is a special service associated with a Sybase installation: the Backup Server. It can be used to perform a backup or load:

  • remotely
  • directly from disk, bypassing the main Sybase process

The steps performed during a backup or restore done this way are:

  • the Sybase process (possibly on a different machine) sends a request to perform a backup/load operation using a Remote Procedure Call (RPC)
  • the backup can be divided into many stripes; it is the Backup Server's responsibility to decide how big the stripes will be; additionally, those stripes (files) can be located on local drives but also on remote shares, making the backup less dependent on the same hardware as the database itself!
  • it is possible for a local Backup Server to perform the backup of locally installed databases and send the data to a remote Backup Server, which in turn saves the received backup on tapes available on that remote machine

Of course, a Backup Server needs to be installed and configured on the same machine as Sybase. That is a requirement, because the Backup Server process needs access to the same devices as the Sybase server. The Backup Server listens on a dedicated port, different from the Sybase port. The information about the ports can be found in the interfaces file. Below is an example of the content of an interfaces file:
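A sketch of what such an interfaces file may contain; the host name and port numbers are assumptions, not taken from the installation above:

SAP01
    master tcp ether SAPSybase 5000
    query tcp ether SAPSybase 5000

SAP01_BS
    master tcp ether SAPSybase 5001
    query tcp ether SAPSybase 5001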

2017-02-12

Backup: Some facts

Well, backup is one of the most important duties of a database administrator. Let's first go over a couple of facts about backups:

  • If the database contains a small amount of data but takes a lot of space on disk, the backup will still be small, as only used pages are backed up; unused pages are skipped
  • Every administrator should have a backup and recovery plan, and from time to time an exercise should be done. The goal of the exercise is to test whether all the factors required to successfully restore the database are in place
  • To perform a backup of a database you need permissions: you must be the database owner (dbo). Of course, it is enough to be aliased to dbo; that way the sa role is also capable of backing up any database. Additionally, every member of the server role oper_role has the right to dump any database. Assuming there is a login named backup_login, you can assign it to oper_role using the following command:

    sp_role "grant" , oper_role, backup_login

    or using the newer syntax:

    GRANT ROLE oper_role TO backup_login
    
    
  • Backups can be done while the database is in use, so generally backups should be performed quite often, depending on the load on the server, the number of transactions, and the importance of the data stored in the database. A backup increases the load on the server (CPU and disk) by roughly 5%

SAP Adaptive Server Enterprise
