This article is for HANA DB version 2; for HANA DB version 1, please use the separate article. Since this article is a revision of the previous one, some details are omitted and the focus is on what changed from the previous version.
If you are reading this article, you may also be interested in Backup SAP HANA2 using storage snapshot.
Disclaimer: this article says nothing about BACKINT; it is about file backup only. A HANA2 database runs in multi-tenant mode, which means it runs a small SYSTEMDB database and one or more TENANT databases. This document covers a single-tenant setup only.
Backups are performed with the BACKUP DATA command using the hdbsql CLI. A backup can be FULL, DIFFerential (only data changed since the full backup was created) or INCremental (data changed since any previous backup). In addition, every 15 minutes (by default), HANA creates its own backup of the current logs in the $DIR_INSTANCE/backup/log/{SYSTEMDB|DB_<SID>}/ directory (by default), similar to Oracle archived logs. They are essential for recovery to a certain point in time. Every time HANA makes any backup, it also writes a copy of the BACKUP CATALOG to the same default location, separately for SYSTEMDB and for each TENANT. This file is named similar to "log_backup_0_0_0_0.1520355726710", where the large number is the epoch time of this catalog copy. The CATALOG contains information about all previously made backups, including the archived logs mentioned above.
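To get a feel for what lands there, you can simply list the default log backup location of the tenant (the path below assumes the defaults described above):
sidadm> ls -1t $DIR_INSTANCE/backup/log/DB_${SAPSYSTEMNAME}/ | head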
This place can easily overflow if it is not cleaned by an administrator. I have a script that compresses outdated files and removes files that are too old. However, the metadata about all of these backups remains in the BACKUP CATALOG, which continues to grow. You can clean it with the hdbsql CLI command:
hdbsql=> BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID XXXX ;
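To pick a suitable BACKUP_ID, you can first list the catalog entries (the view and column names are the same ones the backup script below relies on):
hdbsql=> SELECT BACKUP_ID, SYS_START_TIME, ENTRY_TYPE_NAME, STATE_NAME FROM SYS.M_BACKUP_CATALOG ORDER BY SYS_START_TIME DESC;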
The latest version of the script does this automatically for you, leaving only the last month of history in the catalog.
SAP does not recommend using the SYSTEM user for routine tasks. Let's create a user named BACKUP according to the recommendations:
sidadm> hdbsql -n localhost:30013 -u SYSTEM
hdbsql SYSTEMDB=> CREATE ROLE BACKUP_ROLE;
hdbsql SYSTEMDB=> GRANT BACKUP ADMIN,DATABASE BACKUP ADMIN,CATALOG READ,MONITORING to BACKUP_ROLE;
hdbsql SYSTEMDB=> CREATE USER BACKUP PASSWORD "ComplexPassword" NO FORCE_FIRST_PASSWORD_CHANGE;
hdbsql SYSTEMDB=> ALTER USER BACKUP DISABLE PASSWORD LIFETIME;
hdbsql SYSTEMDB=> GRANT BACKUP_ROLE to BACKUP;
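Optionally, verify that the new user can log in to SYSTEMDB (the password here is just the placeholder from the example above):
sidadm> hdbsql -n localhost:30013 -u BACKUP -p ComplexPassword "SELECT CURRENT_USER FROM DUMMY;"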
The created user and role exist only in SYSTEMDB, which is good, but not enough. The same role must exist at the tenant level and the remote user "BACKUP" should be authorized for this role.
sidadm> hdbsql -i 00 -u SYSTEM
hdbsql SID=> CREATE ROLE BACKUP_ROLE;
hdbsql SID=> GRANT BACKUP ADMIN,CATALOG READ,MONITORING to BACKUP_ROLE;
hdbsql SID=> CREATE USER BACKUP WITH REMOTE IDENTITY BACKUP AT DATABASE SYSTEMDB ;
hdbsql SID=> GRANT BACKUP_ROLE to BACKUP;
You need to set up a passwordless database connection for this user, otherwise you will have to hard-code the password in your script, which is even worse.
sidadm> hdbuserstore DELETE BACKUP
sidadm> hdbuserstore -i SET BACKUP localhost:30013@SYSTEMDB BACKUP
Note that the SET directive differs from the HANA1 one: it must point to SYSTEMDB, not to the TENANT database.
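To verify the key works without a password, check the store entry and run a trivial query through it (SELECT * FROM DUMMY is just a connectivity test):
sidadm> hdbuserstore LIST BACKUP
sidadm> hdbsql -U BACKUP "SELECT * FROM DUMMY;"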
And, finally, the backup script itself. Schedule it to run as the <sid>adm user, as shown in the example in the script's header. The destination directory DDIR must exist and be writable by <sid>adm. Adjust the location to your needs.
#!/bin/bash
# By Oleg Volkov (voleg)
# Copy it to sidadm/bin and run it as a sidadm cronjob:
# 2 */6 * * * [ -x $HOME/bin/hana_backup_local.sh ] && $HOME/bin/hana_backup_local.sh |& logger -t hana-backup

# Reload environment (important for cron jobs)
source $HOME/.bashrc

# This SQL returns the previous good full backup id from the catalog:
SQL="SELECT top 1 min(to_bigint(BACKUP_ID)) FROM SYS.M_BACKUP_CATALOG where SYS_START_TIME >= ADD_DAYS(CURRENT_TIMESTAMP,-15) and ENTRY_TYPE_NAME='complete data backup' and STATE_NAME='successful' ;"
# Same for tenant ${SAPSYSTEMNAME}
SQLTENANT="SELECT top 1 min(to_bigint(BACKUP_ID)) FROM ${SAPSYSTEMNAME}.SYS.M_BACKUP_CATALOG where SYS_START_TIME >= ADD_DAYS(CURRENT_TIMESTAMP,-15) and ENTRY_TYPE_NAME='complete data backup' and STATE_NAME='successful' ;"

HDBSQL="$DIR_EXECUTABLE/hdbsql -U BACKUP"
HANAv=$($DIR_EXECUTABLE/hdbsrvutil -v | awk '/version:/ {if($2 < 2){print "1"}else{print "2"};exit}')
DDIR="/hana/backup/raw" ; [ -d $DDIR ] || mkdir -m 751 $DDIR
STAMP="$(date +%Y%m%d%H)"

# Cleanup old files (to free space first):
find $DDIR -type f -mtime +13 -exec rm -fv "{}" \;

# Decide the backup type: FULL on Sunday 06:xx, DIFFERENTIAL at 06:xx on other days, INCREMENTAL otherwise
TYPE=""
if [ $(date "+%H") = "06" ] ; then
    # Make FULL on Sun (0th weekday)
    if [ $(date "+%w") -ne 0 ] ; then TYPE="DIFFERENTIAL" ; fi
else
    TYPE="INCREMENTAL"
fi

echo " ** Doing ${TYPE:-FULL} backup"
OPTS="USING FILE ('$DDIR/$STAMP')"
[ 'x'"$HANAv" = 'x2' ] && OPTS="FOR ${SAPSYSTEMNAME} USING FILE ('$DDIR','$STAMP')"
$HDBSQL "BACKUP DATA $TYPE $OPTS;" || { echo "Backup failed, script aborted" ; exit 1 ; }

if [ -z "$TYPE" ] ; then
    # When FULL runs, also make a full backup for SYSTEMDB
    [ 'x'"$HANAv" = 'x2' ] && $HDBSQL "BACKUP DATA $TYPE FOR SYSTEMDB USING FILE ('$DDIR','${STAMP}-SYSTEMDB');" || { echo "Backup failed, script aborted" ; exit 1 ; }

    # Clean the backup catalog from outdated backups:
    # Same syntax for HANA1 and the HANA2 SYSTEMDB catalog
    IDFILE=$(mktemp)
    $HDBSQL -o $IDFILE -a "$SQL"
    BACKUPID=$(cat $IDFILE)
    rm -f $IDFILE
    if [ -z "$BACKUPID" -o 'x'"$BACKUPID" = 'x?' ] ; then
        echo " ** No backup to purge from catalog"
    else
        echo " ** BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID $BACKUPID"
        $HDBSQL "BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID $BACKUPID"
    fi

    if [ 'x'"$HANAv" = "x2" ] ; then
        # Every TENANT DB has its own backup catalog.
        IDFILE=$(mktemp)
        $HDBSQL -o $IDFILE -a "$SQLTENANT"
        BACKUPID=$(cat $IDFILE)
        rm -f $IDFILE
        if [ -z "$BACKUPID" -o 'x'"$BACKUPID" = 'x?' ] ; then
            echo " ** No backup to purge from catalog"
        else
            echo " ** BACKUP CATALOG DELETE FOR ${SAPSYSTEMNAME} ALL BEFORE BACKUP_ID $BACKUPID"
            $HDBSQL "BACKUP CATALOG DELETE FOR ${SAPSYSTEMNAME} ALL BEFORE BACKUP_ID $BACKUPID"
        fi
    fi
fi

echo " ** Saving backup catalog"
BCATALOG=$(ls -1tr $DIR_INSTANCE/backup/log{,/DB_${SAPSYSTEMNAME}}/log_backup_0_0_0_0* 2>/dev/null | tail -1)
cp -av $BCATALOG ${DDIR}/
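Before relying on cron, it is worth doing a one-off manual run as <sid>adm (the script path matches the header above; the log file name here is just an example):
sidadm> chmod 755 $HOME/bin/hana_backup_local.sh
sidadm> $HOME/bin/hana_backup_local.sh |& tee /tmp/hana_backup_test.log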
Schedule it to run hourly or less frequently. It is pre-configured to run a FULL backup on Sunday at 06:xx, a DIFF at 06:xx on the other days, and an INC in all other cases.
The script performs backups separately for the SYSTEMDB and TENANT databases. Since there are no frequent changes in SYSTEMDB, it is only backed up with FULL backups.
All that remains is to take care of getting the /hana/backup/raw directory to tape.
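How exactly it reaches tape depends on your environment. As a minimal sketch, assuming a reachable backup host named backuphost with a /backup/hana staging path (both hypothetical), it could be as simple as:
sidadm> rsync -av /hana/backup/raw/ backuphost:/backup/hana/${SAPSYSTEMNAME}/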
Now for the restore part. First of all, stop the database with the HDB stop command. Then check whether any processes named "hdbrsutil" are still running. These processes keep previously loaded tables in memory to save loading time the next time the database is started. I couldn't find any information about their behavior when restoring a database from a backup, so I stopped them. A clean stop mirrors what is shown in the "ps" output, for example:
sidadm> HDB stop
sidadm> ps -ef | grep hdbrsutil
hdbrsutil --start --port 30003 --volume 2 --volumesuffix mnt00001/hdb00002.00003 --identifier 1598894163
Then the stop command will be:
sidadm> hdbrsutil --stop --port 30003
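If more than one hdbrsutil process is running, a small loop over the ports taken from the ps output saves some typing; this is only a sketch of the same --stop call shown above:
sidadm> for PORT in $(ps -ef | awk '/hdbrsuti[l] --start/ {for(i=1;i<=NF;i++) if($i=="--port") print $(i+1)}') ; do hdbrsutil --stop --port $PORT ; done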
As I said, this guide is about recovering to a different server, or recovering as a last resort. Normal recovery from an accidental table drop can be done via hdbstudio with a simple right-click.
Remove all data in the data, log, and log backup (archive) areas:
sidadm> rm -rf /hana/data/${SAPSYSTEMNAME}/* /hana/log/${SAPSYSTEMNAME}/*
sidadm> find $DIR_INSTANCE/backup/log/ -type f -delete
Check that the /hana/backup/raw directory includes the required FULL backup, probably followed by a DIFF backup and one or more INC backups. Restore them from tape if needed.
Since my script creates only FULL backup copies of SYSTEMDB, this is the only option for restoring it.
Locate the most recent FULL backup:
sidadm:/hana/backup/raw> ls -1tr *SYSTEMDB*
2020083119-SYSTEMDB_databackup_0_1
2020083119-SYSTEMDB_databackup_1_1
As you can see, the prefix of the full backup in our case is 2020083119-SYSTEMDB. Syntax similar to HANA1 will be used at recovery time. Create a command file for the restore using the detected prefix:
sidadm> echo "RECOVER DATA USING FILE ('/hana/backup/raw/2020083119-SYSTEMDB') CLEAR LOG" > $DIR_INSTANCE/work/recoverInstance.sql
Now clear HANA's log and trace files so that they are easier to read, then start the instance:
sidadm> cdtrace && rm -rf *
sidadm> HDB start
If you received the final OK, the restore completed; otherwise, check the file named like nameserver_HOSTNAME.30001.003.trc.
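To quickly see what the recovery was doing, you can grep the freshest nameserver trace for recovery messages (the file name pattern follows the example above):
sidadm> cdtrace
sidadm> grep -i recover $(ls -1t nameserver_*.trc | head -1) | tail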
We have to restore SYSTEMDB first, as shown before. Then connect to it using "hdbsql" and check the status of our tenant:
sidadm> hdbsql -n localhost:30013 -u SYSTEM
hdbsql SYSTEMDB=> \pa
Page by page scroll output switched OFF
hdbsql SYSTEMDB=> \al
Aligned output mode switched ON
hdbsql SYSTEMDB=> select * from m_databases;
| DATABASE | DESCRIPTION     | ACT | ACTIVE_S | OS_USER  | OS_GROUP | RESTART | F |
| -------- | --------------- | --- | -------- | -------- | -------- | ------- | - |
| SYSTEMDB | SystemDB-NDB-00 | YES |          |          |          | DEFAULT | ? |
| NDB      | NDB-00          | NO  |          |          |          | DEFAULT | ? |
The output means that SYSTEMDB knows about the existence of the TENANT database and is ready for its restore.
In any case, you have to restore the TENANT database from a FULL backup first in order to create all the volume and segment files; otherwise you will receive an error like "cannot open file, blah-blah" or "filesystem path does not exist". To find the correct prefix name of the FULL backup, do:
/hana/backup/raw> ls -1tr | egrep -v "log_backup|differential|incremental"
2020083112_databackup_2_1
2020083112_databackup_0_1
2020083112_databackup_1_1
2020083119_databackup_0_1
2020083119_databackup_2_1
2020083119-SYSTEMDB_databackup_0_1
2020083119-SYSTEMDB_databackup_1_1
sidadm> hdbsql -n localhost:30013 -u SYSTEM
hdbsql SYSTEMDB=> RECOVER DATA for <TENANTDB> USING FILE ('/hana/backup/raw/2020083119') CLEAR LOG;
hdbsql SYSTEMDB=> SELECT * FROM M_DATABASES;
| DATABASE | DESCRIPTION     | ACT | ACTIVE_S | OS_USER  | OS_GROUP | RESTART | F |
| -------- | --------------- | --- | -------- | -------- | -------- | ------- | - |
| SYSTEMDB | SystemDB-NDB-00 | YES |          |          |          | DEFAULT | ? |
| NDB      | NDB-00          | YES |          |          |          | DEFAULT | ? |
You are done if your goal was to restore from a FULL backup. But if you want to recover from the following DIFF + INC + INC files, you should stop the TENANT database:
sidadm> hdbsql -n localhost:30013 -u SYSTEM
hdbsql SYSTEMDB=> ALTER SYSTEM STOP DATABASE <TENANTDB> ;
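You can confirm the tenant actually went down with the same M_DATABASES view used earlier (ACTIVE_STATUS should show NO for the tenant):
hdbsql SYSTEMDB=> SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES;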
Restoring from DIFF and INC backups requires the BACKUP CATALOG to be available. Restore the CATALOG (a file named like log_backup_0_0_0_0.xxx) and put it next to the backup data. The catalog timestamp should be equal to or higher than the backup file timestamp. Translate the backup file timestamp into human-readable form like this:
/hana/backup/raw> ls -1tra
..
2020083120_databackup_incremental_1598893310270_1598893823128_2_1
log_backup_0_0_0_0.1598893826050
/hana/backup/raw> date -d @$(echo 1598893823128 |awk '{printf"%d\n",$1/1000}') "+%Y-%m-%d %H:%M:%S"
2020-08-31 20:10:23
The resulting date will be the UNTIL TIMESTAMP option for the restore. Run this command in "hdbsql":
sidadm> hdbsql -n localhost:30013 -u SYSTEM
hdbsql SYSTEMDB=> RECOVER DATABASE for <TENANTDB> UNTIL TIMESTAMP '2020-08-31 20:10:23' CLEAR LOG USING CATALOG PATH ('/hana/backup/raw') USING LOG PATH ('/hana/backup/raw') USING DATA PATH ('/hana/backup/raw');
hdbsql SYSTEMDB=> SELECT * FROM M_DATABASES;
| DATABASE | DESCRIPTION     | ACT | ACTIVE_S | OS_USER  | OS_GROUP | RESTART | F |
| -------- | --------------- | --- | -------- | -------- | -------- | ------- | - |
| SYSTEMDB | SystemDB-NDB-00 | YES |          |          |          | DEFAULT | ? |
| NDB      | NDB-00          | YES |          |          |          | DEFAULT | ? |