Block Change Tracking setup and measurement
For everything you ever wanted to know about Block Change Tracking (BCT) but were afraid to ask, check out the excellent presentation from Alex Gorbachev, which also covers the overhead of running with BCT enabled.
Here are some quick notes, mainly for my own reference, on BCT. BCT can be enabled while the database is up. Enabling BCT requires a tracking file location, which can also be set while the database is up, but changing the tracking file location after it has been set requires bouncing the database. Finally, with BCT enabled we can look at the upper bound on the size of an incremental database backup. The actual backup could be as large as the upper bound or as small as 1/4 of it, because BCT tracks changes in chunks of 4 blocks at a time: a chunk with only one changed block still contributes all 4 blocks to the upper bound.
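On the relocation caveat: the documented way to move the tracking file without losing its history is a bounce plus a rename; disabling and re-enabling at the new location avoids the bounce but throws away the tracked history, so the next incremental reads the whole database. A sketch, with hypothetical paths:

```sql
-- Option 1: keep the change history -- requires a bounce (paths are examples)
shutdown immediate
-- move the file at the OS level, e.g.: mv /oradata/orcl/bct.chg /u02/bct.chg
startup mount
alter database rename file '/oradata/orcl/bct.chg' to '/u02/bct.chg';
alter database open;

-- Option 2: no bounce, but the change history is lost,
-- so the next level 1 incremental must read every datafile in full
alter database disable block change tracking;
alter database enable block change tracking using file '/u02/bct.chg';
```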
select status from v$block_change_tracking;
--> DISABLED

alter database enable block change tracking;
--> ERROR at line 1:
--> ORA-19773: must specify change tracking file name

alter system set db_create_file_dest='/oradata/orcl/' scope=both;

alter database enable block change tracking;

select * from v$block_change_tracking;
--> STATUS     FILENAME                                                      BYTES
--> ---------- -------------------------------------------------------- ----------
--> ENABLED    /oradata/orcl/ORCL/changetracking/o1_mf_6ybbfdjv_.chg      11599872
Query based on an original query from Alex Gorbachev giving the number of blocks to be read during a level 1 incremental backup. This should also be an upper bound on the size of a level 1 incremental database backup. Alex's original query looked at the reads incurred by a single datafile; this version attempts to cover the whole database.
-- from Alex Gorbachev
-- (I believe the 32 refers to 4 * 8K block size, i.e. 32K chunks,
--  so if your block size is different you'll have to change this)
SELECT (count(distinct b.fno||' '||bno) * 32)/1024 MB
FROM x$krcbit b,
     (SELECT MIN(ver) min_ver, fno
      FROM (SELECT curr_vercnt ver, curr_highscn high, curr_lowscn low, fno
            FROM x$krcfde
            UNION ALL
            SELECT vercnt ver, high, low, fno
            FROM x$krcfbh)
      WHERE (SELECT MAX(bd.checkpoint_change#)
             FROM v$backup_datafile bd
             WHERE bd.incremental_level <= 1) between low and high
      GROUP BY fno) sub
WHERE b.fno = sub.fno
AND b.vercnt >= sub.min_ver
/
--> 960
Alex’s original query:
SELECT count(distinct bno) * 32
FROM x$krcbit b
WHERE b.fno = 7
AND b.vercnt >=
    (SELECT MIN(ver)
     FROM (SELECT curr_vercnt ver, curr_highscn high, curr_lowscn low
           FROM x$krcfde
           WHERE fno = 7
           UNION ALL
           SELECT vercnt ver, high, low
           FROM x$krcfbh
           WHERE fno = 7)
     WHERE (SELECT MAX(bd.checkpoint_change#)
            FROM v$backup_datafile bd
            WHERE bd.file# = 7
            AND bd.incremental_level <= 1) between low and high);
Running incremental backups for a while, it's possible to collect the historical ratio between the number of blocks read and the number and size of the backups. This would also account for compression.
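A minimal sketch of collecting that history, assuming the backup records are still retained in the control file (v$backup_datafile keeps one row per datafile per backup):

```sql
-- blocks read vs blocks written per day of level 1 incrementals;
-- blocks/blocks_read approximates how much the backup shrank vs what was read
select trunc(completion_time)  backup_day,
       sum(blocks_read)        blocks_read,
       sum(blocks)             blocks_written
from   v$backup_datafile
where  incremental_level = 1
group  by trunc(completion_time)
order  by trunc(completion_time);
```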
Note that the query above is just an example and it has the following limitations:
• Chunk size is hard coded to 32K (could it vary on different platforms?)
• First block overhead is not accounted for
• No special case when required bitmap version is not available (purged) and the whole datafile must be read
• No case with backup optimization for level 0 (v$backup_datafile.used_optimization)
• No case when no data blocks in the datafile have changed (no bitmap version, but the first block must be backed up anyway)
• Only single datafile
• No accounting for unavailable base incremental backup
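For the level 0 optimization caveat in the list above, a quick check (a sketch, assuming RMAN backup optimization may be in play on your system) is to look at used_optimization on the base backups:

```sql
-- which level 0 datafile backups used backup optimization (YES/NO)
select file#, incremental_level, used_optimization, completion_time
from   v$backup_datafile
where  incremental_level = 0
order  by file#;
```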