Block Change Tracking setup and measurement

For everything you ever wanted to know about Block Change Tracking (BCT) but were afraid to ask, check out the excellent presentations from Alex Gorbachev, which also cover BCT overhead:

  • http://www.pythian.com/documents/UKOUG06-10gBCT.ppt
  • http://www.pythian.com/documents/Pythian-oracle-block-change.pdf

Here are some quick notes, mainly for my own reference, on BCT. BCT can be enabled while the database is up. Turning on BCT requires giving a tracking file location, which can also be done while the database is up. Changing the tracking file location after it has already been set, however, requires bouncing the database. Finally, with BCT enabled we can look at the upper bound on the size of an incremental database backup. The actual backup could be as large as the upper bound or as small as 1/4 of it, because BCT only tracks changes in groups of 4 blocks at a time.

select status from v$block_change_tracking;
-->  DISABLED
alter database enable block change tracking;
-->  ERROR at line 1:
-->  ORA-19773: must specify change tracking file name
alter system set db_create_file_dest='/oradata/orcl/' scope=both;
alter database enable block change tracking;
select * from v$block_change_tracking;
-->   STATUS     FILENAME                                                      BYTES
-->   ---------- -------------------------------------------------------- ----------
-->   ENABLED    /oradata/orcl/ORCL/changetracking/o1_mf_6ybbfdjv_.chg     11599872
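As noted above, relocating the tracking file after it has been set requires bouncing the database. A hedged sketch of the two usual approaches (file paths here are hypothetical):

```sql
-- Option 1: disable and re-enable with the new location.
-- No bounce needed, but the accumulated tracking data is lost:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/newpath/bct.chg';

-- Option 2: preserve the tracking data (requires a bounce):
SHUTDOWN IMMEDIATE
--   ... move the file at the OS level, e.g. mv the old .chg file to /newpath/bct.chg ...
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/oradata/orcl/bct.chg' TO '/newpath/bct.chg';
ALTER DATABASE OPEN;
```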

The following query, based on an original query from Alex Gorbachev, gives the number of blocks to read during a level 1 incremental backup. This should also be an upper bound on the size of a level 1 incremental database backup. Alex's original query was based on the reads incurred by one datafile; this query attempts to look at the whole database.

--  based on a query from Alex Gorbachev (I believe the 32 is KB per chunk, i.e. 4 x 8K blocks, so if your block size is different you'll have to change it)
SELECT (count(distinct b.fno||' '||bno) * 32)/1024 MB
  FROM x$krcbit b,
    (SELECT MIN(ver) min_ver, fno
       FROM
         (SELECT curr_vercnt ver,
                 curr_highscn high,
                 curr_lowscn low,
                 fno
          FROM x$krcfde
              UNION ALL
          SELECT
                  vercnt ver,
                  high,
                  low,
                  fno
          FROM x$krcfbh )
          WHERE (SELECT MAX(bd.checkpoint_change#)
                 FROM   v$backup_datafile bd
                 WHERE  bd.incremental_level <= 1)
                 between low and high
           GROUP BY fno      ) sub
 WHERE b.fno = sub.fno AND b.vercnt >= sub.min_ver
/
--> 960
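After the next level 1 incremental backup completes, the estimate above can be sanity-checked against what RMAN actually backed up. A hedged sketch using v$backup_datafile, which records the blocks backed up per datafile and whether change tracking was used:

```sql
-- Compare actual level 1 incremental backup sizes with the estimated upper bound
SELECT completion_time,
       used_change_tracking,                              -- YES when BCT was used
       blocks * block_size / 1024 / 1024 actual_mb,       -- blocks actually backed up
       datafile_blocks * block_size / 1024 / 1024 datafile_mb
  FROM v$backup_datafile
 WHERE incremental_level = 1
 ORDER BY completion_time DESC;
```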

Alex's original query, for a single datafile (fno = 7):

SELECT count(distinct bno) * 32
  FROM x$krcbit b
 WHERE b.fno = 7
   AND b.vercnt >=
       (SELECT MIN(ver)
          FROM (SELECT curr_vercnt ver, curr_highscn high, curr_lowscn low
                  FROM x$krcfde WHERE fno = 7
                 UNION ALL
                SELECT vercnt ver, high, low
                  FROM x$krcfbh WHERE fno = 7)
         WHERE (SELECT MAX(bd.checkpoint_change#)
                  FROM v$backup_datafile bd
                 WHERE bd.file# = 7
                   AND bd.incremental_level <= 1) BETWEEN low AND high);

After running incremental backups for a while, it's possible to collect the historical ratio between the number of blocks read and the number and size of the backups. This would also account for compression.
Note that the query above is just an example, and it has the following limitations:
• The chunk size is hard-coded to 32K (could it vary on different platforms?)
• The first block overhead is not accounted for
• No special case when the required bitmap version is not available (purged) and the whole datafile must be read
• No case with backup optimization for level 0 (v$backup_datafile.used_optimization)
• No case when no data blocks in the datafile have changed (no bitmap version, but the first block must be backed up anyway)
• Only a single datafile
• No accounting for an unavailable base incremental backup


Trackbacks

  • June 3rd, 2011: Log Buffer #223, A Carnival of the Vanities for DBAs (The Pythian Blog)

Comments

  Kyle Hailey, June 17th, 2011:

    In some cases, enabling BCT can run into the old ORA-4031:

    SQL> alter database enable block change tracking using file '/oracle/block_change.dbf';
    alter database enable block change tracking using file '/oracle/block_change.dbf'
    *
    ERROR at line 1:
    ORA-19760: error starting change tracking

    from the alert log:

    Fri Jun 17 17:13:01 2011
    alter database enable block change tracking using file '/oracle/block_change.dbf'
    Fri Jun 17 17:13:01 2011
    Block change tracking file is current.
    Starting background process CTWR
    CTWR started with pid=276, OS id=2537
    Fri Jun 17 17:13:02 2011
    Errors in file /oracle/xxx_ctwr_2537.trc:
    ORA-04031: unable to allocate 1826720 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","CTWR dba buffer")
    Block change tracking service stopping.
    Fri Jun 17 17:13:02 2011
    Stopping background process CTWR
    ORA-19760 signalled during: alter database enable block change tracking using file '/oracle/block_change.dbf'...
    Deleted file /oracle/block_change.dbf

    The error persists even after flushing the shared pool, and in many cases bouncing the database may not be an option.

    Two options are to increase the large pool or to reduce the BCT buffer:

    1) example: alter system set large_pool_size = 100M;
    2) tweak the hidden parameter _bct_public_dba_buffer_size, but only as a last resort
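    Before and after making either change, the size and location of the CTWR buffer can be checked in v$sgastat. A hedged sketch (with a large pool configured, the buffer should move out of the shared pool):

```sql
-- Where is the CTWR dba buffer allocated, and how big is it?
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE name LIKE 'CTWR%';
```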

  November 17th, 2011:

    Another good piece of information, thanks to Francisco Sanchez, is how to prepare for the size of the BCT file:
    the minimum size is 11 MB, and it grows in increments of 10 MB as needed.

    The following query can be used to calculate the maximum size of the BCT file from the number of files in the database, for a non-RAC case:
    SELECT (((SELECT SUM(ceil(bytes / (7686 * 1024 * 1024))) * 8 bitmap_ext
                FROM v$datafile) +
             (SELECT ceil(value / 252) file_descr_ext
                FROM v$parameter
               WHERE name = 'db_files') + 1) *
            32 + 1088) / 1024 bct_file_size_mb
      FROM dual;

    However, if the result is less than 11 MB, then the size will be 11 MB.
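    The 11 MB floor can also be folded directly into the query with GREATEST; this is the same computation as above, just with the minimum applied:

```sql
-- Same calculation, with the 11 MB minimum applied via GREATEST
SELECT GREATEST(11,
       (((SELECT SUM(ceil(bytes / (7686 * 1024 * 1024))) * 8
            FROM v$datafile) +
         (SELECT ceil(value / 252)
            FROM v$parameter
           WHERE name = 'db_files') + 1) *
        32 + 1088) / 1024) bct_file_size_mb
  FROM dual;
```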
