Oaktable World video: Database Virtualization and Instant Provisioning
Slides available at: Database Virtualization and Instant Cloning
Thanks to Marcin Przepiorowski for editing videos and Tim Gorman for funding the videos. For a full list of Oaktable World 2013 videos see http://dboptimizer.com/oaktable-world/
A completely new and totally different database virtualization presentation will be given at
- RMOUG Feb 12, 2013 11:15 room 402 “technical” and 1:15 room 407 “marketing” with technical information
- NoCOUG Feb 21, 2013
- HOTSOS Mar 5, 2013
What is Database Virtualization?
Perhaps the single largest storage consolidation opportunity in history
By Kyle Hailey, Delphix http://delphix.com
January, 2013
Brief
How would you like to
- Double development output
- Lighten DBA work load
- Reduce storage
Existing database cloning technologies allow increased development output, fewer bugs in production, and reduced DBA workload. Database virtualization, built upon these technologies, can greatly increase these gains. In this paper we’ll examine the history of using database clones to improve application development and the technical advances of thin provisioned clones and ultimately database virtualization that allow massive gains in productivity.
Introduction
Oracle estimates that customers deploy, on average, 12 clones of production databases to non-production environments. These database clones are used to support the software development lifecycle – developing new functionality and testing new versions of applications in quality assurance (QA) and user acceptance testing (UAT) prior to production. The clones are also used for reporting and ad hoc information queries. Further, Oracle predicts this average will double by the time Oracle 12c is adopted.* Today, most cloning is accomplished by creating full physical copies of production databases. These full physical copies are time consuming to make, require significant DBA time and storage space, and generally lead to project delays.
Development demands preclude organizations from working directly with the production database. Development of new versions of applications must be performed in a sandbox where schema changes and data additions, subtractions, and manipulations can be performed without affecting business continuity. After development, QA and UAT testing must be done on a system that matches the development specifications, along with suitable data. Finally, ad hoc and reporting queries can have unexpected resource consumption which negatively affects performance on production systems.
Development and QA processes can further exacerbate the need for copies. Developers generally work on separate branches of code, which can have associated requirements for database schema changes or specific datasets. If developers share a database copy, it falls to them to make sure any changes are approved and compatible with what everyone else is working on. The approval process alone can take weeks, and much more time is lost debugging when data or schema changes break others’ code. Ideally, developers would operate in a sandbox with their own copy of the production test database.
QA teams generally run multiple regression test suites, validating that the newly developed functionality works and that existing functionality hasn’t broken. When working with a single copy of a production database, this puts QA in a bind – they have to run all test suites either simultaneously or serially. When the test suites are run simultaneously, teams run the risk of compromising the results as data are modified by multiple independent tests. Test suites can be run serially – refreshing the database copy after each test – but at a massive hit to productivity. Much as with development, the ideal scenario is a production clone for each test suite.
As an example scenario, a customer with a 1 terabyte database, 100 developers, and 20 test suites would need close to 130 production database copies (one database copy per developer and per test suite, plus a few extra for branching, merging, ad hoc queries, and reporting). Understandably, very few companies have the resources (DBA time, storage) to provision these, let alone keep them refreshed for the duration of the project.
Given the high demand for clones of production databases, companies and DBAs often struggle to keep up and must make sacrifices in quality or quantity. The compromises generally take the form of fewer, shared databases, partial subset databases, or a mixture of both.
Solutions
Development productivity gains, reduction of production bugs, and DBA time savings have been available without extra licenses through little known functionality in Oracle since version 11.2.0.2. Even greater productivity gains are available with industry leading technologies, supporting additional versions of Oracle and other leading databases. These technologies enable productivity gains by reducing the workload and resources required to provision multiple copies of production databases.
In our previous example, creating 130 copies of a 1 TB database is easily possible in the space of a single copy of the production database using thin provisioned cloning. Thin provisioned cloning yields enormous disk savings by sharing the majority of the source database’s data blocks: a large portion of blocks remain identical across multiple copies of a database, so the unchanged blocks can be shared between clones. This technology ultimately led to database virtualization, which goes beyond thin provisioned cloning to dramatically reduce the overhead of managing many cloned databases, providing significant agility to development teams.
Database virtualization is based on the core technology of thin provision cloning, which provides clones of production databases in less space and time than making full physical copies. Database virtualization evolves this technology to provide specific management controls, allowing virtual databases to be created, refreshed, rolled back, cloned, branched and deleted in minutes. Virtual databases can be provisioned from any time frame (down to the second) within the source database’s retention window.
This functionality allows each developer and each QA test suite to have its own full copy of a production database. Further, developers and testers can have access to weeks’ worth of database backups in the space of a single backup. These backups can be brought online, the data reviewed or extracted, and the copy removed, all in minutes. Database virtualization lets DBAs stop making compromises – they can provide any number of databases without worrying about the scope of the effort or the space required, and developers and testers can ensure significantly higher quality with more complete data.
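The point-in-time provisioning described above follows a simple selection rule: start from the newest snapshot at or before the requested point, then roll forward by applying the captured changes up to the target. A minimal sketch in Python – the function name and its SCN-based interface are illustrative assumptions, not a Delphix or Oracle API:

```python
from bisect import bisect_right

def provisioning_plan(snapshot_scns, target_scn):
    """Choose a base snapshot and the redo interval for a point-in-time clone.

    snapshot_scns: sorted list of SCNs at which snapshots were taken
    target_scn:    the point in time (SCN) the user asked for

    The clone starts from the newest snapshot at or before the target,
    then rolls forward by applying captured redo from that snapshot's
    SCN up to the target SCN.
    """
    i = bisect_right(snapshot_scns, target_scn)
    if i == 0:
        raise ValueError("target precedes the retention window")
    base = snapshot_scns[i - 1]
    return base, (base, target_scn)  # (base snapshot, redo interval to apply)

print(provisioning_plan([100, 200, 300], 250))  # (200, (200, 250))
```

This is why any second (or SCN) within the retention window is reachable: the window only has to guarantee some snapshot at or before the target plus the redo captured between them.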
To recap, the three industry technologies available for making clones are:
- Full physical clone
- Thin provisioned clone
- Database virtualization
Next we’ll describe how each of these technologies solves the problems presented by creating copies of production databases, and the benefits that each evolutionary step provides.
Technologies
Each of these technologies follows an evolutionary path – full physical clones, thin provisioned clones, and database virtualization all offer the ability to create multiple copies of production databases, but they differ in implementation feasibility and automation.
Full Physical Clone
Full physical clones are the classic way to copy production databases into non-production environments. Full copies are just that – an entirely new instance of the database, separate from the production system. These clones are time consuming, resource intensive, and space consuming. On average, the time to create a full physical clone is about two weeks from initial request to usable database instance. To DBAs the core issue is clear – significant work and time is invested in making exact copies, much of it wasted since the majority of the data blocks are identical and will remain so. Further, the work done by DBAs to create the database copies is immediately out of date, and there is no easy management solution for maintaining, refreshing, or modifying these clones. Database copies can be created, but significant effort is required from the DBA, development, and QA teams to work around the limitations of the system.
Thin Provisioned Cloning
Thin provisioned cloning was the first technology to address the issue of storing large numbers of identical data blocks. Thin provisioning introduces a new layer over a copy of the source database. Each clone has a separate thin layer where it maintains its changes to the central copy, which itself remains unchanged. Because each clone’s thin layer is visible only to that clone, each has the appearance of being a full physical copy of the source database. Thin provisioning can eliminate much of the space demand of database copies, reducing the associated storage cost of non-production database copies.
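The thin layer described above behaves like per-clone copy-on-write. A toy sketch – the `ThinClone` class is a hypothetical illustration, not any vendor’s implementation – where reads fall through to the shared base image unless the clone has privately overwritten the block, and writes never modify the base:

```python
class ThinClone:
    """Each clone keeps only its own changed blocks in a private layer;
    unchanged blocks are read from the shared base image (copy-on-write)."""

    def __init__(self, base):
        self.base = base    # shared dict: block number -> block contents
        self.delta = {}     # private thin layer, visible to this clone only

    def read(self, blockno):
        # Prefer this clone's private version of the block, if one exists.
        return self.delta.get(blockno, self.base[blockno])

    def write(self, blockno, data):
        self.delta[blockno] = data  # the shared base image is never modified

# Two clones share one base image; only their changes consume new space.
base = {0: b"hdr", 1: b"aaa", 2: b"bbb"}
c1, c2 = ThinClone(base), ThinClone(base)
c1.write(1, b"xxx")
print(c1.read(1), c2.read(1))  # b'xxx' b'aaa' -- clones diverge, base shared
```

Storage cost grows with the number of *changed* blocks per clone, not with the number of clones, which is why many clones can fit in roughly the space of one copy.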
There are three categories of thin provisioning technology:
- Single point in time
- Multiple but limited points in time
- Multiple points in time in a rolling window
Single Point in Time
Single point in time thin provision cloning is the simplest thin provisioning technology, but the least flexible. It takes a full database backup at a point in time and allows multiple clones to open that backup. The technical innovation is that each clone writes its changes to a private area; the clones share the majority of data blocks, but the private change area makes each appear to be a full-size read/write copy of the database. The downside is that this technology does not account for database refreshes – any time a clone requires a newer version of the source database, an entire new copy of the source database has to be made. Further, it is only appropriate for situations in which high performance is not a key requirement, as it is notably slower than its physical counterparts. Finally, significant scripting is required and documentation is limited, meaning that the onus is on the DBA to manage and own the environment.
Oracle first offered this technology in an obscure feature called CloneDB in Oracle 11.2.0.2; however, it has performance and management overhead even in limited use and is not appropriate for enterprise-level development.
Multiple limited clone versions
To address the issue of database refreshes, EMC and Fujitsu offer thin provisioned cloning technology which allows sharing data blocks across multiple versions of the source database. This technology is based on file systems that can take point-in-time snapshots. A point-in-time snapshot can be cloned to provide a private read/write version of that file system. As changes come into the file system from the source database, new file system snapshots and clones can be created, allowing multiple point-in-time database views.
Unfortunately, after a limited number of snapshots (generally around ten), the system has to be rebuilt, requiring a complete new copy of the original database. In addition to periodic rebuilds, these systems also incur major performance hits – so serious with VMware’s Data Director linked clone technology that VMware recommends against using it for Oracle databases.
Continuous data versions
NetApp offers the ability to not only snapshot and then create clones from the snapshots but also drop any blocks from the original snapshot that are no longer needed, allowing a continuous rolling window of snapshots from the source database. Custom retention windows can be set up – new data blocks are constantly added and old data blocks dropped. As an example, if a two week retention window was desired, the system could snapshot the source database once a day and clones could share snapshots anywhere in that two week window. Blocks particular to snapshots falling outside of the two week time window could be dropped, thus allowing the system to run continuously without requiring rebuilds.
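The pruning rule behind such a rolling window – a block can be freed only when every snapshot that references it has aged out of the window – can be sketched as set arithmetic. The function below is an illustrative model, not NetApp’s implementation:

```python
def prunable_blocks(snapshots, retained):
    """Return the block ids that can be freed from a rolling retention
    window: blocks referenced only by snapshots that have aged out.

    snapshots: dict mapping snapshot id -> set of block ids it references
    retained:  snapshot ids still inside the retention window
    """
    live = set().union(*(snapshots[s] for s in retained))   # still reachable
    every = set().union(*snapshots.values())                # all stored blocks
    return every - live

# Daily snapshots sharing blocks; "day1" has aged out of the window.
snaps = {"day1": {1, 2}, "day2": {1, 3}, "day3": {3, 4}}
print(prunable_blocks(snaps, ["day2", "day3"]))  # {2}: block 1 is still shared
```

Because shared blocks stay pinned until their last referencing snapshot expires, the window rolls forward continuously without ever requiring a rebuild.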
While this offers quite a bit of functionality not possible with other thin provisioned clones, there are a number of serious downsides that prevent most enterprises from deploying it.
- Hardware Lock-in: This functionality requires buying specialized NetApp hardware with its own unique administration. Administrators using it are also required to write custom scripts to set up the system.
- LUN-Level Snapshots: NetApp works on LUNs, taking snapshots and making clones of the full LUN rather than of the datafiles. Because it works at the LUN level, it cannot detect corruption in the datafiles that would otherwise be caught when taking backups through the RMAN APIs.
- Custom Scripting: Custom scripting is required to make the original database backup and keep the backup updated with changes from the source database.
- Clone Creation: NetApp doesn’t supply any functionality to actually provision the clone databases, and clones can only be made from snapshots.
- Clone Flexibility: As clones can only be made from snapshots, a number of key use cases cannot be accomplished – clones can’t be created from an arbitrary timestamp, can’t be rolled back, and can’t be branched.
Oracle’s ZFS Storage Appliance has a similar capability to NetApp’s but requires even more scripting and manual administration, and thus has seen little to no uptake.
Database Virtualization
Thin provisioned cloning has been around for almost two decades, yet it has seen very limited uptake due to the need for specialized hardware, expert knowledge, and scripting. These barriers to entry and the limited set of use cases have kept thin provisioned cloning an underutilized technology. Database virtualization was invented to take the benefits of thin provisioned clones, couple them with simple management, and provide significantly more data agility through on-demand database access.
Database virtualization takes the core technology of thin provisioned cloning and extends it, providing the ability to:
- Automate initial source database backup, snapshots, and redo log collection.
- Automate data retention, clearing out data older than the designated time window
- Automate provisioning a clone from any SCN or second
- Provision clones from multiple sources to the same point in time
- Enable cloning of clones, branching clones, and rolling back clones
- Efficiently store all the changes from source database
- Run continually and automatically
- End user virtual database provisioning
- Easy enough to be run by non-DBA, non-sysadmin
Database virtualization technology allows a virtual database to be made in minutes, taking up almost no space, since the virtual database initially creates only new control files, redo log files, and a new temporary tablespace; all the rest of the data is shared. This allows the following advantages:
- Databases on demand
- Faster development
- Higher quality testing
- Hardware reduction
Databases on Demand
Virtual databases can be self provisioned in a matter of minutes, eliminating significant bureaucracy. Provisioning full physical copies can take weeks; virtual databases take minutes, eliminating both the time to copy the production database’s data and the time spent requesting, discussing, processing, and allocating resources. When a developer needs a clone, they typically have to ask their manager, the DBA, the storage admin, and so on; the managerial decision making, administrative tasks, and coordination meetings often take weeks. With database virtualization, all of that overhead can be eliminated – the developer can provision their own virtual database in minutes, with no storage overhead.
Faster development
As the resource and operational cost of providing database copies are eliminated with database virtualization, teams of developers can go from sharing one full physical production copy to each having their own private copy. With a private copy of the database, a developer can change schema and metadata as fast as they want instead of waiting days or weeks of review time to check in changes to a shared development database.
Higher quality testing
With as many virtual databases as needed, QA teams no longer have to rely on a single full copy of the source database on which to run tests. With a single database, QA teams often have to stop to refresh and make sure tests don’t overlap. With database virtualization, QA can run many tests concurrently, and the virtual databases can be refreshed back to their original state in minutes, allowing immediate replay of test suites, captured workloads, and patch applications.
Hardware reduction
Database virtualization can dramatically reduce the amount of storage required for database copies. As the majority of the data blocks remain identical across copies, database virtualization stores only the changed blocks, and even those can be compressed.
Database virtualization not only saves disk space but can also save RAM. RAM on the virtual database hosts can be minimized because virtual databases share the same data files and can share the same blocks in the file system cache. No longer does each copy require private memory to cache the data.
Database Virtualization Examples
Delphix example
The Delphix Server is a software stack that implements database virtualization using the Delphix file system (DxFS). It automates the process of database virtualization and management and doesn’t require any specialized hardware – only an x86 box to run the software and access to LUNs with roughly the same amount of disk space as the database to be virtualized. The source database is backed up onto the Delphix virtual appliance via automated RMAN APIs and the data is compressed. Delphix automates syncing of the local copy with changes from production and the freeing of data blocks that fall outside the retention window, and it handles the provisioning of virtual databases. A virtual database can be provisioned from any SCN or second in time within the retention window (typically two weeks).
Oracle Example
Oracle is enabling database virtualization in Oracle 12c with the Snapshot Manager Utility (SMU), a separately licensed software utility. The utility runs on the Oracle ZFS Storage Appliance, where the source database’s data files are stored.
Summary
Thin provisioned cloning has been around for nearly two decades but has not been widely adopted due to the high barriers to entry. These barriers – specialized hardware, recurring system rebuilds, specialized storage administrators, and custom scripting – have made physical clones the de facto solution. Lacking a more attractive option, companies have opted to create full or partial physical clones and deal with the ramifications of incomplete datasets, refresh difficulty, and contention over concurrent use. With database virtualization, the hardware and management barriers have finally been eliminated, allowing enterprises to achieve significant database agility.
Appendix
Here is a list of the technologies that can be used to create thin provisioned clones:
- EMC – system rebuild issues after a few snapshots, hardware lock-in, requires advanced scripting, performance issues
- NetApp – hardware lock-in, size limitations, requires advanced scripting
- Clone DB (Oracle) – single version of source database only, performance issues, requires advanced scripting
- ZFS Storage Appliance (Oracle) – hardware lock-in, requires advanced scripting
- Data Director (VMware) – system rebuild issues, performance issues, x86 databases only, officially not supported for thin provisioned cloning of Oracle databases
- Oracle 12c Snapshot Manager Utility (SMU) – hardware lock-in, requires that the source database have its datafiles located on the Oracle ZFS Storage Appliance
- Delphix – automated solution for both administrators and end users. Delphix works for Oracle 9, 10, and 11 on RAC, Standard Edition, and Enterprise Edition. Fully automated, with time retention windows and end user self service provisioning. It also supports SQL Server databases. With Delphix there are no size restrictions, and clones and snapshots are unlimited. Snapshots can even be taken of snapshots, creating branched versions of source databases.
References
- CloneDB
– http://www.oracle-base.com/articles/11g/clonedb-11gr2.php
– http://oracleprof.blogspot.ie/
- ZFS
– http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
- ZFS Appliance
– http://www.oracle.com/technetwork/articles/systems-hardware-architecture/cloning-solution-353626.pdf
- Data Director
– http://www.virtuallyghetto.com/2012/04/scripts-to-extract-vcloud-director.html
– http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1015180
– http://myvirtualcloud.net/?p=1222 (linked clones)
- EMC
– https://community.emc.com/servlet/JiveServlet/previewBody/11789-102-1-45992/h8728-snapsure-oracle-dnfs-wp.pdf
- NetApp
– http://media.netapp.com/documents/snapmanager-oracle.pdf
– https://communities.netapp.com/docs/DOC-10323 (FlexClone)
– http://blog.thestoragearchitect.com/2010/08/02/netapp-the-inflexibility-of-flexvols/
- Delphix
– http://delphix.com
* Charles Garry, Oracle keynote at NYOUG in Dec 2012