This article can also be found in the Premium Editorial Download "Storage magazine: Is storage virtualization ready for the masses?."
A proactive approach to laying out data enables the magic of mirroring for continuance and replication.
As discussed in the April issue (see "Integration"), implementing business continuance volumes (BCVs) can be a tricky proposition. Data must be mapped from the application to physical storage if mirrors are to be useful for backup, testing or decision support. Storage managers need to plan their disk layouts to facilitate mirroring and sharing.
Storage space is abstracted at many levels within the data path, with little visibility from layer to layer. Disks are combined into RAID sets. Those sets are then presented to servers as logical unit numbers (LUNs). The LUNs are combined into volume groups, which are carved into volumes. Volumes host filesystems, and filesystems contain files, which contain data. Users want data - not LUNs - but arrays can only act on LUNs. While storage vendors are working on automated mapping of content to physical devices, such products are few and immature.
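The chain of abstractions above can be made concrete with a small model. The sketch below is purely illustrative - every name (lun0, vg_db, /oradata and so on) is invented - but it shows why answering "which LUNs hold this file?" requires walking every layer of the stack.

```python
# Hypothetical model of the abstraction layers described above: disks form
# RAID sets, RAID sets are presented as LUNs, LUNs join volume groups,
# volume groups are carved into volumes, and volumes host filesystems.
# All names here are invented for illustration.

# Which LUNs back each volume group.
volume_groups = {
    "vg_db": ["lun0", "lun1"],
    "vg_misc": ["lun2"],
}

# Which volume group each volume was carved from.
volumes = {
    "vol_oradata": "vg_db",
    "vol_logs": "vg_misc",
}

# Which volume each filesystem lives on.
filesystems = {
    "/oradata": "vol_oradata",
    "/logs": "vol_logs",
}

def luns_for_file(path):
    """Walk the stack downward: file -> filesystem -> volume -> LUNs.

    An array can only mirror whole LUNs, so this is the set the array
    would have to copy to capture the file.
    """
    mount = max((m for m in filesystems if path.startswith(m)), key=len)
    vg = volumes[filesystems[mount]]
    return volume_groups[vg]

print(luns_for_file("/oradata/system01.dbf"))  # -> ['lun0', 'lun1']
```

In a real shop this mapping lives partly in the volume manager and partly in the array, which is exactly why automated content-to-device mapping is hard.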
The BCV layout problem stems from this basic fact: Array-based mirroring works at a level far below the application, with no real understanding of the stored data's meaning. Administrators want to protect and share filesystems, but array-based mirroring operates on disks or LUNs instead. The trick to configuring BCVs is to lay out storage so the mirror images of the disks will contain entire, usable filesystems.
A poor configuration can make BCV use impossible. With a single filesystem spread across a number of disks, every one of those disks would need to be mirrored, or the copy would be useless. However, when all the disks are mirrored, everything on them is mirrored - unnecessary at best, and problematic at worst. In an extreme case, a customer could use a volume manager to spread parts of a volume across every disk visible to the server, even across multiple arrays. If one of the arrays didn't support mirroring, this would eliminate the possibility of BCV use altogether for this volume. Even if all the arrays do support mirroring, it would be difficult to create a scripted procedure to synchronize or split all the volumes when needed. Storage companies are working diligently on software, such as EMC's Replication Manager, to automate the process of using BCVs, but no software can make an impossible configuration functional.
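The two failure modes above - a LUN on an array that can't mirror, and a LUN shared with an unrelated filesystem - lend themselves to a simple feasibility check. This is a minimal sketch under assumed inputs; the filesystem-to-LUN and LUN-to-array maps are hypothetical and would come from the volume manager and array tools in practice.

```python
# Illustrative feasibility check for the problems described above. A
# filesystem is a clean BCV candidate only if (a) every LUN it touches
# lives on an array that supports mirroring, and (b) none of those LUNs
# is shared with another filesystem, since mirroring a shared LUN drags
# unrelated data along. All names here are hypothetical.

fs_luns = {
    "/oradata": {"lun0", "lun1"},
    "/logs": {"lun1", "lun2"},      # shares lun1 with /oradata
}
lun_array = {"lun0": "arrayA", "lun1": "arrayA", "lun2": "arrayB"}
mirror_capable = {"arrayA"}          # arrayB has no mirroring support

def bcv_problems(fs):
    """Return a list of reasons this filesystem can't be a clean BCV."""
    problems = []
    for lun in fs_luns[fs]:
        if lun_array[lun] not in mirror_capable:
            problems.append(f"{lun} is on non-mirroring {lun_array[lun]}")
        for other, luns in fs_luns.items():
            if other != fs and lun in luns:
                problems.append(f"{lun} is shared with {other}")
    return problems

print(bcv_problems("/oradata"))  # -> ['lun1 is shared with /logs']
print(bcv_problems("/logs"))     # a shared LUN plus a LUN on arrayB
```

An empty list means the mirror would be complete and self-contained - exactly the property the layout planning below is meant to guarantee.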
While BCV use requires that filesystems be consolidated on a few disks, traditional performance tuning wisdom holds that they be spread across as many spindles as possible. These contradictory demands are often apparent in design meetings, where storage and database administrators argue for opposite extremes of layout. Storage administrators should stick to their guns, since the large caches and hardware RAID found in modern arrays obviate the need for extreme performance tuning.
Similarly, ad hoc volume management processes can lead to parts of a volume sprawling across many disks. The ease of adding new disks to a volume group and resizing logical volumes and filesystems can make this a tempting practice. Imagine the jumbled mess after a few years.
Laying out volumes to enable BCVs
Option 1 represents the traditional method of spreading volumes across as many disks as possible. In this case, a mirror of the disks holding the database would also contain the control and log files. There is simply no way to mirror the database separately from the rest.
Option 2 is obviously no good from a performance or space perspective. Here, each volume has its own disk, making BCV operations simple. But control files are tiny, and a dedicated LUN hardly makes sense below 1GB.
Option 3 is the right choice. The database gets a pair of disks to itself, and everything else shares another pair. A BCV copy of the database would be separate from the rest, and the database enjoys more space than the control filesystem.
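The three options can be rendered as a toy model and tested against the one property that matters for BCVs: does mirroring the database's disks copy anything else? The disk and volume names below are invented for illustration; note that Option 2 also passes the test - it fails on space and performance grounds, not on BCV grounds.

```python
# A toy rendering of the three layout options discussed above, checking
# whether a mirror of the database's disks would stay free of the
# control and log files. All names are hypothetical.

options = {
    "option1": {  # everything striped across all four disks
        "db":      {"d1", "d2", "d3", "d4"},
        "control": {"d1", "d2", "d3", "d4"},
        "logs":    {"d1", "d2", "d3", "d4"},
    },
    "option2": {  # one disk per volume: clean, but wasteful
        "db": {"d1"}, "control": {"d2"}, "logs": {"d3"},
    },
    "option3": {  # database isolated on its own pair of disks
        "db": {"d1", "d2"}, "control": {"d3"}, "logs": {"d3", "d4"},
    },
}

def db_mirror_is_clean(layout):
    """True if mirroring the database's disks copies nothing else."""
    db_disks = layout["db"]
    return all(not (db_disks & disks)
               for name, disks in layout.items() if name != "db")

for name, layout in options.items():
    print(name, db_mirror_is_clean(layout))
# option1 False, option2 True, option3 True
```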
Rather than mirroring at the array level, a volume manager can be leveraged to create BCVs of logical volumes directly. This approach eliminates the challenge of mapping volumes to LUNs, and also seems to simplify layout planning. However, for volume manager-based mirrors to be considered BCVs, they need to be exportable, so they can be shared with other systems. This is trickier than it may seem.
Volume managers such as Veritas' Volume Manager allow mirrors of logical volumes to be created in a disk group, and then split and exported into a separate disk group. This mirror disk group can then be imported on another server, as long as the second server can see all the LUNs the mirror was placed on. The benefit of using a volume manager to make the copy is that it has complete knowledge of the logical volumes, ensuring that the mirror is a complete and correct copy of the original. The disadvantage is that the creation of the mirror sends data flooding over the server bus, potentially impacting performance.
Creating mirrors within a single server is simple - exporting them as BCVs is the challenge. After the copy is made, the LUNs containing the mirror volumes must be released by the source server and turned over to the target. As shown in "Server-based mirroring," this is impossible if the source and mirror reside on the same LUN. Therefore, sufficient extra LUNs must be reserved exclusively for BCV use, just as with array-based BCV solutions.
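The export constraint just described comes down to LUN granularity: ownership of a LUN moves as a whole, so a mirror can only be handed to another server if its LUNs carry nothing belonging to the source. The sketch below models that rule with invented volume and LUN names.

```python
# Sketch of the export constraint described above: releasing a LUN to
# the target server releases everything on it, so the mirror's LUNs
# must be disjoint from the source volume's LUNs. Names are invented.

# Which LUNs hold the data of each volume.
plex_luns = {
    "vol_db":        {"lun0"},          # original data
    "vol_db-mirror": {"lun0", "lun1"},  # mirror partly shares lun0 -- bad
}

def exportable(mirror, source):
    """True if the mirror's LUNs are disjoint from the source's."""
    return not (plex_luns[mirror] & plex_luns[source])

print(exportable("vol_db-mirror", "vol_db"))  # False: lun0 is shared

# Reserving lun1 exclusively for the mirror fixes the layout.
plex_luns["vol_db-mirror"] = {"lun1"}
print(exportable("vol_db-mirror", "vol_db"))  # True
```

This is why server-based BCVs still demand dedicated spare LUNs, just as array-based ones do.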
For servers with complex volume layouts, using a volume manager for mirroring will be easier to configure and maintain than an array-based solution. Servers with just a few drives, or those with strict performance requirements, would benefit from array-based BCV solutions. Furthermore, both array- and server-based BCVs require the same planning and care as with all other aspects of storage management. A complete understanding of the entire storage environment - from application to disk - makes the magic of mirroring possible.
This was first published in June 2002