Get top performance from database storage

Taking your SAN beyond backup to run production databases requires the right layout, good practices and better integration. The benefits: high performance, floor-space savings and streamlined operations.

This article can also be found in the Premium Editorial Download: Storage magazine, "Optimizing your enterprise database storage."

One of the driving forces behind the move to storage area networks (SANs) has been the increasing importance and size of enterprise databases. As databases grew, administrators found that the only way to efficiently manage and expand database storage was to tap into the flexibility of SANs. However, with the expanded use of databases on storage networks, storage administrators must now deal with new performance, backup, restore and management issues, not to mention how to get along with the database administrators (DBAs). Here's what storage administrators should know about marrying a database to a SAN.

Oracle's Storage Push
In a sign that database vendors are increasingly aware of how important the storage behind their databases is, Oracle recently announced that it is launching a storage initiative, which seeks to combine the company's various individual storage initiatives into a strategic push to focus on how Oracle runs on storage. Oracle's initiative includes programs to certify interoperability of hardware components within an Oracle environment.

According to Allen Hoke, technical director, system platforms division of Oracle, "This program focuses on storage technology at a high level vs. our former focus on specific areas such as backup or particular storage hardware or storage management technology." In addition, Hoke says, "In order to provide a good customer experience, we are teaming with our key OS, server hardware and storage partners to provide a remote support strategy, aligned between us and our partners."

At this time, Oracle's partners include EMC, HP, Hitachi, Network Appliance, Legato, Sun, and Veritas.

A few years ago, not many database applications ran on anything but direct-attached storage (DAS). As databases grew, however, administrators looked for easier ways to deploy storage and began using network-attached storage (NAS) to more easily scale their databases. With the advent of SANs, administrators are now finding that storage networks can provide even better performance for the most demanding transaction-oriented, high volume databases.

Today, database applications are starting to move from DAS to SANs. The reasons: increased ease and speed of backup, better scalability, as well as enhanced reliability and performance. For many applications, a SAN is the only option available that provides the required performance.

Harold Durnford, a storage administrator at Royal Bank of Canada in Toronto, says, "We've had Oracle, Sybase, UDB, and Informix on a SAN for quite some time. The reason we looked at storage area networks was the challenge of sharing large storage subsystems." In addition, he says, "It wasn't just a distance limitation, but also real estate. With SCSI, we found we quickly ran out of SCSI ports on our arrays before we ran out of storage. We used the SAN to address fan-out."

Marty Ward, vice president of product management at Veritas, has also seen this, and says, "Database users are very focused on scaling up and scaling out, which you can't do with dedicated storage."

Bob Rogers, chief storage architect at BMC, says, "Consistent level of performance is driving people to storage area networks, as are better quality backup and recovery tools. Those are all very strong motivators to go to a SAN environment, which is why very large customers have embraced SANs and put databases on them quite early."

Jim Booth, director of systems engineering at CreekPath, agrees, saying, "People are moving to SANs primarily to get the performance. You can certainly implement a database on either NAS or a SAN. However, if you're doing it on a SAN, you get consistent throughput. Though it might be harder to manage, a SAN provides better performance and availability."

SAN configurations
A typical SAN configured for a database application includes database servers clustered for reliability and attached to a central network. At a minimum, two separate database servers are clustered together for redundancy, with dual redundant Fibre Channel (FC) fabrics connected to redundant RAID controllers. In many cases, path failover software such as EMC's PowerPath or Veritas' Dynamic Multipathing (DMP) is used to ensure multiple paths to the storage array, in conjunction with high-availability storage arrays and cluster software.
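The failover behavior described above can be sketched in a few lines. This is a conceptual model only, assuming nothing about how PowerPath or DMP are actually implemented: a hypothetical `MultipathDevice` tries each known path in turn and marks dead ones as failed, so I/O keeps flowing as long as one path to the array survives.

```python
class MultipathDevice:
    """Toy model of host-side path failover (the role products like
    PowerPath or DMP play): issue I/O down a healthy path, and fail
    over to a surviving path when the active one errors out."""

    def __init__(self, paths):
        self.paths = list(paths)   # ordered list of path names
        self.failed = set()        # paths known to be dead

    def submit_io(self, block):
        for path in self.paths:
            if path in self.failed:
                continue
            try:
                return self._send(path, block)
            except IOError:
                self.failed.add(path)  # mark the path dead, try the next
        raise IOError("all paths to the array have failed")

    def _send(self, path, block):
        # Stand-in for a real HBA transfer; here it always succeeds.
        return (path, block)
```

A real driver would also probe failed paths periodically and restore them, which is what lets administrators service one fabric while the other carries the load.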

Michael Leo, a database administrator at Minneapolis, MN-based Caribou Lake Software, a consulting firm that helps companies manage their systems, works with a large medical devices manufacturer replacing legacy Alpha VMS systems with Solaris machines connected to an EMC Symmetrix SAN. "Everything is striped and mirrored [1+0]. We are using the SAN for development, production online transaction processing [OLTP] and production reporting, a data warehouse of the production database used for reporting. Each database is in the 100GB to 200GB range. We have two copies of the OLTP database; one is a replicated hot standby," says Leo. The systems run Computer Associates' Ingres database.

"To back up the database, we are using the SAN for disk-to-disk backups. We then use Veritas' NetBackup and a separate backup LAN to backup the SAN. To manage the disk space on the Sun, we use Veritas in combination with EMC's SAN management tools," says Leo.

Best Practices
To get the best performance from your SAN, vendors and administrators recommend:
Use RAID 0+1 for your database volumes
Only use RAID 5 for less critical backup volumes
Separate tables and logs
Use the tools available to watch for storage hotspots
Keep your storage cache running at optimal performance by segregating database traffic from other types of storage traffic
Make sure storage and database administrators work together closely when setting up your database volumes

Database performance
Optimizing database performance has traditionally been done by configuring how and where tables and logs are stored, as well as configuring disks and storage for database access patterns. Database data breaks down into two principal types: tables, the actual data stored in the database, and logs, records of the transactions against the database. Transaction logs are critical for recovering a database if there's a failure and are also used for replicating databases. Many databases can also use their own native file systems to maximize performance, maintain control over tables and avoid potential performance penalties imposed by file system buffering. Aside from fast storage, CPU and memory, administrators have paid a lot of attention to balancing speed vs. redundancy of data.
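One practical consequence of the table/log split is verifying that the two kinds of files don't end up on the same device. A minimal sketch using only the standard library; note the caveat, relevant to the virtualization discussion below, that on SAN storage `st_dev` identifies the host-visible block device, not the physical spindle behind it:

```python
import os

def share_a_device(path_a, path_b):
    """Return True if two files live on the same OS-visible device,
    which is what you want to avoid for table files vs. transaction
    logs. st_dev identifies the device a file resides on; behind a
    virtualized array, one device may still span many spindles."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev
```

A DBA could run a check like this over data files and redo logs after provisioning, before the volumes go into production.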

Database and storage administrators have carefully selected RAID configurations, for example using RAID 0+1 (both mirroring and striping) to gain the greatest read performance from their hardware without losing redundancy. While RAID 5 provides redundancy at a somewhat lower cost, most database vendors today recommend RAID 0+1 for tables as well as logs, and only recommend RAID 5 for less performance-critical backup volumes. Caribou Lake's Leo says, "Performance is highly dependent upon how you configure the SAN. We have seen very good performance for databases with RAID 1+0 type configurations. We have also seen decent performance for RAID 5, unless the RAID 5 is in rebuild. If the RAID 5 is in rebuild, write performance is so bad, the system may as well be down."
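The trade-off behind these recommendations can be approximated with the classic RAID write-penalty arithmetic. A rough model, with hypothetical spindle counts and per-spindle rates: a mirrored write costs two disk I/Os, while a RAID 5 write costs four (read data, read parity, write data, write parity), so RAID 5 falls behind as the write fraction grows.

```python
def effective_iops(spindles, iops_per_spindle, read_fraction, write_penalty):
    """Rough effective random-I/O capacity of a RAID set.

    Every host write costs `write_penalty` back-end disk I/Os:
    2 for RAID 0+1 (write both mirrors), 4 for RAID 5
    (read data, read parity, write data, write parity).
    Reads cost 1 either way.
    """
    raw = spindles * iops_per_spindle
    cost_per_host_io = read_fraction * 1 + (1 - read_fraction) * write_penalty
    return raw / cost_per_host_io

# Hypothetical 8-spindle set of 120-IOPS drives, 100% random writes:
# RAID 0+1 sustains twice the host writes of RAID 5 (480 vs. 240),
# while pure reads are identical (960) for both levels.
```

This simple model also says nothing about rebuild, which, as Leo notes, is where RAID 5 write performance truly collapses.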

To further optimize access times and data throughput, database administrators have to also make sure that tables and logs never share the same spindles on a disk. John Eisenschmidt, a DBA at the American Association for the Advancement of Science, Washington, D.C., says for his last install of Oracle, "We made sure not to put the index tablespaces with the data tablespaces to improve read performance, and put the archive logs and redo logs in different places to balance write operations." Eisenschmidt manages about 2.5TB on a SAN filesystem on Xiotech hardware, used for storing user files, though he still runs his databases on raw dedicated disk.

However, with the advent of SANs and virtualized storage, administrators no longer have that level of control of data placement. According to Bob Rogers, chief technology officer of BMC Software, "Most DBAs are absolutely convinced that file placement is an absolutely critical issue, however, most disks today are virtualized-and users don't have a clue where things are physically located."

Luckily, the speed and manageability of storage have improved the situation, despite the loss in control. Rogers says, "Today, you can get a much more consistent level of performance by going to very large arrays such as those from EMC, IBM, and HDS than you could going to direct-attach disk."

Enter virtualization
Despite virtualization's steady march into storage systems, most vendors and users think that some control is still needed to fully optimize database access. Oracle's Hoke says, "One of the big bad things that can happen with virtualization is that you can have two tables related in your business application, with a high level of traffic in certain rows of those tables."

One Way to Dramatically Increase Database Performance
Question: "Have you come across some hard numbers and performance figures for the use of RAID (at any level) in the database environment?"

Chris Poelker: Database application performance shouldn't only be looked at from an I/O standpoint, but first and foremost from a design perspective. For example, at one customer site, they were using SQL server in a three-tier Web application, where the Web pages were built dynamically from the database. The design used active server page scripts in the application tier to query the database for results and the results were then sent to the Web server to build the dynamic pages.

By moving to SQL stored procedures at the database level, tuning the indexing methods and making a slight schema redesign, performance improved by over 800%, all without even touching the back-end I/O. Never throw hardware at a poor design; the return on investment is never as good as taking the time to rethink the original design.

Here are some things to do for good I/O performance:

1) Use a lot of spindles. If you look at the specifications of designs built to achieve maximum database performance, you'll notice they use hundreds of drives. By a lot of spindles, I don't mean large RAID sets. You should concatenate many smaller RAID sets into a single LUN, either by using host software or LUN virtualization within the storage hardware, if it has that capability.
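Concatenation of this kind is just address arithmetic. A toy sketch of how a volume manager might map a logical block on a concatenated LUN back to one of the underlying RAID sets (the segment sizes are purely illustrative):

```python
def locate(block, segment_sizes):
    """Map a logical block number on a concatenated LUN to the
    underlying RAID set (segment) holding it: walk the segments in
    order, subtracting each one's size until the block falls inside
    one. Returns (segment_index, offset_within_segment)."""
    offset = block
    for index, size in enumerate(segment_sizes):
        if offset < size:
            return index, offset
        offset -= size
    raise ValueError("block beyond end of LUN")
```

Striping the segments instead of concatenating them would spread hot regions more evenly, which is why most host volume managers offer both layouts.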

2) Use low latency paths to storage. Fibre Channel uses a packet-based network, which allows for a full duplex backend. On a SCSI bus, only one thing can happen at a time. Using dual-pathed Fibre disks, you can have a read command on one side and a write command on the other side of the disk being queued at the same time.

3) Spread the I/O load as much as possible. Use many HBAs in the server that are load balancing your I/O across all the paths. Tools like Veritas' Volume Manager or Solstice Disk Suite can help do this by creating a single LUN from many spindles on each path. By using 2Gb paths and four HBAs, you can get close to 800MB/sec to the LUN.
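The 800MB/sec figure above is straightforward arithmetic: a 2Gb Fibre Channel path carries roughly 200MB/sec of usable payload once 8b/10b encoding overhead is accounted for, and load balancing sums the paths. A back-of-envelope helper; the ~100MB/sec-per-signalled-Gbit rule of thumb is an assumption, not a measured figure:

```python
def aggregate_bandwidth_mb_s(hba_count, link_gbit, usable_mb_per_gbit=100):
    """Fan-out bandwidth estimate: each HBA path contributes roughly
    100 MB/s of payload per signalled Gbit/s (8b/10b line encoding
    consumes the rest), and load balancing sums all the paths."""
    return hba_count * link_gbit * usable_mb_per_gbit

# Four HBAs on 2Gb links: 4 * 2 * 100 = 800 MB/s, matching the text.
```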

4) Use the proper RAID level for the log and database. Best practice here is that log files should be on RAID 0+1 (RAID 10) and databases on RAID 5 (RAID 0+1 can be used on the database for better performance if your budget allows for it).

Performance. Today's hard drives can achieve a sustained throughput of between 30MB/s and 50MB/s and 120 to 140 IOPS (I/Os per second) per spindle. The performance depends on the drive electronics, onboard cache size, rotation speed (15,000 RPM is better than 7,200 RPM, of course) and maximum seek latency. If you partition your RAID set into small partitions and locate your data on the outer cylinders, your performance will go up substantially, because doing so reduces the seek time for a particular partition. Now just add up the number of spindles to figure out your maximum IOPS: figure on 1,680 IOPS using 14 spindles on one HBA, or 3,360 IOPS using two HBAs to 28 disks doing 64,000 sequential I/Os (random I/O will be less). Changing the block size of the transfer will either increase MB per second (throughput) or IOPS: smaller blocks make IOPS go up, larger blocks make throughput go up.
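Poelker's spindle-count arithmetic, and the block-size trade-off at the end of the paragraph, can be checked with two one-line helpers. The 120 IOPS per-spindle figure comes from the text; the block sizes are illustrative:

```python
def raid_set_iops(spindles, iops_per_spindle=120):
    """Sum per-spindle random-I/O rates, as the article does:
    14 spindles at ~120 IOPS each is about 1,680 IOPS."""
    return spindles * iops_per_spindle

def throughput_mb_s(iops, block_kb):
    """The block-size trade-off: for a fixed IOPS budget, bigger
    blocks raise MB/s and smaller blocks raise I/O rate."""
    return iops * block_kb / 1024

# 1,680 IOPS at 64KB blocks is ~105 MB/s; the same spindles at 8KB
# blocks move far fewer megabytes but serve OLTP-style I/O better.
```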

These numbers are for direct I/O, meaning no cache in front of the drives. Adding cache can increase performance dramatically. I have seen 33,000 IOPS on a single LUN during 100% cache hit. So, if you want the best I/O performance available, use a solid-state disk.

Christopher Poelker is a storage architect at Hitachi Data Systems. He answers readers' questions on "Ask the Expert" at SearchStorage.com.

According to Hoke, a good database administrator will try to spread those tables across multiple devices, so he doesn't create hotspots, and also for availability. However, with virtualization, those two tables may be inadvertently placed on the same disk, because virtualization masks physical locations. "Virtualization is great for provisioning storage because it's easy to move data and add disks. But with a database, because of performance issues, there are potential drawbacks," Hoke says.

CreekPath's Booth agrees: "File placement is still absolutely important with a SAN, although disks have gotten faster. It still comes down to laying down data on a spinning disk, and not putting redo logs and indexes on the same physical drives. That's why it's important for tools to see all the way down to the spindle level, so that when they're laying down data it's not stepping on its own toes."

Marty Ward, director of product marketing at Veritas, concurs, although he doesn't think administrators need to be directly concerned with where data is: "DBAs don't need to be concerned about which spindle the data lies on. We take care of that for them. Our logical volumes allow them to move data around on-the-fly to adjust hotspots, even while running." Other vendors are also announcing tools in this area.

Durnford sees things the same way. "Using JBOD, you're much more sensitive to placements [hotspots]. However, using storage arrays, we use their technology to move storage around while running to eliminate hotspots, so that the server doesn't really need to know the physical storage location," he says. Durnford uses built-in tools on his Hitachi, IBM, and EMC arrays to deal with hotspots on his volumes.

SAN issues
"In the old days of NFS [network file systems], due to lack of availability capabilities for Oracle, our customers saw storage corruption," says Oracle's Hoke. Because of those issues, Oracle's validation program for network-attached storage arose out of concern about the nature of NFS," he says.

In a SAN, storage just looks like a SCSI disk, which eliminates many of the headaches that came with putting databases on network-attached storage, since the interface is identical to that of SCSI. Unlike with NAS, Oracle has left it to its storage partners to qualify HBAs, switches and front-end storage FC adapters. Hoke says that with a SAN, "We have not identified anything needing validation."

However, it still takes some work to get SANs performing right for databases. DBA Eisenschmidt, despite running a SAN for his applications and file systems, is skeptical of using SANs for his database storage. "Our SAN, like many I've seen, is optimized for storage capacity and file serving. While Oracle uses Xiotech's Magnitude in their test labs, I was pretty unhappy with the transaction performance we were able to get out of it."

CreekPath's Booth says, "The biggest issue with databases on a SAN is maintaining consistent throughput and access on a database while running other applications sharing the same SAN."

Caribou Lake's Leo agrees, saying, "SANs often save an enterprise money by sharing I/O capacity across departments. But sharing I/O resources between the user files of one department and the DBMS storage for another is awful. The database management system's I/O characteristics are completely different than the user data files, and cache coherency becomes an issue."

Royal Bank of Canada's Durnford has opted for separate storage networks to solve some of these problems. He says, "For tape, we're looking at a separate tape SAN. The issue there is database traffic is very time-sensitive, where tape traffic is very backup-oriented. Though you might think it's time shifted, we've found there's overlap."

The other issue most DBAs mention is the complexity of managing storage networks, which are typically managed by separate storage infrastructure staff. "A lot of progress has been made in the last couple years, but most system administrators want SANs that are easier to administrate. I want a more reliable SAN whose caching I'm not afraid to turn on," Eisenschmidt says.

Leo says that when you put a database and storage together, "You realize that your SAN management team must work closely with the DBMS team. We have seen situations where SAN managers have decided to change the SAN configuration because it seems wasteful or odd, not realizing how carefully tuned the setup was."

CreekPath's Booth also agrees, saying that while careful tuning of how storage is configured can eliminate sharing issues, "Both DBAs and storage administrators need to be on the same page."

Despite the issues, some users have had great success running SANs with their databases. John Brooks, vice president of storage engineering at Cleveland, OH-based National City Bank, runs an 18TB SAN based on Brocade switches with IBM and Hitachi arrays. "We have no issues with running on a SAN. Not even performance issues. It just seems to run," he says. Asked if he does anything special for databases, he says, "For SQL and Oracle databases, we throw it on the SAN." Brooks attributes the lack of issues to the relationship he has with his storage vendors. He says, "We do a lot of research, primarily in the interoperability area; that might account for the lack of issues. We work very closely with our storage vendors to ensure certification and supportability of the environment, in case issues do arise."

Tools and techniques
Because of SANs' new role as the primary storage for databases, there's now a need for improved tools and software to manage the combination of database applications and storage. "It would be nice if vendors could provide some sort of management tool to track why a SAN partition was configured in a certain way, thus preventing the loss of historically significant details when SAN management changes hands," says DBA Leo.

"If I could have anything from a SAN to make it better for a database, I'd ask the vendors to write better drivers. Most of the drivers I've seen and worked with just tell the OS the drive is gone when the SAN disconnects, instead of perhaps telling it to wait or trying to free the process," Eisenshmidt says.

Some vendors seem to be listening and are increasingly focusing their efforts on getting their database to run better on SANs, and their storage and infrastructure to better support databases. Take, for example, Oracle's Hardware-Assisted Resilient Data (HARD) initiative (see "Oracle's storage push"). Oracle's Hoke explains, "We're working with hardware partners to verify that when Oracle data is transferred to a disk it can be read. This sounds simple, but with storage management and storage virtualization in storage networks, there are lots of cooks in the broth."

As part of Oracle's HARD initiative, EMC's Oracle Double Checksum tool verifies Oracle data in array microcode as it is written, ensuring that corrupted data is never written to disk. Competitor Hitachi Data Systems has also announced support for the initiative, with a microcode-based verification of Oracle data in its Lightning 9900V product.
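The verify-before-write idea behind HARD is easy to illustrate, though this sketch has nothing to do with EMC's or Hitachi's actual microcode: the database layer seals each block with a checksum, and the storage layer refuses any block whose checksum no longer matches, so corruption introduced in transit never reaches disk.

```python
import zlib

def seal(block: bytes) -> bytes:
    """Database side: append a CRC32 so downstream layers can verify
    the block end to end."""
    return block + zlib.crc32(block).to_bytes(4, "big")

def verify_and_strip(sealed: bytes) -> bytes:
    """Storage side: recompute the CRC32 and refuse to 'write' any
    block that was corrupted between the database and the array,
    which is the idea behind end-to-end schemes like HARD."""
    block, crc = sealed[:-4], int.from_bytes(sealed[-4:], "big")
    if zlib.crc32(block) != crc:
        raise ValueError("checksum mismatch: refusing to write corrupt block")
    return block
```

Doing this in array microcode rather than host software is what lets the check survive every intermediate layer of volume management and virtualization.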

EMC also offers its Symmetrix Optimizer, which makes recommendations to administrators on where to rebalance data-a must for administrators looking for hot spindles. EMC has a close partnership with Oracle, including a dedicated engineering team at Oracle Redwood Shores. Chuck Hollis, EMC's vice president of products and markets says, "We have 12 to 15 engineers inside Oracle, some dedicated to interoperability, and the rest to feature exploitation such as our Double Checksum product and integration between Oracle and our replication products."

Veritas also offers a host of products designed to work with databases and SANs. The company's Database Edition products for Oracle9i, IBM DB2 and Microsoft Exchange include mirroring, backup, replication and clustering features designed for those databases. According to Veritas' Ward, an upcoming release of the company's storage resource management product "will map data all the way from a database environment in Oracle or Exchange, through files, all the way down to disks to show you which spindles and extents the data is written to."

BMC offers its PATROL for Storage Management product. BMC's Rogers says, "When you're working with Oracle or Sybase or Microsoft SQL server and even Exchange, we can expose that from a database perspective. For example; we can expose an Oracle instance that is on a Solaris box and attached to a SAN, and see where it is spread about. We can expose that through a topology manager-all of the connections, single points of failure, and the way the database is laid out on disk."

Despite the complexities of managing databases running on storage networks, the benefits are clear. For large databases, there's really no better option than a SAN, because of the level of performance, consistency, redundancy and scalability that a storage network provides. Storage and database administrators alike are finding that the only way to support their mission-critical applications is to deploy their databases on a storage network. With so much at stake, storage and database vendors will continue to work in tandem to make their products work better together (see "Vendor independence: Are the trade-offs worth it?").

This was first published in November 2002
