Tuning storage for database apps

How storage can be the problem

Storage can slow down a database application in two ways. First, the storage can be overwhelmed by the number of inbound requests. These requests are stacked into a queue, the length of which is referred to as “queue depth,” and emptied on a first-in, first-out basis. When performance is good, the queue depth is typically in the low single digits. As the queue depth grows, the application has to wait for its requests to be processed. This statistic is relatively easy to monitor, as most operating systems will report the queue depth of a given drive or volume. Any operation that involves hundreds or thousands of simultaneous requests will typically add to the disk queue depth.
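Exactly how queue depth is read varies by operating system: Windows exposes it through Performance Monitor's disk queue length counters, while Linux publishes per-device counters under sysfs. The short Python sketch below samples the in-flight request count for a device once per second; the device name "sda" and the sysfs path are illustrative assumptions, not part of the original article.

    # Sample the in-flight request count (queue depth) for a Linux block device.
    # The device name is an assumption; adjust it to match your system.
    import time

    DEVICE = "sda"  # placeholder device name

    def inflight_requests(device: str) -> int:
        # /sys/block/<dev>/inflight holds two counters: reads and writes in flight
        with open(f"/sys/block/{device}/inflight") as f:
            reads, writes = (int(x) for x in f.read().split())
        return reads + writes

    if __name__ == "__main__":
        for _ in range(10):
            print(f"{DEVICE} queue depth: {inflight_requests(DEVICE)}")
            time.sleep(1)

Sustained readings above the low single digits during normal operation are the signal that the storage is falling behind the request rate.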

Storage can also limit database application performance based on its ability to service individual requests. In a traditional hard disk drive array, this “organic” performance is essentially the speed at which a hard drive can locate and retrieve the required data blocks, since the majority of that time is spent waiting for the heads to seek and the platters to rotate to a specific spot. This response time can also be measured with operating system tools. While the number of users may have some impact on response time, it’s most often an issue for single-threaded operations, such as when new data is added or existing data is modified via the log file.
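Average response time can be derived from the same operating system counters. The sketch below, again assuming a Linux host and a placeholder device named "sda", takes two snapshots of /proc/diskstats and divides the milliseconds spent on I/O by the number of requests completed in between, which is roughly what iostat reports as await.

    # Estimate average I/O response time (ms per completed request) on Linux.
    # "sda" is a placeholder device name.
    import time

    DEVICE = "sda"

    def snapshot(device: str):
        # /proc/diskstats fields per device: reads completed (4th), ms reading (7th),
        # writes completed (8th), ms writing (11th)
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    ios = int(fields[3]) + int(fields[7])
                    ms = int(fields[6]) + int(fields[10])
                    return ios, ms
        raise ValueError(f"device {device} not found")

    if __name__ == "__main__":
        ios1, ms1 = snapshot(DEVICE)
        time.sleep(5)
        ios2, ms2 = snapshot(DEVICE)
        completed = ios2 - ios1
        avg_ms = (ms2 - ms1) / completed if completed else 0.0
        print(f"{DEVICE}: {completed} I/Os, average response time {avg_ms:.2f} ms")

On a healthy hard disk array this figure typically sits between a few milliseconds and the low tens of milliseconds; sustained numbers well beyond that point to the drives themselves as the bottleneck.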

Storage acceleration tune-up

Dealing with indexes. When it comes to deciding which of the three components of the database to pay attention to, indexes are typically the most problematic across a wide variety of use cases. In short, almost all databases are searched, and in most cases indexes are used to do it. Generally speaking, the more indexes a database has, the slower the update process becomes. Making those indexes update faster and return results faster will almost always improve performance. Indexes are affected both by the number of simultaneous search requests and by the time it takes to service those requests. They are mostly read-intensive files, but they do experience significant write activity as indexed values are added or updated.
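Which indexes deserve that attention can usually be read out of the database's own statistics. As a hedged illustration against PostgreSQL (the catalog view is standard, but the connection string is a placeholder), the Python snippet below lists each user index's size and how often it has been scanned:

    # List index sizes and scan counts from PostgreSQL's statistics views.
    # The connection string is a placeholder; pg_stat_user_indexes is a standard view.
    import psycopg2

    with psycopg2.connect("dbname=appdb user=dba") as conn:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT relname, indexrelname,
                       pg_size_pretty(pg_relation_size(indexrelid)) AS size,
                       idx_scan
                FROM pg_stat_user_indexes
                ORDER BY idx_scan DESC
            """)
            for table, index, size, scans in cur.fetchall():
                print(f"{table}.{index}: {size}, {scans} scans")

Indexes that are scanned constantly are the ones worth relocating to faster storage; indexes that are never scanned are pure update overhead.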

The correct approach to solving an index performance issue largely depends on the environment. A single database server with data being stored locally can see performance improved by installing a mirrored set of drives on that server, dedicated to storing the index files. Even a single server that’s storing its data on a storage-area network (SAN) can benefit from this approach, especially if the storage network is slow (1 Gb Ethernet iSCSI or 2 Gb Fibre Channel, for example).
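How the index files are actually redirected to those dedicated drives depends on the database engine. As one hedged example, PostgreSQL can do it with a tablespace that points at the mount backed by the mirrored pair; the sketch below assumes a mount point of /mnt/index_mirror and an index named orders_customer_idx, both hypothetical. SQL Server can accomplish the same thing by rebuilding indexes onto a filegroup whose files live on the dedicated drives.

    # Illustration only: relocate a PostgreSQL index onto a dedicated mirrored volume.
    # The connection string, mount point and index name are all hypothetical.
    import psycopg2

    DSN = "dbname=appdb user=dba"      # placeholder connection string
    MOUNT = "/mnt/index_mirror"        # mirrored drive pair mounted here

    conn = psycopg2.connect(DSN)
    conn.autocommit = True             # CREATE TABLESPACE cannot run inside a transaction
    with conn.cursor() as cur:
        # Create a tablespace on the dedicated drives (run once, as a superuser).
        cur.execute(f"CREATE TABLESPACE index_fast LOCATION '{MOUNT}'")
        # Move an existing index onto the new tablespace.
        cur.execute("ALTER INDEX orders_customer_idx SET TABLESPACE index_fast")
    conn.close()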

If the database is shared across multiple application servers, keeping indexes on local disk may not be an option. Again, it depends on the database, but usually an improvement to the shared storage environment is needed. This could be as simple as dedicating a set of drives on the SAN to the specific task of storing indexes, once again mirroring them rather than including them in a larger, parity-based RAID group. It could even mean short stroking those drives so that only the outer cylinders of each platter are used. Indexes are relatively small and typically don’t need the full capacity of a drive.

In all of the above cases, flash-based solid-state drives (SSDs) offer an alternative way to solve the index performance problem. Because index files tend to be small, an investment in a couple of enterprise-class SSDs can deliver a very affordable performance boost for a database server using directly attached storage. When shared storage is used, a few SSDs installed inside the storage system to hold the indexes may alleviate the performance problem, although at a somewhat higher cost. It’s important to note that in neither case is a complete storage refresh required.

Dealing with log files. Log files are similar to indexes except that they typically handle write requests exclusively, so configurations similar to the ones described above should also improve their performance. There is, however, an additional consideration when flash SSD is used. The write intensity of log files can be very high, and flash-based SSDs are limited in the number of writes they can sustain over their lifetimes. While that write-cycle number has improved over the last few years, log files are the one type of workload that can still prematurely wear out a flash-based SSD.
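Whether a given log workload will actually exhaust an SSD can be estimated with simple arithmetic from the drive's rated write endurance and the sustained log write rate. The figures below (a drive rated for 3,600 TB written, 500 GB of log writes per day, and a write amplification factor of 2) are purely illustrative assumptions.

    # Rough SSD endurance estimate for a write-heavy log volume.
    # All numbers are illustrative assumptions, not vendor specifications.
    rated_tbw = 3600               # drive endurance rating, terabytes written (assumed)
    log_writes_gb_per_day = 500    # sustained log write volume (assumed)
    write_amplification = 2.0      # flash writes per host write (assumed)

    flash_writes_tb_per_day = log_writes_gb_per_day * write_amplification / 1024
    lifetime_days = rated_tbw / flash_writes_tb_per_day
    print(f"Estimated drive life: {lifetime_days / 365:.1f} years")

If the estimate comes out shorter than the planned service life of the array, the higher endurance options described below are worth the premium.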

In log file environments where the write intensity will be very high, it’s advisable to use a higher endurance, enterprise-grade multi-level cell (MLC) drive technology, to use single-level cell (SLC)-based SSDs, or even to consider a DRAM-based SSD. DRAM systems now provide very reliable recovery from power failure, offer significantly better write performance than even SLC flash and don’t suffer the endurance issues that flash-based technologies do. But DRAM is considerably more expensive than NAND flash storage.

This was first published in July 2012
