Feature

Data storage performance tips: Best ways to improve your data center

When it comes to optimizing data storage performance, each year seems to spawn a new breed of tips, techniques or features that claim to be the ultimate answer.

Analysts and TechTarget contributors seemed to agree that several approaches made a positive impact on the performance of data center storage in 2013. Practices such as storage tiering aren't new, but they're being offered by an increasing number of vendors. Many vendors are also taking users' struggle with performance into their own hands by building features into their products that take some of the burden off the storage system.

Make use of built-in features

One vendor carving its way to better performance is Microsoft. Many of the features released in Windows Server 2012 R2 relate to storage, and according to Microsoft expert Brien Posey, knowing how to use those features can work wonders for efficiency in the data center.

Storage Quality of Service (QoS) is one of these lesser-known performance boosters. In a tip outlining the Storage QoS feature, Posey explained how administrators can now put a cap on IOPS so that the more I/O-intensive virtual machines (VMs) in an environment don't impact the performance of others. VMs were also upgraded in Windows Server 2012 R2; according to Posey, the new Generation 2 VMs use native SCSI commands instead of emulated controllers, which in turn boosts data storage performance.
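The idea behind an IOPS cap can be illustrated with a simple token-bucket rate limiter. The sketch below is a conceptual model only -- the class and method names are hypothetical and this is not Microsoft's Storage QoS API -- but it shows how a per-VM cap lets a noisy neighbor burst briefly and then throttles it to its configured rate.

```python
import time

class IopsCap:
    """Token-bucket limiter illustrating an IOPS cap (hypothetical names;
    a conceptual sketch, not Microsoft's Storage QoS implementation)."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)  # allow an initial one-second burst
        self.last = time.monotonic()

    def try_io(self):
        """Return True if an I/O may proceed now, False if throttled."""
        now = time.monotonic()
        # Refill tokens at max_iops per second, capped at one second's worth.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A VM capped at 500 IOPS burns through its burst allowance, then is throttled:
cap = IopsCap(max_iops=500)
allowed = sum(1 for _ in range(1000) if cap.try_io())
```

Of the 1,000 back-to-back I/O attempts, only roughly the first 500 succeed; the rest are deferred until tokens refill, which is the throttling behavior an IOPS cap provides.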

Place data on the appropriate storage

Auto tiering -- another feature introduced in Windows Server 2012 R2 -- received a lot of attention over the past year as an essential practice for reducing latency. Though the technique isn't new, 2013 brought tiering in a couple of different forms: auto-tiering software similar to the Windows Server 2012 R2 feature, and multi-tiered storage arrays containing a mix of high- and low-speed, high- and low-capacity shelves. Auto tiering is gaining importance as more people use flash in their enterprise storage arrays.

Jon Toigo, CEO and managing principal of Toigo Partners International and chairman of the Data Management Institute, authored several tips for SearchStorage on the tiered storage model this year. He pointed out that the meaning of tiering -- storing the most critical data on the highest-performing systems -- is evolving to something that resembles cache.

According to Toigo, vendors sometimes use the term tiering to refer to the use of flash to temporarily store "hot data" -- data that is written to disk and receives multiple access requests in a short period of time. "With this hybrid technology for augmenting disk performance with memory, it is possible to obtain industry-leading read-write performance without deploying an excessive number of disk drives striped together for parallel access," he said.
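The "hot data" behavior Toigo describes can be sketched as a simple promotion policy: blocks that receive repeated access requests within a short period are copied into a small flash tier, while cold data stays on disk. The policy below is an assumed, minimal one -- no vendor's actual heat-map algorithm -- but it captures the copy-and-serve-from-flash idea.

```python
from collections import Counter

class HotDataTier:
    """Sketch of 'hot data' promotion (an assumed policy, not any vendor's
    algorithm): blocks read at least `threshold` times are copied into a
    limited set of flash slots and served from there afterward."""

    def __init__(self, flash_slots, threshold):
        self.flash_slots = flash_slots
        self.threshold = threshold
        self.reads = Counter()   # access counts per block
        self.flash = set()       # blocks currently copied to flash

    def read(self, block):
        self.reads[block] += 1
        if block in self.flash:
            return "flash"       # hot copy served from SSD
        if self.reads[block] >= self.threshold and len(self.flash) < self.flash_slots:
            self.flash.add(block)  # promote: copy the hot block to flash
        return "disk"

tier = HotDataTier(flash_slots=2, threshold=3)
for _ in range(3):
    tier.read("block-A")  # third read reaches the threshold and promotes it
# Subsequent reads of block-A come from flash; a cold block still hits disk.
```

Note the copy semantics: the block still exists on disk, which is what distinguishes this cache-like use of flash from classic tiering, where data is moved between tiers.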

Boost performance with hybrid storage and flash cache

A hybrid storage model mixing flash and spinning disk can be a good way to get the high-performance benefit of solid-state drives (SSDs) without the cost of an all-flash environment.

"The appeal of hybrid storage -- [described] as a cobble of flash-SSD read caching, HDDs [hard disk drives] and smart 'hot data' copy-and-caching technology -- is that you can obtain the same IOPS as arrays with thousands of drives from a kit using only a fraction of the number of HDDs," Toigo said in another SearchStorage piece that outlines best practices for using hybrid storage.
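Toigo's "fraction of the number of HDDs" claim follows from simple arithmetic, sketched below with assumed round figures (a 15K rpm HDD at ~180 IOPS, an 80% flash cache hit rate, a 100,000 IOPS target -- illustrative numbers, not measurements): if flash absorbs the cache hits, spinning disks only have to serve the misses.

```python
# Back-of-envelope illustration of the hybrid-storage IOPS math.
# All figures are assumptions for illustration, not benchmark results.
HDD_IOPS = 180        # rough throughput of one 15K rpm drive
HIT_RATE = 0.80       # fraction of reads served from the flash cache
TARGET_IOPS = 100_000

# All-HDD array: every I/O lands on spinning disk.
hdds_alone = TARGET_IOPS / HDD_IOPS          # ~556 drives

# Hybrid: flash absorbs the hits, HDDs serve only the misses.
miss_iops = TARGET_IOPS * (1 - HIT_RATE)     # 20,000 IOPS left for disk
hdds_hybrid = miss_iops / HDD_IOPS           # ~111 drives
```

Under these assumptions the hybrid configuration needs roughly a fifth of the spindles, which is the economic argument for mixing flash with disk rather than striping hundreds of drives for parallel access.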

Flash hybrids, according to Toigo, can intelligently determine which written data receives a high number of read requests, and will copy that data to SSDs so it can be accessed more quickly. However, to use hybrid storage effectively, administrators need to be aware of which types of data would benefit from storage with higher IOPS. SSDs will help when frequently used transactional applications are involved, but a hybrid system won't necessarily solve the storage performance problems created by the hypervisor in a virtual server environment.

Storage virtualization technology

The hype surrounding software-defined storage shows there remains a need for methods of virtualizing and pooling storage to improve performance. Technology that aggregates storage capacity, such as DataCore Software Corp.'s SANsymphony-V or EMC Corp.'s ViPR, spreads the hardware's value-added features across all the storage. This allows pools to be created that deliver similar high-performance results regardless of the underlying hardware.

According to Colm Keegan, an analyst at Texas-based Storage Switzerland, because storage hypervisors such as VMware Inc.'s Virsto or DataCore's SANsymphony-V are deployed across multiple hosts, they aid in reducing I/O consumption by providing features often found in high-availability storage, such as load balancing. Keegan also pointed out that these products allow low-cost commodity hardware to be pooled to achieve the same rapid provisioning as more costly, highly available systems.
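The pooling-and-provisioning idea can be sketched in a few lines. The design below is hypothetical -- it is not how SANsymphony-V, Virsto or ViPR actually place data -- but it shows the two properties described above: heterogeneous devices are presented as one pool of capacity, and new volumes are placed on the least-used backend, a simple form of load balancing.

```python
class StoragePool:
    """Hypothetical sketch of a storage-virtualization pool: capacity from
    commodity backends is aggregated, and volumes are provisioned onto
    whichever backend currently has the most free space."""

    def __init__(self, backends):
        # backends: {name: free capacity in GB} for each pooled device
        self.free = dict(backends)
        self.placements = {}

    @property
    def total_free_gb(self):
        # The pool presents one aggregate capacity number to consumers.
        return sum(self.free.values())

    def provision(self, volume, size_gb):
        # Least-loaded placement: pick the backend with the most free space.
        name = max(self.free, key=self.free.get)
        if self.free[name] < size_gb:
            raise RuntimeError("no single backend can hold this volume")
        self.free[name] -= size_gb
        self.placements[volume] = name
        return name

pool = StoragePool({"jbod-1": 500, "jbod-2": 800, "nas-1": 300})
pool.provision("vm-data", 400)  # lands on jbod-2, the emptiest backend
```

Because callers only see the pool, a volume request succeeds the same way whether the backend is a high-end array or a commodity JBOD shelf -- the rapid-provisioning benefit Keegan describes.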


This was first published in December 2013
