Storage is usually associated with words such as overcapacity, over-allocation and unmanageability, and far more colorful phrases are routinely uttered behind closed doors. But help is on the way: A new breed of storage resource management (SRM) tools has emerged to help organizations manage their storage more efficiently. The glut of SRM products is a double-edged sword, though: Picking the one that's best for your environment isn't easy, and choosing the wrong product can cost your company dearly.
SRM design approaches can be broken down into two product generations and six broad categories. The first generation consists of the widely available device-centric and software-centric approaches, which reflect the vendor-specific nature of today's SRM tools. The next generation, comprising the network-centric, API-centric, standards-centric and application-centric approaches, reflects an emerging openness and the growing ability of SRM tools to work with other vendors' storage products.
The device-centric approach starts with the premise that the SRM tool will communicate with the interface on an external storage array. The SRM vendors adopting this approach are generally the same ones that already sell external storage arrays, such as EMC, Hitachi Data Systems (HDS) and IBM. These vendors' SRM agents communicate with their own storage arrays, providing higher levels of storage management functionality, ranging from basic storage reporting and visualization of the storage enterprise to more advanced functions such as asynchronous and synchronous volume mirrors, volume snapshots, and performance monitoring and tuning.
Pros. While these options and functionality may also be available from the software-centric class of vendors, the device-centric approach differentiates itself in one important way: It offloads the processing and management of these transactions from the individual servers to the storage arrays. This works best for some environments for two reasons. First, if software licensing is tied to server CPU speed, offloading prevents the SRM software from chewing up server CPU cycles. Second, the functionality may be achieved without the expense, management and ongoing maintenance of deploying software agents on all of the servers.
Cons. Gaining this functionality almost always requires deploying one SRM vendor's storage arrays to use its SRM tools. Standardizing on one vendor's hardware platform essentially locks your company into that vendor's product line to continue receiving the desired functionality.
To address this, most of the vendors in the device-centric camp are seeking to extend some - if not all - of these high-end features to their second-tier storage arrays. EMC is putting some of its high-end Symmetrix software features onto its Clariion arrays, and HDS is doing much the same between its 9960 and 9200 storage arrays. While these efforts may help lower the cost of deploying high-end features such as mirroring and snapshots, they still don't offer the management of diverse hardware storage arrays that some organizations need.
In the software-centric approach, the software manages the storage regardless of which vendor provides the storage. The larger, more established vendors that have taken this approach include BMC Software, Computer Associates (CA) and Veritas, with most startups also falling into this category.
Under this model, software agents are placed on the individual servers, where they monitor the local storage environment and report back to a central storage management server. Ultimately, the goal of this model is to have each agent manage its server's storage resources based upon policies established and distributed by the central server.
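The agent/central-server model described above can be sketched in a few lines of code. This is a minimal illustration only; the class and function names (`StoragePolicy`, `AgentReport`, `evaluate`) are hypothetical and don't come from any actual SRM product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoragePolicy:
    """A policy the central server establishes and distributes to agents."""
    max_utilization: float   # e.g. 0.85 means act when a server is 85% full
    action: str              # what to do when the threshold is breached

@dataclass
class AgentReport:
    """What an agent reports back about the server on which it resides."""
    host: str
    capacity_gb: float
    used_gb: float

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def evaluate(policy: StoragePolicy, report: AgentReport) -> Optional[str]:
    """Central server: compare an agent's report against policy."""
    if report.utilization > policy.max_utilization:
        return f"{policy.action}: {report.host} at {report.utilization:.0%}"
    return None   # within policy; passive tools stop at reporting

policy = StoragePolicy(max_utilization=0.85, action="provision more storage")
print(evaluate(policy, AgentReport("db01", capacity_gb=500, used_gb=460)))
print(evaluate(policy, AgentReport("web01", capacity_gb=500, used_gb=200)))
```

The difference between passive and active SRM tools, discussed later in this article, comes down to whether anything acts on the string `evaluate` returns.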
|Wheeling and dealing: get the best SRM price|
With so many storage resource management (SRM) products on the market and mergers and acquisitions constantly occurring, customers now have a strong negotiating advantage when purchasing SRM products. Frankly, anyone paying list prices for any of these products is paying too much.
Pitting one vendor against another is an age-old tactic to get a better price, but remember that a better deal from a competitor doesn't necessarily translate as a better solution, or the right product for your environment. It's essential to have a clear understanding of what your storage environment looks like and what you want it to look like before making a purchasing decision.
Making a careless or poorly researched SRM purchase could be costly. All the vendors offering these solutions are making their SRM tools highly proprietary and difficult to migrate off of. In addition, the SRM market is in flux: There are far too many products, and at this point there are no clear winners - small startups are trying to survive while larger, better-known companies are trying to improve market share. The bottom line? It's a buyer's market.
Not surprisingly, each of these vendors' software products reflects this software-centric design philosophy. CA's BrightStor, BMC's Patrol, and Veritas' SANPoint Control as well as many of the smaller SRM newcomers each use a central server to gather and report information from the software agents they deploy on the individual servers.
Pros. The upside to this approach is that a significant amount of storage information is collected, such as storage allocation, utilization and even performance metrics. This information can be invaluable when determining which tier of storage to place the underlying data on. This approach also isn't dependent upon the underlying hardware to gain many of the desired benefits.
Cons. SRM tools are rapidly evolving from passive to active. Currently, some SRM tools are passive - they only monitor and report on the storage for the servers on which they reside. But the next generation of SRM tools moves to actively managing the entire storage infrastructure based upon policies set in the SRM tool. This next step in SRM puts some of these vendors at a disadvantage compared to the device-centric model, especially in a SAN environment.
Without the ability to communicate with the underlying storage array, one wonders how existing manual tasks, such as LUN security on the storage arrays and switch zoning on the different vendors' switches, would be automated when moving storage between servers, especially if the SRM tool isn't currently communicating with these devices.
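To make the automation gap concrete, here is a sketch of the two manual steps an active SRM tool would have to perform when moving storage to a new server: masking the LUN on the array and adjusting zoning on the fabric switch. The `Array` and `Switch` classes are stand-ins written for illustration; real devices expose proprietary CLIs or APIs, which is precisely the problem described above.

```python
# Stand-in device classes; real arrays and switches use proprietary interfaces.
class Array:
    def __init__(self):
        self.lun_masks = {}          # lun_id -> set of host WWNs allowed access

    def mask_lun(self, lun_id, host_wwn):
        """LUN security: grant a host's WWN access to a LUN."""
        self.lun_masks.setdefault(lun_id, set()).add(host_wwn)

class Switch:
    def __init__(self):
        self.zones = {}              # zone name -> set of member WWNs

    def add_zone(self, name, wwns):
        """Switch zoning: let the host and array ports see each other."""
        self.zones[name] = set(wwns)

def move_storage(array, switch, lun_id, host_wwn, array_wwn):
    """Automate what an administrator would otherwise do by hand."""
    array.mask_lun(lun_id, host_wwn)
    switch.add_zone(f"z_{lun_id}_{host_wwn}", [host_wwn, array_wwn])

array, switch = Array(), Switch()
move_storage(array, switch, "lun7",
             "10:00:00:00:c9:aa:bb:cc",   # hypothetical host HBA WWN
             "50:06:01:60:99:11:22:33")   # hypothetical array port WWN
```

An SRM tool that can't speak to the array and switch can't execute either call, which is why the software-centric model struggles with active management in a SAN.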
A number of software and hardware vendors have asked this question as well. They appear to be taking steps in a number of areas, resulting in changes in their next generation of products. These product design changes blend together the aforementioned device- and software-centric views.
This blending of views reflects a crossroads in the life cycle of SRM products. At least that's what Charles Witt, a systems engineer with ProvisionSoft, Andover, MA, believes. Witt sees customers as unsatisfied with merely monitoring and reporting on storage while being unable to translate that information into action. As vendors make this transition in product design, they are emphasizing one of three general design paths to reach their ultimate SRM objective.
The network-centric approach is the first of the three paths, and it resembles the device-centric philosophy. One larger company that appears to have chosen this trajectory is Fujitsu Softek, based upon its recent acquisition of DataCore Software's virtualization source code. The network-centric approach differs from the device-centric approach in that the SRM agents interact with a new software layer created in the storage network. This software layer in turn manages all of the underlying storage arrays regardless of the storage vendor.
Pros. This approach prevents vendor lock-in and creates a common software layer between the servers and the storage arrays that the SRM tool communicates with. Fujitsu Softek believes this new network-based software layer begins to create a storage environment similar to what already exists in the mainframe world. In the mainframe environment, storage exists in two distinct classes: System Managed Storage (SMS) and non-SMS volumes.
System-Managed Storage (SMS) in the mainframe world allows the mainframe operating system to take over and automate management tasks that were previously performed manually. SRM seeks the same in the open-systems environment: SRM server agents interact with this network-based software layer to create an SMS-like environment for every open-systems OS platform wherever an SRM agent resides.
|The "just buy more storage" option|
An alternative approach to managing your storage is to buy more storage as you need it and not install an SRM tool. Of course, this approach flies in the face of what SRM tools were designed to do: prevent storage overallocation and overcapacity through proactive management. Yet in some shops, storage overallocation and overcapacity may still be the easiest and most cost-effective approach to storage resource management.
In at least two circumstances, buying more storage makes sense. For smaller companies with rapidly growing storage needs, it may be cheaper to overallocate and overbuy than to hire people to manage what they have. Further supporting the argument to buy more hardware: SRM tools aren't cheap and pricing methods vary. List prices start at $200 per server for the agents, with additional costs for the SRM management software and the physical server. Other vendors have pricing models based on anything from the amount of storage under management to the complexity of your storage environment. Costs will likely start at $10,000 and may exceed $50,000, assuming a deployment on 40 servers. Installation, management and ongoing software maintenance tack on further expenses.
While SRM tools are being deployed to contain storage costs, given the rate at which the cost of high-end, mid-tier and low-end storage is dropping, throwing more storage at the problem isn't an entirely illogical approach. Again, this tactic depends on each customer's environment and shouldn't be thought of as the recommended solution. But with the scarcity and relative immaturity of SRM tools, their rapidly changing nature, the lack of personnel with storage management expertise and the continuing drop in storage prices, one shouldn't discount the "just buy more storage" option.
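The pricing figures above can be turned into a rough back-of-the-envelope model. The $200-per-agent price and the 40-server deployment come from this article; the management-server license figure is an assumption for illustration only, chosen so the total lands inside the article's $10,000-to-$50,000 range.

```python
# Rough SRM deployment cost model. AGENT_PRICE and SERVERS come from the
# article's figures; MGMT_LICENSE is an assumed value, not a quoted price.
AGENT_PRICE = 200        # list price per server agent ($)
SERVERS = 40             # deployment size assumed in the article
MGMT_LICENSE = 5_000     # assumed management-software license ($)

agent_cost = AGENT_PRICE * SERVERS      # agent licenses alone
total = agent_cost + MGMT_LICENSE       # before installation and maintenance
print(agent_cost, total)
```

Even at the low end, the total sits above the cost of a meaningful amount of raw disk, which is the heart of the "just buy more storage" argument.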
Non-SMS volumes in the mainframe environment are essentially storage volumes on which limited storage management is performed. This network-based software layer benefits the open-systems environment since it gives a central point to introduce a basic level of storage management without respect to the server operating system or the requirement of deploying agents on all of the servers.
Not surprisingly, other vendors such as Veritas and IBM are looking at this approach, simply because it makes sense for the enterprise. The most active proponents of this model are the same individuals who helped design it and make it work in the mainframe environment. Take Nick Tabellion, CTO and VP of engineering at Fujitsu Softek: He was one of the original designers and developers of SMS for IBM in the mainframe world, and was brought in by Fujitsu Softek to develop a similar concept for open systems.
Cons. A couple of holes exist in the network-centric approach. First, while a new software layer in the network should in theory simplify the entire SAN design and architecture in a number of important ways, it doesn't deliver satisfactory end-to-end performance management in high-end environments.
While one may not initially view performance management as part of an SRM tool, in high-end environments storage and performance management are inextricable: You can't have one without the other. The network-centric approach won't natively offer the end-to-end performance management that's a byproduct of the next two approaches.
The other hole in the network-centric approach is that it's essential to have a common network-based software layer. Without such a layer, vendors taking this angle appear unlikely to provide the full functionality their customers may want. So if your company doesn't plan to deploy this software layer, you should look beyond these vendors' SRM tools to the next two categories.
The API-centric philosophy probably reflects the most widely available next-generation approach on the market today. In this design, the SRM tool communicates with the APIs available on the interfaces of each vendor's storage arrays. In so doing, this method removes much of the proprietary nature of managing vendors' storage arrays today.
Pros. This avenue makes sense in environments with controlled growth, where change isn't happening rapidly. It also offers more in-depth reporting and management tools than the other two next-generation approaches. Approach API-centric solutions with caution, however, in environments that experience rapid or unexpected change or aren't well-documented, or if you don't need the depth of management and reporting this solution offers.
Two vendors, CreekPath and EMC, have already publicly announced plans to deploy their technology using this approach. CreekPath has sought to achieve this by purchasing the APIs for each of the hardware vendors and then coding to the APIs provided to them. EMC, on the other hand, is offering API swaps, where it will freely give away the APIs to its storage arrays if the other vendor gives away its APIs in return. So far, only HP has taken EMC up on that offer. For the majority of vendors not sharing their APIs, EMC is reverse engineering the other vendors' storage arrays to get the code it needs.
Cons. While on the surface, the API-centric approach may sound appealing, writing to each other's APIs is a costly and time-consuming process for vendors. One has to wonder how successful this product approach will be as storage array APIs change and storage arrays come and go. Also, with so many storage vendors in the market and CreekPath and EMC currently only having support from some of the big players (IBM, HDS, HP, and NetApp), users possessing older or newer generation storage arrays may be limited in the scale to which they can deploy this solution.
The standards-centric approach has started to generate interest and support from big and small players. In this approach, vendors code their products to a common set of standards that are present not just on storage arrays, but throughout the entire storage network that would be used to gather, analyze and manage data.
Pros. Standards provide a common interface that shortens software product development life cycles. This common set of rules is exactly what small startups such as AppIQ are now betting on, and what larger companies like EMC are using to hedge their bets. For companies like AppIQ, standards allow them to bring their products to market quickly, since they no longer need to worry about the proprietary APIs of each vendor's storage array. Plus, with this standards-based approach, they will be able to mine data out of any standards-based device irrespective of vendor and give end users far more information on a single screen at a price point most companies can afford.
Cons. First, it's unlikely a standards-based approach will give end users all the information they want in every situation. Some users will need functionality that comes only from the code of a specific vendor's equipment. And neither this approach nor the similar API-centric approach eliminates complexity the way the network-centric approach does: It still requires highly trained individuals to interpret and understand the data being managed and reported on.
However, there's no disputing this approach merits close attention. Bruce Nash of Fujitsu Softek believes standards-centric solutions may offer 70% to 80% of the storage information organizations want or need. While that may not be everything end users want, it's probably 70% to 80% more information than they have now.
What's next? All indications are that the existing design paths discussed above will merge into a powerful new SRM suite with an application-centric approach that combines the best of each category. In many cases, it's already happening. Today's SRM tools fit into various niches, but each product is starting to incorporate some elements common to the other categories (see "SRM-only vendors").
Make no mistake: It's the customers, not the vendors, who are driving this charge toward an application-centric design. Scott Shimomura, product design manager at Fujitsu Softek, and Scott Hansbury at CreekPath Software say their customers want end-to-end storage management and reporting. This end-to-end approach starts at the application level and goes through the storage network down to the spindle level on the individual storage arrays.
As SRM products evolve, smaller vendors whose first-generation SRM products are typified by passive storage management will be challenged to make their products increasingly active. Even large vendors will struggle to adapt to the more flexible model that will surely emerge. Vendors such as Veritas and BMC will need to build more hardware capability into their legacy software-centric infrastructures, while the EMCs and IBMs of the world will need to introduce more open software designs into their device-centric approaches.
To help make this transition, the big boys will likely try to acquire smaller companies with technology they want. Sun recently did this by acquiring Pirus Networks, which can provide the network-centric portion Sun lacked in its product suite. Veritas snatched up the Storage Reporter product from NTP Software, presumably to allow its SANPoint Control product to work on servers lacking Veritas' legacy software. IBM, meanwhile, offers DataCore's SANsymphony software and recently purchased TrelliSoft. Both are said to be interim solutions that IBM will offer until the full suite of products under its StorageTank initiative is ready to ship.
So as these SRM tools develop, choosing the best product for your environment will be difficult because of their constantly changing nature. And we haven't even heard yet from the big-name computer companies that may enter the SRM space. While unlikely, it isn't impossible that SRM will follow the same path it did in the mainframe world. What, for example, is to stop a Microsoft, Sun, IBM, Cisco or Brocade from entering this field with an SRM option, especially with so many small vendors looking to get bought out?
All these major players would have to do is buy a small company, add the new SRM functionality to their core OS package, and they're in the SRM business. In fact, Brocade's recent purchase of Rhapsody Networks may indicate just such a move to incorporate more storage intelligence into its switches. And with a portion of the intelligence of this application-centric approach moving into the network, Cisco could be a major player, especially given its army of engineers, its dominance in the network and its recent foray into the Fibre Channel space.
All this competition is good for the end user. As they mature, SRM tools will get better and better, the number of choices will drop and the winners will begin to emerge. The winners will be the companies that successfully execute on adopting and integrating these different SRM approaches into one robust application-centric solution.