Ask the Expert

Purchasing new storage

We are looking at purchasing a new storage system for our financial application. What should we be looking for? The system needs to provide a very high level of performance, be up 24x7 and mirror information in real time to our secondary data center. The challenge with our current system is that it has outlived its usefulness and now costs us more to maintain than a new system with similar performance, better availability and better features from the same vendor.


This is a great question, as it touches on the way technology in the storage space evolves on a regular basis: the economics of innovation drive down costs, not to mention the competitive pressures that are ever-present in the technology marketplace.

I have to ask about your current solution: Is the performance of your current system not meeting your goals as an organization? What does performance mean for your organization? Is it the storage itself, the interconnect, the application server, or the front-end to the users -- what specifically is the performance issue related to? I apologize, but simply saying that performance is insufficient does not give me enough to go on to provide specific feedback.

I can give you some thoughts on how to lay out the information in terms of RAID levels (if your storage solution supports laying out different RAID levels), as well as on looking at each piece of the equation in the solution, but I think that you should first engage your current vendor in a discussion about their maintenance costs. It is common practice in the storage industry for the third year of maintenance to cost more than upgrading to a new product. This is an undocumented practice that has been around for many years, and it aids the upgrade in three ways:

  1. The customer's (your) satisfaction -- you get to look like a hero to the management team for reducing the cost of the solution.
  2. The financial community is happy because the capital depreciation rules allow for a 36 month write-down on the new equipment.
  3. The storage vendor gets to increase their revenue.
A true win/win, but did the upgrade truly solve the issue, or just lock you in for another three-year cycle?
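To make the economics above concrete, here is a minimal sketch comparing the total cost of keeping an aging array on steep third-year maintenance against replacing it. All prices are hypothetical, purely for illustration -- get real quotes from your vendor:

```python
def three_year_cost(purchase_price, annual_maintenance):
    """Total cost of ownership over one 3-year cycle (matches the
    36-month depreciation write-down mentioned above)."""
    return purchase_price + 3 * annual_maintenance

# Hypothetical figures: keeping the old array costs nothing up front
# but carries punitive maintenance; the new array costs more to buy
# but comes with discounted support.
keep_old = three_year_cost(0, 120_000)        # 360000
buy_new = three_year_cost(250_000, 30_000)    # 340000
print(keep_old, buy_new)
```

If the numbers come out anywhere near this close, the upgrade looks like the obvious move -- which is exactly the dynamic the vendor is counting on.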

Ok, back to the question at hand: You need high performance because your financial system is running on top of a database, and that database is crunching some serious data. You need to replicate your data in real-time as this is one of the most important applications in your organization. Also, there are regulatory requirements for the application, and the data being reported from it, let alone understanding your current revenue, profit margins, expenses, and many more important metrics that are the lifeblood of your organization. All sound good so far?

By the question, it sounds like you have the replication piece sorted out, and you have been using it for some time. Are you happy with the replication solution? If so, great. If not, then it could be time to look at this part of the solution as well. As the technology has moved forward on the storage platform side, it has also evolved in replication and mirroring technology. There are new solutions available for replication over longer distances with less disruption in services that not only maximize bandwidth, but lower the Recovery Time Objective (RTO) and Recovery Point Objective (RPO), and are more affordable than previous solutions.
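As a back-of-the-envelope check on whether a replication link can keep up, here is a sketch of how long the remote mirror takes to absorb a burst of changed data. The 70% efficiency factor is an assumption standing in for protocol overhead and latency effects; your replication vendor can give you real numbers:

```python
def mirror_catch_up_seconds(burst_gb, link_mbit_s, efficiency=0.7):
    """Seconds for the remote mirror to absorb a burst of changed data.
    While the mirror is behind, that lag is effectively your RPO."""
    usable_mb_s = link_mbit_s / 8 * efficiency   # Mbit/s -> MB/s, derated
    return burst_gb * 1024 / usable_mb_s

# A 10 GB end-of-day batch run over a 155 Mbit/s link takes
# roughly 12-13 minutes to drain at 70% efficiency.
print(mirror_catch_up_seconds(10, 155))
```

If sustained change rate ever exceeds the link's usable bandwidth, the mirror falls behind without bound -- that is the point at which a faster link or a more efficient replication product pays for itself.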

Have you looked at the application server(s) to see if they are due for an upgrade? It is usually wise to use this opportunity to create a project that explores the full solution and does a tune-up or a full engine swap-out. Doing this will take more effort, but in the end it will yield a better outcome and an understanding of why the environment is the way it is, as well as providing you with the ammunition needed to justify the purchase.

At this point, you can safely take a look at all of the piece parts that make up the solution. Is the interconnect up to speed? If it is Fibre Channel, is it a 1 Gbit/s or 2 Gbit/s solution? If so, you may want to look at 4 Gbit/s technology when you make the upgrade. You will need to check with your storage vendor of choice to see what they support in terms of HBAs, switches, etc. for their storage systems. The move to a faster interconnect will speed data from your storage system to your application servers, provided that both ends support the faster rate. I should have asked before assuming that you were using a Fibre Channel solution; I apologize for making an assumption.
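For a rough sense of what the interconnect upgrade buys you: each Fibre Channel generation signals at roughly 1.0625 Gbaud per "gigabit" and uses 8b/10b encoding (10 bits on the wire per data byte), so usable throughput scales linearly with the generation. A sketch:

```python
def fc_throughput_mb_s(gfc):
    """Approximate usable throughput per direction for 1/2/4 GFC.
    Line rate is gfc x 1.0625 Gbaud; 8b/10b encoding means only
    80% of the raw bits carry data."""
    baud = gfc * 1.0625e9
    return baud * 0.8 / 8 / 1e6   # bits -> bytes -> MB/s

# 1 GFC ~ 106 MB/s, 2 GFC ~ 212 MB/s, 4 GFC ~ 425 MB/s
for g in (1, 2, 4):
    print(g, round(fc_throughput_mb_s(g)))
```

So moving from 2 Gbit/s to 4 Gbit/s doubles the per-port ceiling -- but only if the HBA, switch and storage port all negotiate the higher rate.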

You could have an iSCSI-connected host, or even be running direct-attached storage or a NAS environment. In any case, the same discussion holds true. Check the interconnect: If you are running an IP-based protocol, check the settings for duplex and jumbo frames, as these are important in maximizing performance on any IP-based storage protocol. Set the duplex option on the NICs, switch ports and storage ports all to "full" (don't trust the auto setting at every point in the network), and move jumbo frames up to an MTU of 9000 (the base is usually 1500). Each of these will increase traffic-flow performance considerably.

If you are using Fibre Channel, check the queue depth settings on the server HBA, and get the recommended numbers from your storage vendor to ensure that you don't run into issues when changing them. The higher the queue depth you can set, based on the horsepower in your server, the faster the performance you can achieve, but only up to a point: Raising queue depths for their own sake, when the server can't keep up, will not improve performance. Check with your vendor and your HBA documentation, and understand what you are doing before you update these configuration settings.
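Two quick calculations behind those recommendations. Jumbo frames shrink the fraction of every packet spent on headers, and Little's law explains why queue depth only helps up to the array's service time. The 40-byte IPv4+TCP header and the example service time are simplifying assumptions for illustration:

```python
def header_overhead(mtu, header_bytes=40):
    """Fraction of each frame consumed by IP+TCP headers
    (40 bytes assumed; iSCSI adds its own overhead on top)."""
    return header_bytes / (mtu + header_bytes)

def max_iops(queue_depth, service_time_ms):
    """Little's law: concurrency = throughput x latency, so a path
    with a fixed queue depth tops out at queue_depth / service_time
    outstanding I/Os per second."""
    return queue_depth * 1000.0 / service_time_ms

print(round(header_overhead(1500) * 100, 1))   # ~2.6% overhead at MTU 1500
print(round(header_overhead(9000) * 100, 1))   # ~0.4% overhead at MTU 9000
print(max_iops(32, 5))                         # QD 32, 5 ms service time
```

A queue depth of 32 against a 5 ms service time caps out around 6,400 IOPS; raising the queue depth further only helps if the array actually services the extra outstanding requests concurrently, which is exactly why the vendor's numbers matter.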

One other element to consider is that the information in your database has aged and you may be storing information that is no longer needed. This can cause database performance to suffer, not to mention wasting space in replication, backup and other activities related to the database. HP purchased a solution from Outerbay that lets you identify information that is old and migrate it out of the database to lower-cost tiered storage (RAID-protected ATA disks in an array instead of mirrored FC disks, for example). Also, a smaller database means faster startup and better RPO and RTO times. All goodness.

Ok, back to the storage. Do you have your RAID levels configured based on your database workload? Do you have enough target ports in the mix if you are using Fibre Channel? Have you analyzed your application to understand your workload? If not, I would suggest looking at a database analysis tool from Quest or CA. These tools will help you understand what your read/write patterns look like to storage, so you can match up the right spindle layout and striping/mirroring configuration to maximize performance and availability.
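The read/write mix those tools report feeds directly into spindle sizing, because each RAID level multiplies writes on the back end. A sketch using the classic write-penalty figures (2 back-end I/Os per write for RAID 10, 4 for RAID 5) and an assumed 150 IOPS per 15k spindle -- substitute your vendor's per-disk numbers:

```python
import math

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def spindles_needed(host_iops, read_fraction, raid, iops_per_disk=150):
    """Back-end IOPS = reads + writes x write penalty; divide by what
    a single spindle delivers (150 IOPS assumed for a 15k FC disk)."""
    reads = host_iops * read_fraction
    writes = host_iops - reads
    backend = reads + writes * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend / iops_per_disk)

# A 3,000 IOPS workload at 70% reads:
print(spindles_needed(3000, 0.7, "raid10"))   # 26 spindles
print(spindles_needed(3000, 0.7, "raid5"))    # 38 spindles
```

This is why a write-heavy financial database often lands on RAID 10 even though RAID 5 looks cheaper per usable gigabyte: the write penalty buys the difference back in spindle count.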

Did I answer your question, or just start more questions brewing? If this were an e-mail environment like Exchange, I would suggest reviewing an article that I wrote on


This was first published in August 2006
