Our biggest challenge occurs under two conditions: 1) immediately after a database update, and 2) when we deploy a new version of the application. When database updates occur, all of the application instances begin reading the recently updated records, and the resulting data contention leads to I/O wait states. After a new deployment, the application runs much slower until enough of the data has been read into the application's memory, which itself takes time. Until now, storage access has received little attention in our performance work; instead, we've tuned the application, tuned the database and bought the fastest CPUs we could find.
- Might a solid state drive provide us with faster access and allow more concurrent requests for data? Some vendors claim to provide constant random access to data at a sustained peak rate of 3 GB per second and 250,000 random I/Os per second at under 20 microseconds each. Perhaps we could connect 10 application boxes to one database box if the data were transferred 50 times faster.
- We currently use Gigabit Ethernet over copper. This is starting to sound like a dumb question to me, but ... would 2 Gb Fibre Channel improve my situation?
Some other criteria include:
Cost: Performance can sometimes come at a high price. Remember that you are trying to attain a performance target; getting there may require additional dollars for upfront hardware. Understand, too, that the hardware is just one upfront (depreciated) cost; there is also upkeep and management, which, in the end, can cost many times what the physical storage does. Include the cost of the full lifecycle of the solution as well, including backup and recovery, mirroring and replication, and high-availability capabilities, all of which are important in understanding the total cost of ownership.
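The lifecycle-cost point above can be made concrete with a small sketch. All dollar figures here are hypothetical placeholders, chosen only to illustrate how recurring management costs can outweigh the purchase price; substitute your own vendor quotes and staffing estimates:

```python
# Hypothetical total-cost-of-ownership comparison. Every dollar amount
# below is an invented placeholder, not a real quote.

def tco(upfront_hw, annual_mgmt, annual_backup_dr, years=5):
    """Lifecycle cost: upfront hardware plus recurring upkeep over N years."""
    return upfront_hw + years * (annual_mgmt + annual_backup_dr)

# A cheaper array whose management burden dominates over five years...
array_a = tco(upfront_hw=80_000, annual_mgmt=40_000, annual_backup_dr=15_000)
# ...versus a pricier array that is cheaper to run.
array_b = tco(upfront_hw=150_000, annual_mgmt=20_000, annual_backup_dr=10_000)

print(array_a)  # 355000
print(array_b)  # 300000
```

With these (made-up) numbers, the array that costs nearly twice as much upfront is the cheaper one over its lifecycle, which is exactly why the purchase price alone is a poor basis for the decision.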
Manageability: If your company is comfortable managing the network storage, what would moving to a Fibre Channel setup involve? Retraining the group and purchasing new hardware and software management tools? Possibly outsourcing the management to a third party? Also, what policies and procedures will need to be updated to accommodate a new storage platform, and what changes are needed on the hosts? I suggest meeting with the end users and understanding their requirements as well, rather than looking only at the single criterion of performance; that discussion also reveals how well the solution will integrate into the current environment.
Some more database background: Most databases write in 8 KB blocks, and, depending on the read and write patterns (random or sequential), NFS, iSCSI, or FCP could each meet the need quite well. The challenge in selecting the best-performing protocol usually isn't the physical speed of the wire; it is understanding the requirements of the application, then creating a requirements document that the team can review and share with the storage vendors so each can respond with their best solution. Once you receive the proposals, go through them, pick the top two or three, and invite those vendors into your shop to prototype the environment and prove their solutions against your challenges. Also, make sure each vendor can produce real, live customer references for you to talk with about the solution. The best solution is one that is already in production, so you can learn from those customers and get advice on what works, what doesn't, and what is coming down the road in terms of the total solution.
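To see why the wire is rarely the deciding factor for random database I/O, consider how many 8 KB operations it takes to fill even a Gigabit Ethernet link. A rough sketch, assuming the 8 KB block size mentioned above and a ballpark figure of ~200 random IOPS per disk spindle for drives of this era (both assumptions, not measurements):

```python
# Illustration: with small random I/O, back-end IOPS usually gates
# throughput long before the wire does. Figures are rough assumptions.

BLOCK = 8 * 1024            # typical database block size, per the article
GBE_BYTES_S = 125_000_000   # Gigabit Ethernet raw payload ceiling (1 Gb/s)

# IOPS needed to saturate one GbE link with 8 KB random reads:
iops_to_saturate = GBE_BYTES_S // BLOCK
print(iops_to_saturate)     # 15258

# At a ballpark ~200 random IOPS per spindle, keeping even one GbE link
# busy with random 8 KB I/O would take on the order of 76 disks.
spindles = round(iops_to_saturate / 200)
print(spindles)             # 76
```

In other words, unless the workload is largely sequential, a faster wire mostly moves the bottleneck back to the disks and the application's access pattern, which is why the requirements document matters more than the protocol's headline speed.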
The vendor that best demonstrates the ability to solve your challenge is the clear winner. Try to keep politics out of the decision; focus on the costs and on how well the solution's capabilities suit your needs and the needs of your company. You may be betting your happiness, and in the end your job, on the choice.
This was first published in August 2004