This article can also be found in the Premium Editorial Download "Storage magazine: Solid-state adds VROOM to virtual desktops."
What to accelerate
Each database management system has its own naming conventions for its components, but in general, databases have a transaction log file that receives new data. This relatively small file boosts performance: as soon as new or modified data is safely written to the log, the database can send an acknowledgement back to the client, which is much faster than writing the data directly to the main data file. Later, in the background, the log file is flushed into the main data file.
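The write-ahead pattern described above can be sketched in a few lines of Python. This is a toy model under my own assumptions, not the internals of any real DBMS: writes are acknowledged once they reach the small, fast log, and a background thread later flushes them into the slower main data store.

```python
import threading
import time
from collections import deque


class ToyTransactionLog:
    """Toy write-ahead log: ack on log append, flush to the main
    data file in the background. Illustrative only."""

    def __init__(self):
        self.log = deque()       # fast, append-only log
        self.data_file = {}      # stands in for the slow main data file
        self.lock = threading.Lock()
        self._stop = False
        self._flusher = threading.Thread(target=self._flush_loop, daemon=True)
        self._flusher.start()

    def write(self, key, value):
        # Append to the log and acknowledge immediately; the client
        # never waits for the slower main-file update.
        with self.lock:
            self.log.append((key, value))
        return "ack"

    def _flush_loop(self):
        # Background flush: drain the log into the main data file.
        while not self._stop:
            with self.lock:
                while self.log:
                    key, value = self.log.popleft()
                    self.data_file[key] = value  # the "slow" write
            time.sleep(0.01)

    def close(self):
        # Stop the flusher and drain anything still in the log.
        self._stop = True
        self._flusher.join()
        while self.log:
            key, value = self.log.popleft()
            self.data_file[key] = value
```

In a real database the log is a sequential on-disk file and the flush is governed by checkpointing policy, but the division of labor is the same: the client-visible write path touches only the log.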
There are two performance concerns with the transaction log. First, it must accept simultaneous writes from hundreds or thousands of users. Second, it must drain fast enough to avoid a “write-through” situation, where update performance slows to the point that the log file provides no benefit. These situations typically occur periodically throughout the day when demand peaks, leading to unpredictable performance. Write performance is critical for these files.
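The second concern is simple arithmetic: if writes arrive faster than the background flush can drain them, the log grows until it fills and updates stall. A minimal sketch, with all rates and sizes being illustrative assumptions rather than measured values:

```python
def seconds_until_log_full(ingest_mb_s, flush_mb_s, log_capacity_mb):
    """If ingest outpaces the flush, the log grows at the difference
    (MB/s) and eventually fills, stalling updates. Returns None when
    the flush keeps up and the log never fills."""
    net_growth = ingest_mb_s - flush_mb_s
    if net_growth <= 0:
        return None  # flush keeps pace; no stall
    return log_capacity_mb / net_growth


# Hypothetical peak-demand scenario: 120 MB/s of incoming writes,
# a flush that drains 80 MB/s, and 2,000 MB of log space.
# The log grows at 40 MB/s, so it fills in 2000 / 40 = 50 seconds.
print(seconds_until_log_full(120, 80, 2000))   # 50.0
print(seconds_until_log_full(50, 80, 2000))    # None (off-peak: flush keeps up)
```

This is why the problem is periodic: off-peak, the flush keeps up and the log stays small; at peak, the same configuration tips into stalls.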
Another fundamental set of database components is the indexes, the smaller files used to improve search performance. Searches run against these indexes, which point the query directly to the row or group of rows containing the rest of the information related to the searched-for item. This is much faster than searching the entire database from top to bottom. Typically, the database administrator will select which fields to index based on the queries users run most often.
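The difference an index makes can be shown with a toy in-memory table in Python. The table, field names, and row counts below are invented for illustration; real databases use on-disk B-tree or hash structures rather than a Python dict, but the principle is the same: one lookup instead of a full scan.

```python
# A toy table of a million (id, name) rows.
rows = [(i, f"user{i}") for i in range(1_000_000)]


def full_scan(target_name):
    """No index: examine every row until the match is found."""
    for row_id, name in rows:
        if name == target_name:
            return row_id
    return None


# Building the "index" is a one-time pass over the table; after that,
# each query on the indexed field is a single hash lookup that points
# straight at the row.
index = {name: row_id for row_id, name in rows}


def indexed_lookup(target_name):
    return index.get(target_name)


# Both return the same row; the indexed path just gets there without
# touching the other 999,999 rows.
assert full_scan("user999999") == indexed_lookup("user999999") == 999999
```

For a worst-case search (a match near the end of the table), the scan touches every row while the lookup touches one, which is why index reads dominate query-heavy workloads.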
Finally, there’s the database itself. In most cases, the indexes and log files are the primary bottlenecks, since the database is updated or queried as a result of activity against those files. But there are situations where an index search isn’t possible: there are too many fields to search, or indexing won’t provide a performance boost. And in certain databases, granular control over where these files are placed isn’t a user function; placement is hard-set by the application.
Is storage really the problem?
The first step is to make sure the performance problem really is storage. The simplest check is to review CPU utilization on the server(s) running the database application. In general, if CPU utilization is low but performance is poor, there’s a good chance the bottleneck is storage. More detailed approaches can and should be used as well: almost every database application includes its own performance analysis tool, and there are third-party applications and consultants who specialize in database apps. It’s also remarkable how accurate the instincts of an experienced database administrator can be; if the DBA is complaining about performance, storage managers should listen.
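On Linux, the "low CPU but slow" check can be made concrete by looking at iowait: time the CPU spends idle while waiting on outstanding I/O. A sketch that parses the aggregate `cpu` line of `/proc/stat` (field order per the proc(5) man page; the sample values below are invented for illustration):

```python
def cpu_breakdown(stat_line):
    """Parse the aggregate 'cpu' line from Linux /proc/stat and return
    the share of time spent busy vs. waiting on I/O. High iowait
    alongside low busy time suggests storage, not CPU, is the
    bottleneck. Linux-specific; fields are cumulative jiffies:
    user nice system idle iowait irq softirq steal ..."""
    fields = [int(x) for x in stat_line.split()[1:]]
    user, nice, system, idle, iowait = fields[:5]
    total = sum(fields)
    return {
        "busy_pct": 100.0 * (user + nice + system) / total,
        "iowait_pct": 100.0 * iowait / total,
    }


# Illustrative sample line: the CPU is only 15% busy, yet 25% of its
# time is spent waiting on I/O -- the classic storage-bottleneck shape.
sample = "cpu  1000 0 500 6000 2500 0 0 0 0 0"
print(cpu_breakdown(sample))  # {'busy_pct': 15.0, 'iowait_pct': 25.0}
```

On a live system you would feed it the first line of `/proc/stat` (ideally as a delta between two readings, since the counters are cumulative since boot); database-native tools and OS utilities such as iostat give the same picture with less effort.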
This was first published in July 2012