Hands-On Review: Kashya KBX5000


This article can also be found in the Premium Editorial Download "Storage magazine: How does your storage salary stack up?"


The KBX5000 management console:


The Kashya Management Console is clean and uncluttered, and offers a configuration view, KBX5000 status monitor and volume details.

The KBX5000 interface
The KBX5000's management graphical user interface (GUI) is intuitive and provides a visual representation of your storage infrastructure (see "The KBX5000 management console" on this page). On the left side (New York) of the pane are hosts, switches and storage, with clustered KBX5000s (K-Boxes) connected to the SAN fabric and IP WAN pipe. The same configuration, minus the application hosts, is represented on the right side (Boston) of the pane.

Once the KBX5000 on the left discovers the nodes on the SAN, it's easy to include the nodes in a consistency group and replicate volume data to the remote SAN according to service-level policies. You can change the direction of the replication of discovered volumes to allow for disaster recovery support.

Consistency groups are the heart of Kashya's arrangement. They are logical representations of the hosts, storage and applications that share common business policies governing the level of service their volumes receive while replicating.

By marshalling SAN nodes and applications into consistency groups, a storage administrator can apply replication quality-of-service (QoS) policies to each application.
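The grouping described above can be pictured as a simple data structure. The following Python sketch is illustrative only; the class and field names (ReplicationPolicy, ConsistencyGroup, max_lag_seconds and so on) are hypothetical, not Kashya's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicationPolicy:
    """Hypothetical QoS policy attached to a consistency group."""
    max_lag_seconds: float     # maximum tolerated replication lag
    min_bandwidth_mbps: float  # bandwidth floor reserved on the WAN pipe

@dataclass
class ConsistencyGroup:
    """Hosts, volumes and an application sharing one business policy."""
    name: str
    policy: ReplicationPolicy = None
    volumes: list = field(default_factory=list)

    def add_volume(self, volume_id: str):
        self.volumes.append(volume_id)

# Group one application's volumes and apply a single policy to all of them.
erp = ConsistencyGroup("erp-prod", policy=ReplicationPolicy(5.0, 100.0))
erp.add_volume("lun-0042")
erp.add_volume("lun-0043")
```

The point of the structure is that the policy lives on the group, not on individual volumes, so every volume an application touches replicates under the same service level.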

What's inside
Bandwidth consumption is potentially the largest line item of any replication technology. Kashya tries to control this cost with a patent-pending compression algorithm that can achieve up to 7:1 compression (depending on the application), as well as with features found in other replication technologies. For example, when the KBX5000 recognizes that the same block of data is updated repeatedly within the same snapshot window, only the last version of the changed block is sent across the WAN pipe.
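The "last version wins" behavior within a snapshot window amounts to coalescing writes by block address. A minimal Python sketch (the function name and tuple layout are assumptions for illustration, not Kashya's implementation):

```python
def coalesce_writes(writes):
    """Keep only the most recent version of each block written within one
    snapshot window, so a repeatedly updated block crosses the WAN once.
    `writes` is an ordered list of (block_address, data) tuples."""
    latest = {}
    for addr, data in writes:
        latest[addr] = data  # later writes overwrite earlier ones
    return sorted(latest.items())

# Block 100 is written three times in the window; only b"v3" is sent.
window = [(100, b"v1"), (104, b"a"), (100, b"v2"), (100, b"v3")]
to_send = coalesce_writes(window)  # [(100, b"v3"), (104, b"a")]
```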

Furthermore, large block sizes (64KB) typically associated with larger database objects or audio/video files can be subdivided by the KBX5000, with only the delta sent to the remote replicated volume, again saving on bandwidth. Additional compression is possible for applications such as Oracle, SQL Server and DB2 because Kashya has tailored its compression algorithms to match the output data characteristics of those applications.
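The sub-block delta idea can be sketched as a chunk-by-chunk comparison of old and new block contents, shipping only the chunks that changed. The chunk size and function name below are assumptions for illustration:

```python
CHUNK = 4096  # hypothetical sub-block granularity within a 64KB block

def block_delta(old: bytes, new: bytes, chunk: int = CHUNK):
    """Compare a large (e.g. 64KB) block chunk by chunk and return only
    the changed sub-ranges as (offset, data) pairs."""
    delta = []
    for off in range(0, len(new), chunk):
        if old[off:off + chunk] != new[off:off + chunk]:
            delta.append((off, new[off:off + chunk]))
    return delta

old = bytes(65536)            # a 64KB block, all zeros
new = bytearray(old)
new[8192:8196] = b"edit"      # touch bytes inside one 4KB chunk
changed = block_delta(old, bytes(new))
# One 4KB chunk crosses the WAN instead of the full 64KB block.
```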

Big buffers, superior compression algorithms and delta differentials are no longer a luxury in long-distance data replication. More and more users expect these benefits to be included as part of an overall replication configuration.

Selecting a replication policy is not a matter of choosing synchronous or asynchronous delivery, as in most solutions. In the Kashya scheme, policies manage the bandwidth of the IP WAN pipe by specifying minimum and maximum lag times, or even a minimum bandwidth to provide QoS functionality on the outgoing KBX5000's IP port.

The KBX5000 will toggle between synchronous and asynchronous delivery modes as needed to keep up with demanding applications or even to provide a choke point for non-critical applications consuming too much bandwidth. This is a smarter, more flexible approach because very few applications experience the same traffic behavior all the time. There are times when an application will benefit more from synchronous delivery, and times when asynchronous is ideal. As long as there's engineering to hide the switching between delivery modes from the user and to ensure that synchronous writes experience the same perceived amount of service as asynchronous writes, then all is well. Otherwise, an application may respond slowly when the KBX5000 switches from asynchronous to synchronous delivery, making the user wait for the remote commit.
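The toggling logic described above can be thought of as hysteresis on replication lag: go synchronous when the link is keeping up, fall back to asynchronous when lag exceeds the policy's ceiling, and otherwise leave the mode alone to avoid thrashing. This is a speculative sketch of such a decision rule, not Kashya's actual algorithm:

```python
def pick_mode(lag_seconds: float, min_lag: float, max_lag: float,
              current: str) -> str:
    """Choose the delivery mode for the next interval using the policy's
    lag bounds. Hysteresis (two thresholds) prevents rapid mode flapping."""
    if lag_seconds >= max_lag:
        return "async"   # falling behind: stop making writers wait
    if lag_seconds <= min_lag:
        return "sync"    # link is keeping up: tighten the recovery point
    return current       # in between: keep the current mode

# A demanding burst pushes lag past the ceiling, so delivery goes async;
# once lag drains below the floor, delivery returns to sync.
mode = pick_mode(10.0, 1.0, 5.0, "sync")   # "async"
mode = pick_mode(0.5, 1.0, 5.0, mode)      # "sync"
```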

Kashya solves this problem by using a large buffer in the KBX5000. When an application server performs a synchronous write to a managed volume, it does so by sending a second copy of the write to the KBX5000's buffer. At that time, the write returns to the application, the user continues to work and the KBX5000 is responsible for guaranteeing the write of the data at the other end.
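The buffering scheme above resembles a producer/consumer queue: the application's write is acknowledged as soon as the second copy lands in the appliance's buffer, and a background drainer takes over responsibility for remote delivery. A minimal Python sketch under that assumption (class and method names are hypothetical):

```python
import queue
import threading

class ReplicationBuffer:
    """Sketch of the appliance-side buffer. The write is acknowledged once
    the copy is enqueued; a background drainer delivers it remotely."""
    def __init__(self, deliver):
        self.q = queue.Queue()
        self.deliver = deliver  # callable that ships a block to the remote site
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block: bytes) -> str:
        self.q.put(block)  # enqueue the second copy of the write
        return "ack"       # the application continues immediately

    def _drain(self):
        while True:
            self.deliver(self.q.get())
            self.q.task_done()

remote = []                               # stand-in for the remote volume
buf = ReplicationBuffer(remote.append)
ack = buf.write(b"page-1")                # returns "ack" without waiting
buf.q.join()                              # for demonstration: wait for drain
```

The design trades latency for risk: the application never waits for the remote commit, but anything still in the buffer is exposed if the appliance fails, which is why the article stresses buffer availability.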

This implies that the IP WAN pipe must be high quality and resilient, which in turn increases cost. Because the application is told that the remote synchronous write has completed, the highest probability of successful completion is required to ensure the integrity of the replicated data.

Kashya's implementation is more cost efficient than most array-to-array replication technologies, but its implementation of synchronous writes indirectly increases the cost of the IP network portion of the configuration, as well as the chance that data will be lost if the buffer isn't highly available.

This was first published in December 2004
