
Datrium DVX adds converged turnkey option

Datrium plans to release a major update adding key enterprise storage features such as snapshots and replication, along with a new 'open converged' Rackscale turnkey system option.

Datrium this month plans to release a major DVX product update that adds important enterprise storage features and introduces a "turnkey" option converging its storage appliance and new compute hardware.

The Sunnyvale, Calif.-based startup will add capabilities such as replication, snapshots and encryption with the Datrium DVX Software 2.0 release that is due to become generally available later this month.

Datrium's flagship DVX storage system for VMware virtual machines (VMs) is designed to accelerate data reads through server-based flash cache and scale capacity through DVX Data Nodes, which were formerly known as NetShelf back-end storage appliances. The DVX Software runs on the compute and storage nodes, and users manage the system through VMware vSphere.
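In rough terms, that split means reads are served from flash on the same host as the VM whenever possible, with the Data Node holding the durable copy. The sketch below is a conceptual illustration of that read path, not Datrium's actual code; all class and function names are hypothetical.

```python
# Conceptual sketch (not Datrium's code): a DVX-style read path where reads
# are served from host-local flash cache when possible and only fall back to
# the shared Data Node on a miss. Names are hypothetical.

class HostFlashCache:
    """Host-local SSD cache that accelerates reads for the VMs on this host."""
    def __init__(self):
        self._blocks = {}

    def get(self, block_id):
        return self._blocks.get(block_id)          # None on a cache miss

    def put(self, block_id, data):
        self._blocks[block_id] = data              # eviction policy omitted


class DataNode:
    """Shared back-end appliance (formerly NetShelf) holding durable capacity."""
    def __init__(self, store):
        self._store = store

    def read(self, block_id):
        return self._store[block_id]               # durable copy of the block


def read_block(block_id, cache: HostFlashCache, data_node: DataNode):
    data = cache.get(block_id)
    if data is None:                               # miss: fetch from the Data Node
        data = data_node.read(block_id)
        cache.put(block_id, data)                  # warm the local cache
    return data
```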

With the initial DVX version, which began shipping early last year, customers had to supply their own third-party x86 server hardware and solid-state drives (SSDs) for the flash cache. Datrium now plans to add a turnkey DVX Rackscale system that combines the DVX storage appliances with new DVX Compute Nodes.

The DVX Compute Nodes offer the option of 16-core or 28-core Intel processors and come preconfigured with up to 768 GB of RAM and eight SSDs, as well as VMware and DVX Software.

Datrium DVX Open Convergence platform

Datrium now refers to DVX as an "open convergence platform" -- as opposed to hyper-converged -- because it gives customers deployment options for the commodity servers and DVX Data Nodes. They can use the new DVX Rackscale system with only Datrium Compute Nodes, mix and match their own x86 servers with DVX Compute Nodes, or supply all the commodity servers as they did in the past.


"The major thing we hear as a knock against converged vendors is that they're a silo," said Datrium CEO Brian Biles. "It locks you into this infrastructure, and it's very hard to get out, so you end up with less pricing control on servers and that kind of thing. This model says there's a 'get out of jail free' card."

Biles said Datrium's open converged model would be particularly useful for customers who aren't ready to refresh their existing servers and for those who use "a little more exotic" gear not typically found in converged offerings, such as quad-socket servers or NVMe SSDs.

"We've taken sort of the advantage of incremental scaling that hyper-converged offers versus converged; taken it a step further with performance isolation and much deeper cloud data management integration," Biles said.

Biles said there's a market for the company's original software-defined storage approach, especially among the largest enterprises, which have the teams to test and integrate a system, and the smallest companies. He said Datrium collected over 70 customers, representing more than 100 deployments, during its first year of operation.

"But there's a bigger market of people who just want it to be easy" and "have the turnkey alternative," Biles said. "We had a lot of customers that thought that would be a simpler path for them."

Cameron Joyce, a senior systems engineer at Altair Global, a relocation services provider based in Plano, Texas, said his company is expanding its disaster recovery (DR) site with Datrium DVX. He said the DVX Rackscale system is especially appealing because it uses Dell server hardware, and his company is a Dell shop.

"We also really like the idea that we don't really have to think about configuring a host, how much CPU, how much RAM, etc. " Joyce said. "I can just go tell my procurement team, 'Hey, I need this SKU from Datrium.' I know that when it shows up it's going to have the exact amount of compute and SSDs. Everything's going to be perfectly sized for my environment, and I can just rack it, cable it, power it and get it up in the environment without having to do a lot of effort on the front or the back end."

DVX Rackscale configurations start at a single DVX Compute Node and DVX Data Node, listing at $118,000, with the DVX Software included. The DVX Data Node sells for $94,000, and the DVX Compute Node costs $24,000, according to Biles. The Datrium system can scale out to a maximum of 32 compute nodes per DVX Data Node.
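Using the list prices Biles cited, the configuration arithmetic is straightforward. The sketch below is illustrative only, with no discounts or support costs factored in; it simply shows how the entry price is reached and how a larger configuration would add up.

```python
# Illustrative arithmetic using the list prices quoted in the article:
# $94,000 per DVX Data Node, $24,000 per DVX Compute Node, software included.
DATA_NODE_LIST_PRICE = 94_000
COMPUTE_NODE_LIST_PRICE = 24_000
MAX_COMPUTE_NODES_PER_DATA_NODE = 32

def rackscale_list_price(compute_nodes: int, data_nodes: int = 1) -> int:
    """Rough list price for a DVX Rackscale configuration (no discounts)."""
    if compute_nodes > data_nodes * MAX_COMPUTE_NODES_PER_DATA_NODE:
        raise ValueError("exceeds 32 compute nodes per Data Node")
    return compute_nodes * COMPUTE_NODE_LIST_PRICE + data_nodes * DATA_NODE_LIST_PRICE

print(rackscale_list_price(1))   # 118000 -- the entry configuration
print(rackscale_list_price(4))   # 190000 -- four compute nodes, one data node
```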

Datrium systems include new Adaptive Pathing software to ease switch configuration, especially for customers who buy "white box" switches, Biles said.

DVX Software 2.0 update

The DVX Software 2.0 update includes new Data Cloud software that Datrium claims integrates functionality typically found in third-party backup, DR, copy data management and archiving products.

Datrium's Data Cloud Foundation software adds support for granular snapshots that can be taken at the VM, vDisk and file level, along with a searchable Snapstore backup catalog. Another new feature, Protection Groups, lets administrators set policies for functions such as replication and data retention, and snap all objects within a group at the same I/O point for data consistency.
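As a rough illustration of how such a policy might be expressed (the fields and names below are hypothetical, not Datrium's actual configuration schema), a Protection Group ties a set of objects to a shared schedule, retention rule and replication target, and snaps every member at the same I/O point:

```python
# Hypothetical sketch of a Protection Group style policy; field names are
# illustrative, not Datrium's actual API or schema.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ProtectionGroup:
    name: str
    members: List[str]                 # VMs, vDisks or files covered by the policy
    snapshot_interval_minutes: int     # how often to snap the whole group
    retention_days: int                # how long snapshots are kept
    replicate_to: Optional[str] = None # optional remote DVX replication target

    def take_group_snapshot(self, io_point: int) -> Dict[str, dict]:
        """Snap every member at the same I/O point for group-wide consistency."""
        return {member: {"io_point": io_point} for member in self.members}

sql_group = ProtectionGroup(
    name="sql-tier",
    members=["sql-vm-01", "sql-vm-02"],
    snapshot_interval_minutes=15,
    retention_days=30,
    replicate_to="dr-site-dvx",
)
snapshot = sql_group.take_group_snapshot(io_point=1_024_768)
```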

Scott Weinberg, CEO of Neovera, a managed service provider based in Reston, Va., said he would not have purchased the Datrium DVX last year without assurances that snapshot support was on the way.

"If nothing else was coming and it was just fast storage, then it was not going to make the PO [purchase order] cut," Weinberg said. "Snapshotting is critical because with virtualization, we can snapshot at the VM layer. There's a lot of replication technology out there that utilizes the VM snapshot, but it's slow. Whenever you do that, the VM itself pauses. It's hard to have a high performing application that's processing hundreds of thousands of financial transactions and all of a sudden it pauses every 15 minutes because you've got to take a snapshot.

"If you can do it at the disk, underneath the VM, then it's seamless to the VM. That was critical for us because that's how we do backup and recovery and restore files. That's how we end up doing replication. Anytime we do maintenance, we do a snapshot. Anytime we make any change to the application, the operating system, anything to that VM, we do a snapshot."

Another important DVX Software 2.0 feature for Weinberg is Elastic Replication. Biles said the system uses the compute nodes to do the replication work and replicates from local flash, so performance scales with the number of hosts configured. He said the Datrium DVX system can keep replicating as long as one host server is available, unlike hyper-converged systems that stop replicating if too many hosts go down.
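A minimal sketch of that scaling idea, under the assumption that replication jobs are simply spread across whatever hosts are currently up (a conceptual illustration, not Datrium's implementation):

```python
# Conceptual sketch of host-distributed replication: work is spread across the
# hosts that are currently up, so losing hosts degrades throughput rather than
# stopping replication outright. Not Datrium's actual implementation.

def assign_replication_work(snapshots, hosts):
    """Round-robin snapshot replication jobs across the available hosts."""
    available = [h for h in hosts if h["up"]]
    if not available:
        raise RuntimeError("no hosts available; replication cannot proceed")
    assignments = {h["name"]: [] for h in available}
    for i, snap in enumerate(snapshots):
        target = available[i % len(available)]
        assignments[target["name"]].append(snap)
    return assignments

hosts = [
    {"name": "esx-01", "up": True},
    {"name": "esx-02", "up": False},   # a down host simply receives no work
    {"name": "esx-03", "up": True},
]
print(assign_replication_work(["snap-001", "snap-002", "snap-003"], hosts))
```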

Datrium previously announced Blanket Encryption for securing deduplicated and compressed data at the host server, in flight across the network and at rest. The Blanket Encryption capabilities are due to become generally available with the DVX Software 2.0 at no additional charge for customers. Datrium previously said Blanket Encryption would cost $10,000 per DVX system.
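The ordering is the notable part: data is deduplicated and compressed first and then encrypted before it leaves the host, so it can remain encrypted in flight and at rest. A rough sketch of that pipeline order follows, using stand-in primitives (SHA-256 fingerprints, zlib and the third-party cryptography package's Fernet cipher), since the article does not say which algorithms Blanket Encryption actually uses.

```python
# Rough sketch of a dedupe -> compress -> encrypt-at-the-host pipeline. The
# fingerprinting, compression and cipher choices here are stand-ins; the
# article does not specify Datrium's actual primitives.
import hashlib
import zlib
from cryptography.fernet import Fernet   # assumes the 'cryptography' package

KEY = Fernet.generate_key()
cipher = Fernet(KEY)
seen_fingerprints = set()                 # simplistic dedupe index

def process_block(block: bytes) -> dict:
    fingerprint = hashlib.sha256(block).hexdigest()
    if fingerprint in seen_fingerprints:    # duplicate: ship only the reference
        return {"fingerprint": fingerprint, "payload": None}
    seen_fingerprints.add(fingerprint)
    compressed = zlib.compress(block)       # compress before encrypting
    encrypted = cipher.encrypt(compressed)  # stays encrypted in flight and at rest
    return {"fingerprint": fingerprint, "payload": encrypted}
```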

Another upcoming capability that Datrium expects to offer in 2018 is support for Elastic Replication of encrypted data to Amazon Web Services for archival purposes.

Eric Burgener, a storage research director at IDC, said Datrium addresses "two big failures of hyper-converged" systems it competes against. "Losing a server doesn't cut off access to any data and you can scale the storage capacity independently of compute," Burgener wrote in an email.

"They don't have quite as mature a set of data services as Nutanix or [VMware's] vSAN, so it's really going to depend on what the prospective buyer thinks is more important. It's not that Datrium doesn't offer a good set of services; it's just not as comprehensive or mature as those same features from the hyper-converged vendors."

Dave Russell, vice president and distinguished analyst of storage technologies and strategies at Gartner, sees Datrium's primary competition as storage arrays. He said Datrium stacks up well from a performance and choice perspective, and the new DVX Rackscale "can overcome the pre-packaging desire."

"The majority of the market seems more interested in exploring preconfigured solutions, and that trend seems to be rising as IT teams become more generalists than specialists," Russell wrote in an email.



Join the conversation


What are the most important capabilities to enable Datrium to challenge existing hyper-converged and traditional storage vendors?
Three things:
- Cache directly on the host, rather than relying on iSCSI connections with cache on the SAN
- Speed and performance of data transfers, especially reads
- Deduplication of data, and performance of SSD drives

That's to name a few. After purchasing the solution and running on it for a while, I can tell you there are a number of benefits that we were not expecting.
What were the unexpected benefits?
When we bought into the EqualLogic SANs, we were sold the idea that the more spindles you put into production, the faster everything is going to perform. In other words, if we added more arrays to the cluster, data would be served faster. At the time we moved to Datrium, we had three EqualLogic SANs in the cluster. Performance (measured in IOPS or milliseconds) was terrible, and latency had reached 20 to 1,000+ ms at times. (No, you didn't read that wrong.)

What we came to find out through EqualLogic was that all three SANs were basically only as fast as our slowest SAN, which meant that we had really bad performance because of the amount of traffic we were sending to the SANs.

The benefits we were not expecting from Datrium were the drastic IOPS improvement, the massive reduction in network overhead, the near-zero-millisecond response times, and the drastic reduction in the storage capacity needed.
Datrium allowed us to improve our environment's performance while simplifying it. Before Datrium, we were running a Nimble SAN with multiple iSCSI targets and performance policies per workload. We still had VDI latency with this and had to add Pernix on top of it to get the server-side caching performance we were looking for. Since migrating to Datrium, the end result is just one NFS target for all workloads, plus the SSDs Datrium manages for caching built into their solution. Additionally, we now have only one management interface for all storage, since the Datrium management interface is native in the VMware web client. One other important capability they bring us is that when we do need more performance, we only need to add more capability to the hosts we choose. We can pick the vendor, model, etc., that best fits our environment and budget.

For anyone else thinking of a SAN purchase or upgrade... In my opinion, Datrium is superior to alternative solutions in that it takes the best parts of hyper-convergence (scalability, keeping data as close as possible to the virtual machines that use it) and mixes them with the best parts of traditional arrays (highly available, highly fault-tolerant, purpose-built hardware) without either of their downsides (high cost, locked vendor relationships, host dependencies, costly upgrades, etc.). In short: the speed and ease of hyper-convergence plus the safety and reliability of a traditional array, for more value.
Datrium's architecture is simply superior to anything on the market today: It doesn't matter how fast you make a traditional array controller if it's at the other end of a copper or fiber LAN connection - that's physics!  And if you're running a hyperconverged infrastructure, and have to take your hosts down for periodic maintenance, you're affecting your storage. Sure, you can migrate the data off first, but sometimes that's not an option.
But rather than dog the competition, many of which make great products that work brilliantly in their own use-cases, here are some of the less talked about reasons why I like Datrium:
EASY: The DVX NetShelf is like a toaster - add power and network, then never mess with it. All of the administration happens at the virtual machine level. We've been using Datrium in production for years now and it's one of the few things that doesn't cause me heartburn.
CHEAP: Rather than buying new enclosures, controllers and RAM/processors, you have full control over your performance upgrades. We're also seeing amazing compression on all our data, not just snaps and replicas: 7X!
GOOD: I mean really good. It's so fast. Because Datrium processes the I/O on the host, the more hosts, the better your performance! And unlike traditional SANs that can't touch storage latency, we saw an order-of-magnitude improvement (from 29 ms to 2.9 ms) when comparing our Datrium to a traditional SAN, both being pushed with as much I/O as we could generate.

