AUSTIN, Texas – Dell's blockbuster deal to purchase EMC for $67 billion cast a long shadow over Dell World 2015 this week. But even before that deal was disclosed last week, there was plenty of disruption going on in the storage world to occupy Dell's storage general manager, Alan Atkinson.
In this interview with SearchStorage, Atkinson tackled subjects such as software-defined storage, hyper-convergence, Dell's cutting-edge use of triple-level cell (TLC) 3D NAND flash and Dell storage strategy.
What's your view on the overlap with EMC storage products and the potential for customer confusion?
Alan Atkinson: I think Michael [Dell]'s made a fairly public statement -- it's on Dell.com -- that we're fully committed to our product line, and our customers on the product line are all protected. We put a letter out that pretty clearly stated that.
My opinion is there's actually a lot less overlap than people think, because we've rationalized our product line down to a single stack, and that has been our strategy for the last three years. So there's a lot less rationalization to do than people may think.
There's obviously a lot of disruption going on in the storage industry. What do you see as the most important trends and the greatest threats to established vendors, such as Dell?
Atkinson: First, flash. And I should probably make that more generic and say solid-state, but today, the only thing that matters is flash. That's probably not true a few years from now, but today, it's flash.
Second is the whole movement toward software-defined or server-centric storage. Whether that's direct-attached or internal storage, it's some layer of software on top that's controlling what goes on underneath. Basically, think of it as a dense server.
And the third thing is really a shift toward, I guess I'll call it cloud, but it's really a shift toward more of a service-provider model for the way people are consuming both compute and storage.
I think those three things are extremely disruptive in the industry. You can see it when you look at the earnings reports of other companies that are public. If you look at traditional storage, they're all declining. But if you look at next-generation flash and hybrid, the good products are actually gaining double digits year on year. We're seeing a lot more direct-attached. And honestly, that ranges from Exchange -- where the reference architecture these days is typically direct-attached -- to hyper-converged, where there's a compute node with some disks in it, to MapReduce big data environments, which are typically scale-out. None of those things typically lives on traditional arrays or any other type of external storage.
Are you prepared to take on the white box server vendors that are holding such sway with some of the hyperscale companies?
Atkinson: I think regardless of what area of the market you want to look at, there's competition. And actually, I think we've got a pretty good answer. We have DCS [Data Center Solutions], which we've had for years -- our very high-end, fully customized solution, where we deliver huge quantities stripped down, without a lot of extras. We announced DSS [Data Center Scalable Solutions] in the August timeframe, which is the level below that. I guess some people call it semi-custom. It's not quite as barebones as that top end, but it doesn't have a lot of the bells and whistles that a standard PowerEdge would have.
I'm not naïve. I think these white box guys are going to get a certain portion of the market, but I think we're competitive there. They don't have our logistics. They don't have our support. They don't have our technology. They don't have our R&D budget. There's a certain breed of customer that I guess doesn't care about that, but I think more do, and I think more would rather buy it from a vendor they trust.
Does server-based storage become the dominant type of storage for enterprises in the future, or will traditional storage arrays remain important?
Atkinson: I think the direct-attached storage model is almost independent of the type of buyer. There is a certain set of workloads that naturally gravitate there. But I also think -- and this is a controversial thing, it's my opinion, but it's by no means universal -- I personally think that traditional, scale-up arrays are still the best solution for a lot of types of data.
Look, everybody likes to chase the shiny, new thing. Here's the thing: Workloads that do not have a natural correlation between compute and capacity are probably not well suited for direct-attached models. So, a traditional array can scale to petabytes. They have tons of functionality inside. They have things like data placement. They have efficient caching algorithms. They've got redundancy built in. They've got lots of goodness in that box.
Now, where do the direct-attached workloads go? Well, think about things like hyper-convergence, where I want to be able to scale compute and capacity in these kinds of cookie-cutter building blocks. But I think they coexist. It's really about traditional IT sitting side by side with this new paradigm. I personally don't believe one eats the other. I think one eats a lot of the other. I think you go into a data center and it's going to be -- I'm not smart enough to know which percentage -- but let's just make it easy and call it half and half.
Do you agree with those who advocate putting unstructured data on the cheapest commodity hardware available and using flash for everything else?
Atkinson: Yeah, actually I agree with that. If you look at it today, we just introduced TLC 3D NAND flash, which is the latest, greatest technology. And that's about 80% cheaper than MLC [multi-level cell] flash. If you look at our price points today, we're at 70 to 75 cents a gig usable, $1.50 to $1.60 raw, and we see that trending down in our case to probably 35 cents by the first half of next year.
At those price points, all your hot data belongs on solid-state, for sure. But for all the other data? We've taken a hard look at this, and 7200 RPM drives are still going to be cheaper than any type of solid-state available, with any type of data reduction available for the next five years. Now, beyond that? I'm not smart enough to know, but I wouldn't bet against solid-state.
But for the next five years? That's why I'm a big fan of the hybrid models. You hear a lot of talk about all-flash arrays. To me, all flash is a configuration. You still want to be able to support both for exactly that reason. Even at 35 cents, or for that matter, even at 25 cents a gig, who wants to have all their cold data on [flash]? It's too expensive. I want to push that down to 10 cents a gig or cheaper if I can get there, because I'm not going to access it, but I can't throw it away.
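The economics Atkinson describes can be sketched with simple arithmetic. The following is an illustrative model only -- the prices are the figures quoted in the interview (projected 35-cents-a-gig flash, a hypothetical 10-cents-a-gig cold tier), not vendor list prices, and the function name and 20% hot-data ratio are assumptions for the example:

```python
# Illustrative hybrid vs. all-flash cost comparison, using the
# per-gigabyte price points quoted in the interview (assumptions,
# not actual Dell pricing).

FLASH_PER_GB = 0.35   # projected TLC flash, usable $/GB
DISK_PER_GB = 0.10    # 7,200 RPM cold-tier target, $/GB

def array_cost(total_gb, hot_fraction, flash=FLASH_PER_GB, disk=DISK_PER_GB):
    """Cost of an array that keeps hot data on flash and the rest on disk."""
    hot_gb = total_gb * hot_fraction
    cold_gb = total_gb - hot_gb
    return hot_gb * flash + cold_gb * disk

# A 1 PB (1,000,000 GB) array where 20% of the data is hot:
hybrid = array_cost(1_000_000, 0.20)     # 200,000*0.35 + 800,000*0.10 = $150,000
all_flash = array_cost(1_000_000, 1.00)  # 1,000,000*0.35 = $350,000

print(f"hybrid:    ${hybrid:,.0f}")
print(f"all-flash: ${all_flash:,.0f}")
```

Under these assumed prices, the hybrid configuration costs less than half of all-flash for the same capacity, which is the argument for keeping cold data off solid-state until its price falls much further.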
Does Dell's storage strategy center on differentiating between the types of flash and educating customers on those, whether read-intensive or write-intensive flash?
Atkinson: I wouldn't say that it's centered around it. That's certainly one thing we consider. The overarching design point is, frankly, to disrupt the economics of the business. When we brought out TLC, it was basically a way of saying, "Hey, we can take all your hot data and we can put it on flash at a point that's within your budget." That's going to continue to be our design point -- not only on just media cost, although that's part of it. But if you look at functionality, we're going to continue to drive that functionality down to a wider and wider audience.
Now, this isn't about being 5% or 10% cheaper. It's about being disruptive. I don't think 5% or 10% is disruptive. I think 50% is pretty disruptive. You're in a different discussion at that point, and we'll continue to do that.
How would you describe Dell storage strategy overall?
Atkinson: It's really threefold. We made a decision three years ago to provide a single stack, all x86-based, same code stack, small, medium, large. And that's what we've done. It's one set of developers writing one set of code, so we have more developers working on the code stack and we can move faster.
And then, flash leadership, which is what we did with 3D NAND TLC. We worked very, very closely with the drive vendors to get that out before everybody else. I think, so far, it's us and a small vendor called Kaminario. So, we're at a price point that nobody else is.
And then, the third thing is software-defined storage. And being a server vendor, that shouldn't be a big surprise. But if you look at, in particular, the partnership that we've done with Nutanix in the XC Series, we think we've got the leading position in hyper-converged. And it's something we really want to maintain.
What's the future of Dell's EqualLogic/PS storage line?
Atkinson: We announced three years ago that we were bringing the products together. And this week, we announced live, nondisruptive thin import from PS to SC, [formerly known as Compellent]. We've announced directionally that we will have common management and complete replication between the two families. And that is our way forward.
That being said, we continue to iterate on the PS Series. Within the last 18 months, we've released the biggest firmware release ever -- including compression and Virtual Volume support, which I think we were the first vendor out with -- along with new hardware across the entire product family. So, we continue to do work there.
There's no question that, directionally, the Dell Storage SC line is what we're going forward with as far as the flagship product. But there's full compatibility with PS. And we're committed to make sure that every PS customer has a path forward on SC.