Server SAN infrastructure model of Google, Facebook poised to spread

Wikibon senior analyst predicts that the server SAN infrastructure model of Google, Facebook and Amazon could spread to enterprise IT and service providers.

Favored by the likes of Google, Amazon and Facebook, the server SAN architecture is a software-led model for managing and scaling the storage and compute infrastructure that some experts predict could catch on in a bigger way with enterprise IT organizations and service providers.

Stuart Miniman, a senior analyst with Marlborough, Mass.-based research firm Wikibon, recently wrote a report explaining the new server SAN market segment. He defined the term server SAN as a pooled storage resource consisting of more than one storage device directly attached to multiple separate servers. The server SAN has a high-speed interconnect and uses disk or flash drives.
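To make that definition concrete, here is a minimal Python sketch of the idea: several separate servers each contribute their directly attached disk or flash devices, and software presents them as one logical pool. All names here (Drive, ServerNode, ServerSANPool) are illustrative, not taken from Wikibon's report or any product.

```python
from dataclasses import dataclass, field

@dataclass
class Drive:
    """A disk or flash device directly attached to one server."""
    capacity_gb: int
    media: str  # "flash" or "disk"

@dataclass
class ServerNode:
    """One server contributing its local drives to the pool."""
    name: str
    drives: list = field(default_factory=list)

class ServerSANPool:
    """Pools direct-attached storage from more than one separate server."""
    def __init__(self, nodes):
        if len(nodes) < 2:
            raise ValueError("a server SAN spans multiple separate servers")
        self.nodes = nodes

    @property
    def total_capacity_gb(self) -> int:
        # The logical pool is the sum of every node's local devices.
        return sum(d.capacity_gb for n in self.nodes for d in n.drives)

pool = ServerSANPool([
    ServerNode("node-a", [Drive(800, "flash"), Drive(4000, "disk")]),
    ServerNode("node-b", [Drive(800, "flash"), Drive(4000, "disk")]),
])
print(pool.total_capacity_gb)  # 9600
```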

In this podcast interview, SearchStorage.com senior writer Carol Sliwa spoke with Miniman about the server SAN approach to storage infrastructure, its distinctions from traditional storage, its use cases, and its potential impact on IT organizations and storage IT professionals.

What is a server SAN, and what are the major points of distinction between the server SAN and traditional forms of storage?

Stuart Miniman: We've been looking at some of the technologies that have been happening at the Web-scale or hyperscale companies. Think of Facebook and Google and Amazon and how they look at the infrastructure very differently than a traditional IT organization. And some of what they're doing is starting to bleed into the enterprise.

Specifically, if you think about the IT we've had for the last 15 years or so, you have your servers, you have your storage and you have your network. That kind of silo-busting started with convergence. I think even 10 years ago, when blade servers first came out, some of that consolidation began, almost going back to what we had in the mainframe, but different because there's much more functionality, things are shared much more, and technologies like flash are driving dramatic changes here. So, we really looked at this as the big trend: what's happening in hyperscale, what's going on with flash, and the pull of intelligence and some of the storage functionality back into the server. It's kind of a next-generation DAS, and it's a next-generation SAN [that] flash makes possible. And server SAN is the term that we've latched onto here.

What's different about [server SANs] from [both] DAS and SAN is, first of all, DAS usually is something that's internal to the server. It can be high-performance, but it doesn't scale and it's not shareable, as opposed to SAN, [which] was really designed to pool your resources and build out lots of applications and lots of data. So, how can we gain some of the best of both worlds? The line between the hardware in a separate storage array and the hardware in a server has been blurring for a bunch of years. You've got to give kudos to companies like Hewlett-Packard that have been blurring that line for a while.

And then if you look at the converged infrastructure architectures, there have been what some have called the "hyper-convergence players" out there, companies like SimpliVity and Nutanix and others that have been making a new compute platform that really takes care of the storage, even if it's not really a storage array in the traditional sense. But we can put our applications on it, and it's got both the server compute functionality and the storage. This is that server SAN functionality. It is a server-based design, but it has storage in it, and it should be by design very simple and very scalable. And when we say scalable, both the performance and the capacity can scale. So, flash is not just a cache layer, but flash and/or disk can really go in there.
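As a rough sketch of that scaling property (illustrative names only, not any vendor's design), each hyper-converged node supplies compute and storage together, so one scale-out operation grows performance and capacity in lockstep:

```python
# Illustrative only: a hyper-converged node brings compute and storage
# together, so adding nodes scales both dimensions at the same time.

class HCNode:
    def __init__(self, cpu_cores: int, flash_gb: int, disk_gb: int):
        self.cpu_cores, self.flash_gb, self.disk_gb = cpu_cores, flash_gb, disk_gb

class HCCluster:
    def __init__(self, nodes=None):
        self.nodes = list(nodes or [])

    def scale_out(self, node: HCNode):
        # One operation grows compute and storage at once.
        self.nodes.append(node)

    def capacity(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "flash_gb": sum(n.flash_gb for n in self.nodes),
            "disk_gb": sum(n.disk_gb for n in self.nodes),
        }

cluster = HCCluster([HCNode(32, 800, 8000)])
cluster.scale_out(HCNode(32, 800, 8000))
print(cluster.capacity())  # all three dimensions doubled
```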

There are a number of software solutions out there that have either come to market in the last couple of years or are coming to market now. The notable one a lot of people are talking about is VMware VSAN, as well as others from the likes of Sanbolic. EMC bought ScaleIO, [a company] that has a solution that fits in this space. Even things like what Microsoft does with Storage Spaces can fit in this, and OpenStack can fit under this large umbrella.

Where is the control point for the server SAN?

Miniman: The first thing is, when you look at this solution, I don't want to think about storage the old way. I shouldn't have somebody who has to configure my LUNs and configure my volumes. It really should take care of my performance and capacity so that there is more of a tie between the application and what the storage is doing. So, in that way, we should really just be able to use it simply as storage itself. I shouldn't have to configure the RAID types, and as my needs grow, I should be able to just add in additional capacity or add in additional nodes, and the software should just be able to take care of that. Ideally, it should really be all automated.
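As a rough illustration of that model, the hypothetical Python sketch below shows provisioning as a statement of requirements, with placement and growth handled by the software rather than by an administrator carving LUNs or picking RAID levels. None of these class or method names come from VSAN, ScaleIO or any actual product.

```python
# Hypothetical sketch only: policy-driven provisioning in a server SAN.
# The admin states requirements; the software handles placement and growth.

class Node:
    def __init__(self, name: str, capacity_gb: int):
        self.name, self.capacity_gb, self.used_gb = name, capacity_gb, 0

class StoragePolicy:
    """Requirements, not mechanics: no LUNs or RAID levels to choose."""
    def __init__(self, failures_to_tolerate: int = 1):
        self.failures_to_tolerate = failures_to_tolerate

class ServerSAN:
    def __init__(self, nodes):
        self.nodes, self.volumes = list(nodes), {}

    def provision(self, name: str, size_gb: int, policy: StoragePolicy):
        # The software keeps failures_to_tolerate + 1 copies on distinct
        # nodes, picking the least-loaded nodes automatically.
        copies = policy.failures_to_tolerate + 1
        placement = sorted(self.nodes, key=lambda n: n.used_gb)[:copies]
        for node in placement:
            node.used_gb += size_gb
        self.volumes[name] = [n.name for n in placement]

    def add_node(self, node: Node):
        # Growth is just adding a node; a real system would rebalance itself.
        self.nodes.append(node)

san = ServerSAN([Node("n1", 10000), Node("n2", 10000), Node("n3", 10000)])
san.provision("exchange-db", 500, StoragePolicy(failures_to_tolerate=1))
print(san.volumes["exchange-db"])  # two copies on two distinct nodes
```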

Of course, these are early days. But ideally, we want to get people away from having to optimize their configuration for each application. If you look at where IT budgets are spent, the vast majority goes to setting things up and then making sure they're constantly adjusted and expanded. So, these solutions should not only be much simpler to install, but their overall maintenance should be as automated as possible.

If you look at what's happening at the hyperscale layers, Facebook can manage somewhere between 10,000 and 20,000 servers with a single administrator, which is just orders of magnitude better than what the traditional data center can do. And we need to push in that direction if companies are going to be able to keep up with the growth they're fighting today.

So, it sounds like the software is of critical importance in the server SAN, and the hardware is just a commodity. Is that one of the main distinctions between the server SAN and traditional forms of storage?

Miniman: Absolutely. Software is really where the value is. Even if you look at what I called the appliance solutions that are out there today, those hyper-convergence vendors, they are not building specialized hardware. They're either using Dell servers or white-box servers, or people are going to what are called the [original design manufacturers] ODMs, the Taiwanese guys like Quanta that build standard hardware boxes off of Intel chipsets.

So, x86 has become much more commoditized. I think that was really highlighted by the fact that IBM just sold off its x86 business to Lenovo. So, absolutely, it's the software that provides the differentiation and the control of this environment. Obviously, not all compute is exactly the same, and there will be various configurations that people will need, and you want to make sure that your vendor has something that meets the performance and reliability characteristics you're looking for. But the value of these solutions -- 90% of that is in the software.

For which use cases will the server SAN work best, and for which use cases will it not be such a good option?

Miniman: We think that over time server SAN will be able to fit the vast majority of use cases for most data centers. But, to start with, some of the lower-feature-set but higher-performance environments are going to fit. VDI is a great use case these days, and many of the solutions have started out in that space, as well as in workloads where you might have used [DAS] in the past.

Microsoft has traditionally pushed customers toward DAS when [they're] running Exchange. Of course, vendors have been selling SAN solutions into Exchange and other Microsoft apps for many, many years. But server SAN would be a great fit for those also.

If you're looking at big data and Hadoop, server SAN should be able to fit into those in the near future, although I wouldn't say that it's a big focus of the solutions that are on the market today.

How will this server SAN architecture have an impact on internal IT organizations and on storage IT professionals?

Miniman: We've been [saying] at Wikibon for quite a few years now that the storage market is undergoing a lot of changes. If you look at how flash is redesigning the way we need to look at applications, the storage team really needs to expand what they're looking at. And what I mean by that is, first of all, storage teams should be working very closely with the application teams because infrastructure is really designed to be able to help enable the applications. So, if it's the database that I'm trying to accelerate or other applications that I need to help, we need to understand them.

The lines between the various disciplines within IT are changing. If your job is only storage today, you want to make sure you understand the disciplines adjacent to yours, such as virtualization [and] what's going on in cloud. And this server SAN is yet another example of how the line between storage and the other disciplines is blurring. Absolutely, the roles within the IT organization are going to change, and the skill sets needed to excel are going to be different in the future.

This was first published in January 2014
