
Enterprise storage architecture must be reinvented with flash, cloud

IBM Fellow, CTO and strategist Andrew Walls says the company must continuously 're-invent the SAN' with the influx of flash, cloud and software-defined 'elastic' storage.

IBM Fellow Andrew Walls foresees changes to traditional enterprise storage architecture, with flash, cloud and software-defined storage playing key roles.

Walls said he is responsible for setting the strategy for IBM's storage portfolio and defining the architecture for next-generation flash arrays and storage class memories. This year, his title expanded from distinguished engineer, CTO and chief architect of IBM's flash systems to IBM Fellow, one of only 257 people ever to hold the distinction, 87 of them active today among IBM's more than 400,000 employees.

This week at the Flash Memory Summit in Santa Clara, California, Walls participated in a panel discussion with other storage architects and CTOs on how flash will transform enterprise applications. Walls took time out last week to speak with TechTarget about the types of flash technology. The second part of the interview, below, explores the state of enterprise storage architecture.

Several years ago, when you spoke at the Flash Memory Summit, you said that solid-state drives (SSDs) were game changers for servers and you could foresee a return to direct-attached storage over traditional SANs. Do you still feel that way?

Walls: Yes and no, in that if you look at most servers today, including even our Z line, our mainframe line, there is a significant amount of SSDs and flash going into those servers. It's not all that well known, but we've actually been shipping flash in our Z system for about a year and a half, and it's a very, very popular feature. It is really a paging accelerator and not where you would store persistent data. But it is very important to the value proposition of Z.

You look at other use cases in the x86 and the Power [Systems] server, and you see that we have lots of options for flash in those to accelerate applications and yes, in some cases, to build a clustered set of servers that can supplant a SAN.

So, I think you do see use cases emerging for the Internet companies -- for the Googles and the Facebooks and the others of the world -- where they use direct-attached SSDs in servers and cluster those and then use various applications like Cassandra and key value stores to essentially supplant a SAN.
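The clustering approach Walls describes -- direct-attached SSDs spread across servers, coordinated by software such as Cassandra or a key-value store -- rests on deterministic key placement: the key itself decides which server's local SSDs hold the data, so no central SAN controller sits in the data path. A minimal sketch (the node names and the simple modulo scheme are illustrative; Cassandra itself uses a token ring with virtual nodes and replication):

```python
import hashlib

# Hypothetical cluster members; each would hold data on its own
# direct-attached SSDs rather than on a shared SAN.
NODES = ["node-a", "node-b", "node-c"]

def node_for_key(key: str, nodes=NODES) -> str:
    """Deterministically map a key onto one of the clustered servers."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

Because every client computes the same mapping, reads and writes go straight to the owning server; production systems refine this with consistent hashing so that adding or removing a node remaps only a fraction of the keys.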

At the same time, having a centralized storage repository for other applications is still an important use case. It's still going to be important in the future. So, I think both are going to have a rich future. But I think the SAN is changing. It's not going to be the same in a few years. The cloud is ever-present and growing, and we continue to have to kind of re-invent the SAN.

How do you foresee enterprise storage architecture changing over the next few years?

Walls: We often talk about performance and latency, and it's extremely important. But, at the same time, IBM's other initiative -- Elastic Storage and the ability to bring the servers to the storage -- is extremely important here. I happen to believe a lot of that storage would be flash storage for the active data, but there are still tons of cold data out there as well. I believe that software-defined storage and elastic storage are key propositions here to make sure that we can do large deployments, have the automation that is necessary, have the ability to bring the compute technology that you need to the storage. So, this is another key pillar of IBM's strategy in addition to flash.

What's your take on all-flash arrays vs. hybrid arrays?

Walls: I think hybrid was sort of the technology of 2010, 2011 and 2012, especially when the cost of flash was really high, and certainly there are applications and customers who continue to want it. But I think you're going to see, and are seeing, a lot of the hybrid [technology] moving to all-flash. The reason for that, in many cases, is simplicity of management. There are sometimes bursts in workloads, and you want to make sure you can handle that burst with low latency as well. Having all-flash is key to being able to do that.

Also, data reduction has changed the game. Being able to reduce the cost through data reduction allows you to come very close to the cost of a hybrid. And something else comes into play here. I have clients who are maxed out on electricity in a particular data center. They say they have hybrids; it's great. But they're burning all of the electricity they can. They can't add any more to that data center. So, by going to all-flash and compression, not only do they get a reduction in cost, not only do they get a capacity improvement, but they reduce their energy spending and can continue to grow that data center. So, I think all of these other benefits will drive many applications to all-flash.

Having said that, IBM has a rich portfolio of storage. I think there are use cases that clearly will still benefit from hybrid and we offer both.

When you talk to enterprise customers, what do you advise them on the optimal place for flash? Server? Cache? Storage arrays?

Walls: I don't think there's one answer to this question. We often try to make it that simple because frankly, for IBM or any competitor, it would be great if there was one solution. Then we wouldn't have to offer options.

I talk to clients just about every week, and instead of telling them the best place for flash, I listen to what their use cases are and what their pain points are … and what problem they're trying to solve, then pick from the portfolio that we have and the different topologies to recommend a solution.

If it's a database, [online transaction processing] OLTP application, where they want low latency and a very consistent average response time and they need the other advanced storage services, then the [FlashSystem] V840 is the option for them. If they provide all of the advanced storage functions within the database themselves, then the answer is Tier-0 storage like ours.

If they want to build a hyper-converged network, then the answer may be in the server where it may be using the [Coherent Accelerator Processor Interface] CAPI interface on our new Power servers that can also give a lot of benefits.

Do you think your customers can go 100% flash some day?

Walls: I think some will be all-flash. But if you look at an Internet company with many, many exabytes of data, much of it warehoused and much of it cold [data] most of the time, unless you want to do analytics, I think we're some time away from all of that being on flash. Active data, I think, will be on flash.

