Data storage administrators often face the decision of whether to automatically tier their storage. While many might be tempted to adopt automated tiering quickly, it can be a challenging choice, especially if you have applications that require high bandwidth and quick response times.
In this podcast interview, Ashish Nadkarni, senior analyst and consultant at Taneja Group, discusses the ins and outs of automated tiering. Find out if automated storage tiering improves storage efficiency; how to implement an automated tiering project; and the best way to determine if you should manually or automatically tier data. Finally, get pointers on when to be concerned about auto tiering gaining too much control over your environment; what environments aren’t suitable for automated storage tiering; and get the scoop on sub-LUN tiering.
SearchStorage.com: Auto tiering and sub-LUN tiering are becoming more popular technologies. Are you guaranteed great storage efficiency with auto tiering?
Nadkarni: There are no guarantees. This is storage. It’s just a promise from the vendors that auto tiering is one of the ways to improve storage efficiency in the long term and reduce storage costs. The premise for automated tiering is the ability to right-size your storage infrastructure, making it more agile and cost-efficient in the long term.
SearchStorage.com: What are the first two things you need to do when starting a new auto-tiering project?
Nadkarni: The most important thing you have to consider when going with auto tiering is to compare the cost of an auto-tiered solution with that of a manually tiered one. So when you look at an auto-tiered solution, you’re looking at the licensing costs and any other hardware and software that potentially goes into that whole solution. If you’re doing tiering in a manual fashion, you’re looking at resources; [that is] people resources and the extra number of hours that would go into manual tiering for it to be efficient.
So that’s the math you have to use to figure out if it is viable in the long run or not. And once you’ve decided that you're going with an auto-tiering environment, you need to identify and group your applications: the ones that are absolutely mission-critical you probably want to be a little more shy about [and not put] on auto tiering on Day 1, while the non-mission-critical applications fall into the second bucket. And when you do start, you probably want to start off with these non-mission-critical applications.
What you need to do with these buckets is to really look at the performance profiles and figure out whether they're consistently high performing or [if] they have certain periods of time where they're high performing and have relatively low performance at other times.
Lastly, you need to make sure that you have a Plan B so if you do put something into an auto-tiered environment and it doesn’t perform well, then you [will] have a backup plan to go with the traditional approach. [For] customers going in for a brand-new environment, I would caution them against putting everything into an auto-tiered environment and to have a more traditional environment as a backup plan.
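The cost comparison Nadkarni describes can be sketched as a back-of-the-envelope model. Every figure below (license fee, hardware/software cost, admin hours, hourly rate) is a hypothetical placeholder, not vendor pricing:

```python
# Back-of-the-envelope comparison of auto- vs. manually tiered costs.
# All numbers are invented placeholders for illustration only.

def auto_tier_cost(license_fee, extra_hw_sw_per_year, years):
    """Lifetime cost of an auto-tiered solution: licensing plus any
    extra hardware/software that goes into the whole solution."""
    return license_fee + extra_hw_sw_per_year * years

def manual_tier_cost(admin_hours_per_month, hourly_rate, years):
    """Lifetime cost of the staff hours manual tiering consumes."""
    return admin_hours_per_month * 12 * years * hourly_rate

auto = auto_tier_cost(license_fee=40_000, extra_hw_sw_per_year=5_000, years=3)
manual = manual_tier_cost(admin_hours_per_month=25, hourly_rate=75, years=3)

print(f"auto-tiered: ${auto:,}")   # $55,000
print(f"manual:      ${manual:,}") # $67,500
print("auto tiering viable" if auto < manual else "manual stays cheaper")
```

In practice the inputs are harder to pin down than the arithmetic; the point is simply that the decision reduces to licensing-plus-infrastructure on one side versus staff hours on the other.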
SearchStorage.com: What is the best way to determine which data should be manually tiered vs. automatically tiered?
Nadkarni: The best way to determine that is to profile every application. There used to be a cliché in the storage industry that you can’t really do any tiering unless you understand the profile of these applications. It’s still true with auto tiering -- it’s not like you can just forget about these profiles, because now you’re putting your faith in the storage framework to do the tiering. So you need to profile every application in terms of its IOPS and throughput requirements and check to see if it consistently requires high throughput, high bandwidth or low response times. Some applications require millisecond response times. [Some have] certain times of the day when they're very active, and other times they’re not. [So some applications could be] latency-sensitive at any time of the day. For example, if you have an application that's highly latency-sensitive at every time of the day, then you probably want to shy away from putting it on some kind of auto-tiered function; it would constantly hog tier 0 because at any given time it needs that kind of [high] performance.
The bottom line is that you need to search for exceptions. Exceptions will decide how much you’re going to be able to do auto vs. manual [tiering]. I think the vast majority of applications will work fine in an auto-tiered environment; but the exceptions are the ones you need to be careful about.
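The "search for exceptions" step can be sketched as a simple profiling pass. Everything here is illustrative -- the application names, hourly IOPS figures and the 10,000-IOPS threshold are invented:

```python
# Illustrative sketch: split applications into auto-tiering candidates
# vs. manual exceptions based on a 24-hour demand profile.

def classify(profiles, high_iops=10_000):
    """profiles: {app_name: [hourly IOPS demand over 24 h]} (made-up data).

    An app whose demand is high in *every* hour never gives the array a
    quiet window to demote it, so it is flagged as a manual exception;
    bursty or quiet apps are good auto-tiering candidates."""
    auto, manual = [], []
    for app, hourly in profiles.items():
        (manual if all(io >= high_iops for io in hourly) else auto).append(app)
    return auto, manual

profiles = {
    "payroll":    [500] * 20 + [12_000] * 4,  # bursty: hot only at batch time
    "trading":    [15_000] * 24,              # constantly hot, hogs tier 0
    "file_share": [800] * 24,                 # consistently quiet
}
auto, manual = classify(profiles)
print("auto tiering:", auto)
print("exceptions:  ", manual)
```

The exceptions list is the short one to worry about; as Nadkarni notes, most applications land in the auto bucket.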
SearchStorage.com: How concerned should storage pros be with assigning too much control to their automated tiering?
Nadkarni: In this day and age, not very. It’s not critical to do everything manually. Budgets are stretched [and] resources are in short supply. You can’t do everything in a manual fashion, especially as your environment grows. At the end of the day, auto tiering is there to replace the manual labor. [It’s not] guaranteed that manual tiering is cheaper and more efficient, even if you have a storage pro at the head of it, because there are only so many hours you can dedicate in a day to do this. We know applications are fairly dynamic in nature these days. So you would want to [approach] automated tiering in a cautiously optimistic manner unless someone is in a position to prove that manual tiering is cheaper and more efficient in the long run by way of mathematical models or other systems, like a tiering model or framework put in place by the company. I would say you [should] be cautiously optimistic and not really worry about auto tiering having too much control over you.
SearchStorage.com: Are there environments that aren’t suited for auto tiering?
Nadkarni: Auto tiering is still up and coming, and it definitely has a long way to go before it's considered mainstream. Vendors are hoping for it to be that way soon, although at this point the way I would approach auto tiering is this: Applications that constantly require sub-millisecond response times, high bandwidth and that are very latency-sensitive would not be good candidates for an auto-tiered environment. The reason for this is that the whole premise of auto tiering is that you have a relatively small tier 0, which is your buffer, and it moves your data into that tier when the application requires high performance during certain times of the day. But if your application is constantly hogging that tier, then effectively it can’t be tiered in an optimal manner. So those are the applications you don’t want to put in an auto-tiered environment.
I will say that when people talk about high-performance applications, they’re talking about that at a 30,000-foot level. If you look at it at a 1,000-foot level, you will find components of those applications that aren't high performance. There are only certain components that have heavy performance requirements. [With] sub-LUN tiering now becoming popular, it can serve the high-performance components in a selective manner and [leave] the not-so-high-performance components on the lower tiers.
SearchStorage.com: Let’s talk about sub-LUN tiering. Why are we hearing this phrase so much right now? What should storage pros know about it?
Nadkarni: The reason people are hearing about sub-LUN tiering is that it seems to be the way storage infrastructure is headed. In the traditional model, the lowest construct of data in an auto-tiered environment was the entire volume, at the LUN level. If a volume had a hot spot somewhere, even if the rest of that volume was relatively low access, the entire volume would be moved to a faster tier and back. That process was often slow, cumbersome and largely inefficient, so it didn’t attract a lot of customers.

But these days, with improved technology, better algorithms [and] faster compute resources, you can actually take the data and chunk it up. In other words, you can split it into smaller chunks and move it from one tier to another in those smaller units. That process is more efficient and faster than the traditional approach. So if your hot spot sits on two or three chunks -- and those chunks could be anywhere from 760 KB to a megabyte, or even lower in some cases -- you’re only moving those chunks back and forth, not the entire volume.

So what storage pros need to know is that the lowest unit of storage is no longer a volume or a LUN; it’s really a chunk. You will find that most storage systems will now talk in terms of chunks of data, not LUNs or volumes of data. Eventually that is how people will talk about tiering. As of right now, the chunking scheme is different for every vendor and every solution out there; it’s not uniformly accepted across all of the platforms.
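To make the chunk-versus-volume difference concrete, here is a toy sketch. The chunk size, access counts and hot-spot threshold are all made up, since, as noted above, every vendor chunks differently:

```python
# Toy illustration of sub-LUN tiering: promote only the hot chunks of a
# volume instead of the whole LUN. All sizes and counts are invented.

CHUNK_SIZE_KB = 1024  # vendor-specific; chunking differs per platform

def hot_chunks(access_counts, threshold=100):
    """Indices of chunks whose access count marks them as a hot spot."""
    return [i for i, n in enumerate(access_counts) if n >= threshold]

# A 10-chunk volume with two hot spots (chunks 2 and 6).
volume = [3, 5, 250, 4, 2, 7, 180, 6, 1, 2]
hot = hot_chunks(volume)

moved_sub_lun = len(hot) * CHUNK_SIZE_KB     # data moved with sub-LUN tiering
moved_full_lun = len(volume) * CHUNK_SIZE_KB # data moved at whole-LUN granularity

print(f"promote chunks {hot}: {moved_sub_lun} KB moved "
      f"vs. {moved_full_lun} KB for the entire LUN")
```

Here sub-LUN tiering moves 2 MB instead of 10 MB; on real multi-terabyte volumes that gap is what makes the promote/demote cycle fast enough to be useful.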