How data storage type can give efficiency a boost

Date: Jan 20, 2014

It's easy to say that data storage type can affect efficiency in a data center, since different storage technologies excel in different areas. Flash storage can improve performance, while tape is well suited to storing large amounts of data. But can one data storage type pump up overall efficiency more than another? According to Jon Toigo, managing principal at Toigo Partners International, new developments in storage, such as LTFS, can make a difference if used correctly. In this TechTalk interview from TechTarget's Storage Decisions conference, Toigo explains how different storage technologies play a role in efficiency. To learn how he thinks tape and the cloud might affect performance and capacity, and how it's all changing the role of storage administrators, take a look at the video or read the full transcript below.

So you believe tape can play a role in greater efficiency?

Jon Toigo: I think tape has always had a role in making a more efficient environment. Those who attend my tape presentation will get an earful, of course, about two innovations that are occurring in the tape world.

One is the dramatic increase in capacity that's coming in tape. A couple of years ago, Fujifilm and IBM demonstrated a 35 terabyte [TB] LTO [Linear Tape-Open] cartridge. That wasn't using compression or deduplication or any squeezing technology to pack in more data. It was a simple LTO cartridge with the same amount of tape on the spool, except that it was coated with something called barium ferrite. When you're creating the tape, you send an electromagnetic charge through it, and [barium ferrite] stands all the bits on end, just like these high-end SATA drives that can store terabytes and terabytes of data. They thought you couldn't do that with tape, because tape is flexible. It's Mylar plastic. They discovered that barium ferrite does it automatically. You don't need to do any of the stuff they do with the disk. It automatically stands the data up on end.

Now, you're going to have a single cartridge with a 35 TB capacity. That's a lot of space. You blend that with another technology that IBM came out with a couple of years ago -- and they keep improving it -- called the Linear Tape File System, LTFS. [LTFS] basically returns tape to a role it played back when I first got into this business, which is that of a massive file server. Every tape has this SKU number on [its side] and basically shows up in LTFS as a file folder. If you double-click the file folder in Windows or Linux, it opens up a listing of all the files that are on the tape.
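An LTFS-mounted cartridge presents itself as an ordinary file system, so it can be browsed with standard tools. The following is only a minimal sketch, assuming a cartridge that the LTFS software has already mounted at a hypothetical /mnt/ltfs path; listing what's on the tape needs nothing tape-specific.

# Minimal sketch: browse an LTFS-mounted tape like any other file system.
# Assumes the cartridge is already mounted at /mnt/ltfs (hypothetical path);
# the mount itself is handled by the LTFS software, not by this script.
import os

LTFS_MOUNT = "/mnt/ltfs"  # hypothetical mount point for the tape cartridge

def list_tape_contents(mount_point=LTFS_MOUNT):
    """Walk the mounted tape and print each file with its size in bytes."""
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            print(f"{os.path.getsize(path):>15,}  {path}")

if __name__ == "__main__":
    list_tape_contents()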

I could rapidly find that file on the tape and then stream it back out to you. Is that appropriate for little files? Not as much. Is it appropriate for files being accessed a lot? Probably not, unless you're talking about human genome sequencing data, where the files are really long -- they're known as long block files. Or if you're talking about videos like Netflix, where the file streams but it's a very long file. It doesn't take that long for the file to be found on the tape and placed in front of the read/write head. And the streaming rate of tape is better than anything you're going to get out of a flash rig or out of a disk rig. So, yeah, tape still has a massive role to play. And now the latest generation of LTFS supports GPFS [General Parallel File System], which is the tiering model that IBM uses. It will actually tier data from disk down to tape and write it there permanently. Tape has 30 years of resilience. Show [me] a disk drive or a flash drive that can do that. It's good for archive, good for you, good for me, good for America.
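To illustrate the sequential-streaming point, here is a rough sketch, again assuming a hypothetical /mnt/ltfs mount and hypothetical file paths: pulling a long file off an LTFS tape is just a sequential read, and large read sizes help keep the drive streaming rather than stopping and repositioning.

# Minimal sketch: stream one long file off an LTFS-mounted tape sequentially.
CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB reads favor sequential throughput

def stream_file(src, dst):
    """Copy one long file from tape to disk in large sequential chunks."""
    copied = 0
    with open(src, "rb") as tape_file, open(dst, "wb") as disk_file:
        while True:
            chunk = tape_file.read(CHUNK_SIZE)
            if not chunk:
                break
            disk_file.write(chunk)
            copied += len(chunk)
    return copied

if __name__ == "__main__":
    # Hypothetical paths: a long genome file on the tape, a landing spot on disk.
    n = stream_file("/mnt/ltfs/genomes/sample_001.bam", "/data/sample_001.bam")
    print(f"copied {n:,} bytes")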

Is there a place for cloud in creating a more efficient storage environment?

Toigo: I don't know that it makes an environment more efficient, but it's a good place to store all your junk. Let's face it, there's a lot of data sitting out in corporate environments that nobody will ever access again. In fact, there's a ton of data that is what we call orphan data. Fifteen percent, on average, is orphan data. It's data whose owner in the metadata and whose server in the metadata no longer exist. [Within] the company, nobody knows what the hell it is. They're afraid to delete it because they say, 'Wow, this might be really important,' but the guy who made it doesn't work there anymore and the server it used to be on doesn't exist in the network anymore. If you have a ton of data like this and store it up in a cloud, who cares? There are [downsides] to clouds; we've gone over them and over them and over them. I mean, ask the [Department of Defense] DOD.
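Orphan data as Toigo defines it -- data whose recorded owner and originating server no longer exist -- can be hunted for mechanically. The sketch below is only an illustration, assuming a Unix-like host where file ownership is a UID and valid users appear in the local passwd database; a real orphan-data sweep would also check server and application metadata.

# Minimal sketch: flag files whose recorded owner no longer exists.
# Assumes a Unix-like system; "orphan" here means the file's UID has no
# matching entry in the local passwd database. The root path is hypothetical.
import os
import pwd

def find_orphan_files(root="/srv/shares"):
    """Yield (path, uid) for files whose owning UID no longer maps to a user."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                uid = os.stat(path).st_uid
                pwd.getpwuid(uid)   # raises KeyError if the user is gone
            except KeyError:
                yield path, uid
            except OSError:
                continue            # unreadable or vanished file; skip it

if __name__ == "__main__":
    for path, uid in find_orphan_files():
        print(f"orphan (uid {uid}): {path}")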

They put the next-generation fighter aircraft [plans] up in the cloud for DOD contractors, and [the] Chinese came right in and took them. Now we don't have to build that airplane because they already know how to defeat all of its defensive systems. Maybe that's good. Maybe that's good for you, good for me and good for America. I don't know. Anyway, at the end of the day, clouds have their challenges just like any other external operation. When it comes to disaster recovery, a lot of people look at the cloud as the alternative to tape for backing up their data.

There are some potential positives to doing things like that, but there are also a lot of real potential negatives. Has anybody actually gone out to check and see what they have at the cloud site? You've got a nice brochure that's usually online somewhere: 'We do this great stuff.' Nobody ever sends anybody out to actually look at the facility. You might be in a banged-up rack in a cage at the end of the managed hosting service facility.

Finally, there's the issue of networks. Cloud providers cannot guarantee service levels with a straight face, because they depend on the network, and the network sets the service level more than anything a cloud provider can do internally at his own shop. Try to get your data back. Try to get access to your data. How many times is Google going to go down in a given month? A couple of times a month, right? Is that a big deal? Maybe not, if it's junk data you have stored up there. But if it's mission-critical stuff, I don't know. That would be a little problematic for me.

We hear a lot about the changing role of storage managers, in part because of virtualization and the way it has forced them to work more closely with the server teams, and in part, just because the infrastructures are so complex. How do you see their role changing?

Toigo: I just completed a presentation, as a matter of fact, that I'm going to give over in London. It's on the Starbuckification of storage, this strange concept that it's desirable for users to be able to go and allocate their own storage, much like the way you go to an automatic coffee machine and select this kind of coffee, this much sugar, this much cream, whatever, and it pops into a cup in front of you. If storage were really that simple, I'd be all in favor of that, but it isn't.

The closest we can come to that is virtualized storage. If all storage is treated as the same thing -- blocks -- then you can get to atomic units of storage, and you can allocate X number of those atomic units. We're seeing that right now. There's a great new system that came out of DataCore called VDS; it's a virtual desktop server.

It is just a few SATA hard drives and a generic server running Windows Server 2008 R2 and [DataCore] software. If you want 50 desktops -- 50 Windows 8 desktops -- you just roll out one of these. If you want 100, it's a different license and slightly more storage. You can deliver desktops in atomic units. Rack them up, and you have that many desktops. That satisfies the real need for virtual desktops, which is usually defined by saying, 'I don't want to do 5,000 desktops at once; I just want to try a few over here and a few over there.'

They figured out you can do a desktop for about 35 bucks. I think that's dynamite. That's great stuff. Now, when we can get that simplified, we can make a real contribution to computer science, and we can actually roll out infrastructure that is already an atomic unit and can be allocated to support things; then maybe the Starbuckification thing works. However, we are not there yet with most of these products. Take VMware [for example]: VMware had 190 vendors at VMworld [in 2013] who were sponsors for the show. They're all part of its partner ecosystem, right? You sit down with any of them and they say, 'Well, we're a partner because we fix something that VMware breaks.'

An entire ecosystem based on breaking stuff. That sounds strange to me. It doesn't sound like technology that's ready for prime time. It sounds to me like a science fair project. If I'm an enterprise IT manager, I'm going to have me a few storage managers who know what they're doing, who can sniff the tin and actually understand what's going on. I don't see that. I think we're losing something when we start to denigrate the need for specialists who understand storage. Storage has not gotten simpler.
