
The resilient and adaptable nature of object data

Just how versatile is object storage? "Whatever the media of the day is, you can use it with object storage," Marc Staimer said. "Object storage doesn't care."

In his keynote at TechTarget's Storage Decisions conference, Staimer pointed out that one of the reasons object storage is so widely used is the adaptable and durable nature of object data. Object storage does not rely on a specific storage medium, be it flash or hard disk drives. "Object data is location-independent, and that gives you some exceptional data resilience and durability."

Along with resiliency, object data has guaranteed immutability in its original form thanks to the unique identifier attached to each object. With object storage, a user can modify an object and create new objects, but the original object remains unchanged. "It's a write once, read many," Staimer said. "A WORM." It is up to the user whether to delete the original object once a modified version is created, or to keep it as a point-in-time instance to be referenced later.
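
As a rough sketch of that write-once behavior, here is a toy model, not any vendor's actual API: the unique identifier is derived from the object's content, so every modification produces a brand-new object while the original stays put.

```python
import hashlib

class ObjectStore:
    """Toy write-once object store. IDs are content hashes, so modifying
    data always creates a new object; the original is never touched."""

    def __init__(self):
        self._objects = {}  # unique identifier -> immutable bytes

    def put(self, data: bytes) -> str:
        object_id = hashlib.sha256(data).hexdigest()  # unique identifier
        self._objects.setdefault(object_id, data)     # write once
        return object_id

    def get(self, object_id: str) -> bytes:
        return self._objects[object_id]               # read many

    def delete(self, object_id: str) -> None:
        # The user decides whether an old point-in-time version stays or goes.
        self._objects.pop(object_id, None)

store = ObjectStore()
v1 = store.put(b"original report")
v2 = store.put(b"original report, revised")  # new object; v1 is unchanged
assert store.get(v1) == b"original report"
```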

Object data is location-independent, and that gives you some exceptional data resilience and durability.
Marc Staimer

This ability to create and hold onto different modified versions makes object data well-suited to multi-copy mirroring, where multiple copies of the data are created in case of hard drive or nodal failure. According to Staimer, just how many copies are created is up to the user to decide. "The number of copies I make matches up with how many concurrent failures I want to be able to withstand and not lose any data."
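
A minimal sketch of how that copy count might be honored, assuming invented node names and a hypothetical place_copies helper: each copy goes to a distinct node, so concurrent failures can take out at most one copy apiece.

```python
import random

def place_copies(object_id: str, nodes: list[str], copy_count: int) -> list[str]:
    """Place each copy of an object on a different node, so concurrent
    drive or node failures can only ever take out one copy apiece."""
    if copy_count > len(nodes):
        raise ValueError("not enough nodes for the requested copy count")
    return random.sample(nodes, copy_count)  # distinct nodes, chosen at random

# Per Staimer's example: three copies to ride out concurrent failures.
nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]
print(place_copies("obj-123", nodes, copy_count=3))
```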

With all of those copies being created, storage costs add up at petabyte scale, which is why Staimer emphasized the importance of using erasure coding for object data. Erasure coding separates the data into chunks of information and distributes those pieces. "I'm going to put them on different nodes and different hard drives, so I don't have any two chunks on the same drive."
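
A minimal sketch of that placement rule, with invented helper names and drive labels: the object is cut into equal chunks, and each chunk lands on its own node and drive.

```python
def split_into_chunks(data: bytes, chunk_count: int) -> list[bytes]:
    """Break an object into equal-sized chunks, padding the last one."""
    size = -(-len(data) // chunk_count)  # ceiling division
    return [data[i * size:(i + 1) * size].ljust(size, b"\0")
            for i in range(chunk_count)]

def spread_across_drives(chunks: list[bytes], drives: list[str]) -> dict[str, bytes]:
    """One chunk per drive, on different nodes, so no single drive or
    node failure ever takes out more than one chunk."""
    if len(chunks) > len(drives):
        raise ValueError("need a separate drive for every chunk")
    return dict(zip(drives, chunks))

chunks = split_into_chunks(b"some object payload", chunk_count=4)
placement = spread_across_drives(
    chunks, ["node1/hdd0", "node2/hdd3", "node3/hdd1", "node4/hdd2"])
```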


Transcript - The resilient and adaptable nature of object data

Marc Staimer, founder of Dragon Slayer Consulting, discusses the versatility of object data storage during a presentation at TechTarget's Storage Decisions conference. The text has been edited for clarity.

Object data is location-independent, and that gives you some exceptional data resilience and durability. Generally speaking, you have the ability to natively preserve the data in different ways, and it's inherently read-optimized. What I mean by inherently read-optimized is that this is not storage you're going to use for online transaction processing. This is not going to be storage for primary applications. This is not going to be storage where performance is an important issue for the application. This is not that kind of storage.

The thing to bear in mind is that you're handling all of this additional metadata, and that's going to add latency. But you are media-independent. You're not dependent on the underlying media: it's not dependent on hard disk drives, it's not dependent on flash drives. Whatever the media of the day is, you can use it with object storage. Object storage doesn't care.

Remember, it's a layer above the physical storage. And because there are unique identifiers with every object, you can guarantee the immutability of a given object. It won't change. It's write once, read many, a "WORM." You can do that in many of the object servers. Not all, but most. You also have the ability to modify or create objects and keep the old object immutable. You can set it up so that you can delete it, or you can keep it as a point-in-time instance for future reference.

Because of this, it's well-suited to multi-copy mirroring. Multi-copy mirroring says: "OK, I write once, but I'm going to make multiple copies." Why would I want to make multiple copies? The reason I make multiple copies is for hard drive failures or flash drive failures or nodal failures. The number of copies I make matches up with how many concurrent failures I want to be able to withstand and not lose any data. That's what multi-copy mirroring is.

It's faster than RAID because if you lose a drive or a node, you just create another copy from a good copy. If I want to protect against three concurrent failures, I have three copies of the data. It's all based on the principle that server-based storage is cheaper than array-based storage.
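
[Editor's note: A minimal sketch of that repair path, with hypothetical names such as repair_copies, might look like the following. Restoring redundancy is just copying the object again from a surviving copy, not rebuilding a whole drive from parity.]

```python
def repair_copies(copies: dict[str, bytes | None],
                  spare_nodes: list[str]) -> dict[str, bytes]:
    """Replace lost copies (None) by re-copying the object from any good
    copy; only the affected object moves, with no drive-wide rebuild."""
    good = next((data for data in copies.values() if data is not None), None)
    if good is None:
        raise RuntimeError("all copies lost; nothing to repair from")
    lost = [node for node, data in copies.items() if data is None]
    repaired = {node: data for node, data in copies.items() if data is not None}
    for node, spare in zip(lost, spare_nodes):
        repaired[spare] = good  # stream the object from a surviving copy
    return repaired

# Three copies, one node fails; a spare node takes over its copy.
copies = {"node-a": b"obj", "node-b": None, "node-c": b"obj"}
print(repair_copies(copies, spare_nodes=["node-d"]))
```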

A 4 TB read-optimized flash drive today has different costs depending on what system it's in. An enterprise-class storage system with a 4 TB flash drive is going to have a list price of about $40,000. That's just the drive. The reason [the vendors] set it at $40,000 is that they expect you to demand a 75% discount on that enterprise flash drive. So it's going to come back down to around $10,000 after the discount.

That same drive in their mid-tier system is going to have a $20,000 list price. Is there any difference between the drive in the enterprise system and the drive in the mid-tier system? Not an iota. Nothing, it's the same physical drive. Different SKU, same drive. They expect you're going to want a discount of around 60%. That means the net price is less than $10,000. Funny how that works out.

That same drive in the server systems that company may be selling has a list price of $4,000. Typically the discount is going to be around 20%, nowhere near what it is in the storage system. What is that price going to turn out to be? A net of $3,200. So you have 10 grand versus $3,200. Is it a little less expensive in the server? Yes, it is. So the whole principle behind multi-copy mirroring is that storage is cheaper in the server, white-box servers are cheap, and you can do this.
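
[Editor's note: The arithmetic here is just list price times the expected discount. A quick sketch using the figures from the talk:]

```python
def net_price(list_price: float, discount: float) -> float:
    """Street price after the expected negotiated discount."""
    return list_price * (1 - discount)

# The same 4 TB flash drive at three points in the vendor's lineup.
tiers = {
    "enterprise array": (40_000, 0.75),  # nets out around $10,000
    "mid-tier array":   (20_000, 0.60),  # nets out around $8,000
    "server":           (4_000,  0.20),  # nets out at $3,200
}
for tier, (price, discount) in tiers.items():
    print(f"{tier}: ${net_price(price, discount):,.0f} net")
```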

But if you put in enough storage -- petabytes, exabytes -- it adds up. That's why there's erasure coding, because object storage is very well-suited for it. There are two elements to erasure coding: width and breadth. What erasure coding says is, "I'm going to take my data and break it up into chunks. I'm going to put them on different nodes and different hard drives, so I don't have any two chunks on the same drive."

The width is the total number of chunks. The breadth is how many I have to read to reconstitute or read my data, and it's usually the first ones back. So in this [example] I have a 12x9: the width is 12, the breadth is nine. So I need nine chunks; the first nine I read, I've got my data. If I don't get all 12 back, it means I had failures, and therefore I just reconstitute the chunk. I don't have to go through a RAID rebuild. A RAID rebuild takes a long time, and I'm reconstituting the entire drive when I only need the chunk. I don't have a reduction in performance like I do with RAID. That's why you do erasure coding.
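
[Editor's note: A production 12x9 layout uses a full erasure code such as Reed-Solomon, which is too long to sketch here. As a stand-in, the toy below shows the same idea at width 4 and breadth 3, using a single XOR parity chunk, so that one lost chunk, and only that chunk, gets reconstituted.]

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_chunks: list[bytes]) -> list[bytes]:
    """Width = len(data_chunks) + 1, breadth = len(data_chunks): one XOR
    parity chunk lets any single chunk be lost without losing data."""
    return data_chunks + [reduce(xor, data_chunks)]

def reconstruct(chunks: list[bytes | None]) -> list[bytes]:
    """Rebuild the one missing chunk (None) by XOR-ing the survivors;
    only that chunk is reconstituted, never an entire drive."""
    missing = chunks.index(None)
    chunks[missing] = reduce(xor, [c for c in chunks if c is not None])
    return chunks[:-1]  # the original data chunks

stored = encode([b"AAAA", b"BBBB", b"CCCC"])  # width 4, breadth 3
stored[1] = None                              # one drive or node fails
assert reconstruct(stored) == [b"AAAA", b"BBBB", b"CCCC"]
```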

