Storage performance characteristics: News
June 20, 2016
CMA Consulting customers running thousands of queries a day on its Oracle RAC will now get a performance boost from EMC's recently added DSSD rack-scale flash system.
May 25, 2016
Meitav Dash Investments experienced problematic latency with its applications using NetApp filers before turning to newcomer Infinidat's hyperscale hybrid storage.
May 24, 2016
Datrium has added an “Insane Mode” feature that the startup claims can double host storage performance on the fly. Datrium took the Insane Mode tag from Tesla’s rapid acceleration technique for its ...
May 17, 2016
DataDirect Networks leverages PCIe and NVMe in its Flashscale all-flash storage array. The system targets mixed workloads that need a balance of high performance and capacity.
Storage performance characteristics: Get started
Bring yourself up to speed with our introductory content
Because big data can scale to petabytes of capacity, organizations are looking to manage it in ways that are easier and less expensive than traditional scale-out NAS. Object storage and software-defined storage are frequently mentioned as big data tools. Both can add intelligence required for analyzing data and take advantage of low-cost disk storage.
An object storage system handles files differently than a traditional file system. Servers use unique identifiers to find objects, which use metadata in a far more detailed way than file systems do. The unique identifiers mean objects can be geographically dispersed because they can be retrieved without the storage system knowing their physical location. That makes objects a good choice for large data stores or data stored in a cloud.
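The ID-based lookup described above can be sketched in a few lines of Python. This is a minimal in-memory model; `ObjectStore`, its methods and the metadata fields are illustrative inventions for the sketch, not any vendor's API:

```python
import uuid

class ObjectStore:
    """Toy model of an object store: flat namespace, ID-based retrieval."""

    def __init__(self):
        self._objects = {}  # unique ID -> (data, metadata)

    def put(self, data, **metadata):
        # The store, not the client, assigns the unique identifier,
        # so callers never need to know where the bytes physically live.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id):
        data, _ = self._objects[object_id]
        return data

    def metadata(self, object_id):
        # Rich per-object metadata is what distinguishes objects
        # from files in a hierarchical file system.
        return self._objects[object_id][1]

store = ObjectStore()
oid = store.put(b"scan-results", content_type="image/dicom", retention_years=7)
assert store.get(oid) == b"scan-results"
assert store.metadata(oid)["retention_years"] == 7
```

Because retrieval depends only on the identifier, a real implementation is free to place replicas of the object in any data center, which is exactly what makes objects suit geographically dispersed and cloud data stores.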
Software-defined storage has many forms and use cases, but it applies to big data when used to pool and manage data across off-the-shelf commodity hardware. Because the management and analytics happen in software appliances, the hardware can be cheap, deep disk without bells and whistles.
Perhaps the best-known option is the Apache Hadoop Distributed File System (HDFS), a Java-based file system designed for use in Hadoop clusters. HDFS currently scales to 200 petabytes and can support single Hadoop clusters of 4,000 nodes. It delivers performance, large scale and low cost simultaneously, a combination most enterprise arrays cannot match.
In this chapter of "Tools to Tackle Big Data Troubles," we look at some core HDFS features, three HDFS commercial distributions and other Hadoop storage-related tools and their related applications.
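As a back-of-the-envelope illustration of why HDFS capacity planning differs from a traditional array, the snippet below estimates the block count and raw capacity consumed by a single file, assuming HDFS's common defaults of a 128 MB block size and a replication factor of 3 (both are configurable per cluster, so treat the numbers as an assumption for the sketch):

```python
import math

def hdfs_raw_usage(file_size_mb, block_size_mb=128, replication=3):
    """Estimate blocks allocated and raw capacity consumed by one file.

    Defaults assume HDFS's common 128 MB block size and 3x replication.
    """
    blocks = math.ceil(file_size_mb / block_size_mb)
    # Each block is stored `replication` times across the cluster, so raw
    # consumption is roughly the logical size times the replication factor.
    raw_mb = file_size_mb * replication
    return blocks, raw_mb

blocks, raw = hdfs_raw_usage(1000)  # a 1,000 MB file
print(blocks, raw)                  # -> 8 3000
```

The 3x raw overhead is the price of HDFS's software-level fault tolerance on cheap commodity disks, as opposed to the hardware RAID protection of a traditional array.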
Converged infrastructure gives IT shops the opportunity to buy their entire hardware stack -- storage, networking, compute and server virtualization -- in one SKU. They can also add a software management layer and tightly integrate those components with hyper-converged infrastructure (HCI). These all-in-one HCI platforms are ideal for virtual desktop infrastructure for several reasons: They take the guesswork out of buying hardware, they’re scalable, and shops know the pieces will work together because they're all from the same vendor. But with that simplicity comes some necessary back-end changes.
In traditional companies, disparate teams manage the facets that get packaged into HCI. But with the management interface inherent to HCI, the need for bodies in the IT shop is sometimes diminished. It takes fewer people to manage fewer parts. Companies considering deploying VDI on hyper-converged infrastructure must think about the personnel, expertise and management requirements that come with the pod-style platforms. In some cases, HCI will be a boon for businesses looking to deploy or improve desktop virtualization. In other cases, it's not the right tool for the job.
Amazon Elastic Block Store volumes are offered as magnetic hard disk and solid-state drives. How should you use each storage type to ensure proper workload performance?
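As a rough rule of thumb, magnetic volumes suit infrequently accessed data, general-purpose SSD covers most boot and application volumes, and provisioned-IOPS SSD is reserved for I/O-intensive databases. That decision can be sketched as a hypothetical selection helper; the function and its thresholds are illustrative assumptions, not AWS-published guidance:

```python
def suggest_ebs_type(iops_needed, latency_sensitive):
    """Illustrative mapping from workload needs to an EBS volume family.

    Thresholds are assumptions for this sketch, not AWS-published limits.
    """
    if iops_needed < 100 and not latency_sensitive:
        return "magnetic"           # cheapest; cold or infrequently read data
    if iops_needed <= 10000:
        return "general-purpose SSD"  # boot volumes, typical applications
    return "provisioned-IOPS SSD"   # sustained high IOPS, e.g. busy databases

print(suggest_ebs_type(50, False))   # -> magnetic
print(suggest_ebs_type(3000, True))  # -> general-purpose SSD
```

In practice you would validate the choice against the current AWS volume-type limits, which change over time.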
Evaluate storage performance characteristics: Vendors & products
Weigh the pros and cons of technologies, products and projects you are considering.
All-flash arrays are a hot technology, but not everybody needs flash for all of their storage. Hybrid flash arrays can strike a balance between using flash for performance while keeping spinning disk drives to lower the price for less frequently accessed data. Flash storage offers blazing speed but at a high cost per gigabyte.
At the other end of the spectrum, multi-terabyte hard disk drives (HDDs) are more economical, but they do not supply the raw IOPS per drive that some applications need. Hybrid flash arrays combining HDDs with a thin slice of flash storage can provide a performance boost and reduce latency while keeping costs in check. Although the gap between HDD and flash prices has narrowed considerably, many organizations still don't have the budget to deploy hundreds of terabytes of solid-state storage.

Despite differences in architectures, vendors generally agree on some hybrid vs. all-flash guidelines. If sub-millisecond latency or guaranteed quality of service (QoS) is required, an all-flash array, or a hybrid flash array that can deliver near all-flash performance, is the way to go. But with variable and unpredictable workloads, hybrid flash arrays can often serve the need at a lower $/GB.
Candidates for hybrid flash arrays include collaboration, email and any applications where data lifecycle issues mean that not all data requires immediate access.
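The $/GB argument above is easy to quantify. The calculation below blends the cost of a thin flash tier with cheaper disk capacity; the prices used are made-up illustrative figures, not current market numbers:

```python
def blended_cost_per_gb(flash_fraction, flash_cost_gb, hdd_cost_gb):
    """Blended $/GB of a hybrid array with a thin flash tier."""
    return flash_fraction * flash_cost_gb + (1 - flash_fraction) * hdd_cost_gb

# Assumed illustrative prices: flash $0.50/GB, HDD $0.05/GB.
hybrid = blended_cost_per_gb(0.10, 0.50, 0.05)  # 10% flash tier
print(round(hybrid, 3))                         # -> 0.095
```

With these assumed prices, a 10% flash tier lands at under a fifth of the all-flash $/GB, which is why hybrid arrays remain attractive whenever the working set, rather than the whole data set, needs flash-class performance.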
Healthcare facilities looking to boost their storage performance should consider flash if they haven't invested in it already. Nicole Lewis, a contributor to SearchHealthIT, begins this handbook by describing how one hospital system converted to flash technology and subsequently experienced a noticeable improvement in clinical data processing. That increase in computing speed also helped the hospital system greatly shrink its physical storage setup and reduce the amount of time its employees take to complete financial reports.
Next, contributor Brien Posey explains how facilities that practice telehealth should use flash for their hospital data storage needs. Telehealth is performed through videoconferencing, something that requires plenty of available network bandwidth. Poor storage performance can interfere with telehealth. That issue can be prevented with flash, Posey reasons.
Lastly, Posey breaks down the deciding factors for organizations that are considering going entirely with flash or using a tiered approach for their hospital data storage.
With more than one million customers, AWS has convinced enterprises of all shapes, sizes and industries that its cloud can improve IT operations. But the move isn't always flawless.
Manage Storage performance characteristics
Learn to apply best practices and optimize your operations.
Sharing is great, but it can do a number on private cloud storage performance. Consider new SSD and networking options to get the speed you need.
As solid-state storage devices become more commonplace in storage shops, both deployments and use cases are growing, too.
As all-flash arrays become increasingly common in enterprises, IT professionals need to look not just for efficiency, but for important features, such as analytics and data protection.
Problem-solve storage performance characteristics issues
We've gathered expert advice and tips from professionals like you, so the answers you need are always available.
With a high number of virtual machines competing for resources and consuming bandwidth, storage performance can suffer in hyper-converged environments.
The high-IOPS, low-latency potential of SSDs moves bottlenecks to the network. SDN offers provisioning adjustments to avoid any slowdown in the data center.
Application performance will change depending on whether you choose more cores, a higher clock speed or a different memory configuration.