
Private cloud technology presents promising prospects

The benefits of a private cloud implementation are becoming increasingly obvious as costs continue to decline and interoperability with its public counterparts continues to improve.

Benefits usually cited for private cloud technology tend to focus on security issues. Certainly, public cloud security has had to evolve from a naïve and somewhat unprotected environment to one that is robustly secure today. Issues such as cross-tenancy attacks have been laid to rest by hardware support in the CPU chips that protect memory spaces from each other, while orchestration software has advanced to strengthen authentication. A third leg, encryption at rest for data stored in clouds, is available, though usage by tenants is still patchy.

Most advocates of the private cloud point out that these issues are all obviated when there are no outside tenants, but this is a halcyon view. Many malware problems are the result either of careless admins or users who allow the code to enter the cloud, or of focused attacks. Both afflict public and private clouds today, and there is a strong argument to be made that public cloud providers do a much better job than most businesses in both areas, thanks to their economies of scale and much larger security budgets.

In fact, we need to look elsewhere for a rationale to stick with a private cloud. An obvious benefit is simplicity in operations. Today, a hybrid cloud environment exposes discontinuities between the private (likely OpenStack) and public segments. These can occur in instance images, in control scripts for orchestration and networking and, more generally, in how the clouds are operated.
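To make that discontinuity in control scripts concrete, here is a minimal sketch assuming the openstacksdk and boto3 client libraries; the cloud name, image, flavor, network and AMI identifiers are all placeholders, and a real deployment would hide these differences behind its own tooling.

```python
# Minimal sketch: the same "launch one small instance" task takes two different
# control paths in a hybrid cloud. All names and IDs below are placeholders.
import openstack   # openstacksdk, for the private (OpenStack) segment
import boto3       # AWS SDK, for the public (AWS) segment

def launch_private(name):
    # Credentials come from clouds.yaml or the environment; 'private' is a
    # hypothetical cloud entry, and the image/flavor/network names are invented.
    conn = openstack.connect(cloud='private')
    image = conn.compute.find_image('ubuntu-16.04')
    flavor = conn.compute.find_flavor('m1.small')
    network = conn.network.find_network('private-net')
    return conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{'uuid': network.id}])

def launch_public(name):
    # The AMI ID and instance type are placeholders, not recommendations.
    ec2 = boto3.client('ec2', region_name='us-east-1')
    return ec2.run_instances(
        ImageId='ami-0123456789abcdef0',   # an AMI, not an OpenStack image UUID
        InstanceType='t2.micro',           # an EC2 instance type, not a flavor
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{'ResourceType': 'instance',
                            'Tags': [{'Key': 'Name', 'Value': name}]}])
```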

Public and private working together

Interoperability between the public and private cloud segments of a hybrid cloud is the focus of intensive effort among cloud software developers. Not only must a good product interface seamlessly with both OpenStack and the likes of Amazon Web Services (AWS), Google Cloud and Microsoft Azure, it also has to handle transactions between different public cloud segments just as effortlessly. Current development is focused on linking OpenStack to these and similar platforms, and it is maturing well, but total flexibility and seamless interoperability are still a year or so away.

There is an opportunity for the major public cloud service providers to extend their own cloud software into the private data center as a software product. Microsoft has already tested this with its Azure Stack and is committed to moving forward, so it's a safe bet that AWS and Google will follow suit, probably in 2018. Taken together with the steps toward common operating tools described above, the issue of interoperability of apps and orchestration is going away.

Looking at storage changes the picture of private versus hybrid implementations. It's a sad truism that storage is seen as a necessary evil by many software developers, who often miss the subtleties of performance tuning and optimized data integrity that make the difference between a great solution and a dog.

Since all apps run on data, consider the situation of building a hybrid cloud with the intention of cloud bursting to handle peak loads. Copying key files when a peak is detected can take tens of minutes to hours, possibly missing the bursting opportunity.
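As a back-of-the-envelope illustration of that copy time, here is a small sketch; the data set sizes, link speeds and 70% link efficiency are assumptions, not measurements.

```python
# Back-of-the-envelope copy time for moving a data set to the public segment
# before bursting. The sizes, link speeds and 70% efficiency figure are
# assumptions; real throughput varies with protocol overhead and contention.

def copy_time_hours(data_tb, link_gbps, efficiency=0.7):
    """Hours needed to move data_tb terabytes over a link_gbps WAN link."""
    bits_to_move = data_tb * 8e12                     # decimal terabytes -> bits
    usable_bits_per_sec = link_gbps * 1e9 * efficiency
    return bits_to_move / usable_bits_per_sec / 3600

for size_tb in (1, 10):
    for link_gbps in (1, 10):
        hours = copy_time_hours(size_tb, link_gbps)
        print(f"{size_tb} TB over {link_gbps} Gbps: ~{hours:.1f} hours")

# Roughly: 1 TB over 10 Gbps is ~20 minutes; 1 TB over 1 Gbps, or 10 TB over
# 10 Gbps, is ~3 hours -- easily enough to miss a short-lived peak.
```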

Part of the problem is that, in hybrids, the company keeps key data in the private cloud segment, making a copy to the public segment necessary before bursting can start up. Good data management, with sharding of data sets and databases, can overcome this problem, but it requires continuous tweaking of data placement strategies to adjust to changing workloads.
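One way to picture that data management is a placement policy that pre-replicates "hot" shards to the public segment so a burst can start without a bulk copy. The sketch below is a minimal, hypothetical illustration; the shard names and the hot set are invented, and a real system would derive them from access statistics and revise them continuously.

```python
# Minimal sketch of a shard-placement policy for a hybrid cloud: shards marked
# "hot" are mirrored to the public segment ahead of time, so bursting does not
# wait on a bulk copy. Shard names and the hot set are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PlacementPolicy:
    hot_shards: set = field(default_factory=set)  # shards mirrored to the public segment

    def mark_hot(self, shard: str) -> None:
        self.hot_shards.add(shard)

    def placement(self, shard: str) -> list:
        # Hot shards live in both segments; everything else stays private.
        return ['private', 'public'] if shard in self.hot_shards else ['private']

policy = PlacementPolicy()
policy.mark_hot('orders-current')            # hypothetical hot shard
print(policy.placement('orders-current'))    # ['private', 'public'] -> burst-ready
print(policy.placement('orders-archive'))    # ['private'] -> needs a copy first
```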

The problem of data placement looks much worse when we realize we will be using containers, not virtual machines, in a year or two. A container instance can start in milliseconds, putting a real focus on the long data latencies.

One possibility that might ease the data placement issue is the storage-as-a-service approach, where data is stored in the public segment and local caches act as accelerated gateways. This is clearly dependent on the performance of the caching algorithms, though the gateways are usually SSD-based for high-performance access. Another approach is colocation of the private segment at a hosting site with fast fiber trunk connections to the cloud service providers' (CSPs) data centers.
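Here is a minimal sketch of the gateway idea, assuming a simple read-through, least-recently-used cache and a stand-in fetch function for the public object store; a production gateway would cache on local SSD, handle writes and invalidation, and track hit rates.

```python
# Minimal sketch of a read-through gateway cache in front of public cloud
# storage (the storage-as-a-service pattern described above). The backing
# store and object names are stand-ins for illustration only.
from collections import OrderedDict

class GatewayCache:
    def __init__(self, fetch_from_cloud, capacity=1024):
        self.fetch = fetch_from_cloud        # callable: object key -> bytes
        self.capacity = capacity             # number of objects kept locally
        self.cache = OrderedDict()

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)      # LRU bookkeeping: mark as recently used
            return self.cache[key]
        data = self.fetch(key)               # slow path: pull from the public segment
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used object
        return data

# Usage with a stand-in fetch function:
cache = GatewayCache(fetch_from_cloud=lambda key: b"object bytes for " + key.encode())
cache.read("invoice-0001")   # miss: fetched from the public store
cache.read("invoice-0001")   # hit: served from the local gateway
```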

Another consideration for private cloud technology is cost. The big CSPs have enormous buying power, so they often deal at cost plus a few percent, getting bargains that most enterprises would envy. CSPs also tend to buy directly from the Taiwanese and Chinese original design manufacturers (ODMs) that make most IT gear today, including products for major brand names. These approaches to leveraged buying are the reason rented instances and gigabytes of storage from the CSPs are so cheap.

Hardware costs changing

To compete effectively with the public clouds, proponents of private cloud technology have to convince the CEO and CFO that the in-house approach is economical. The good news is that buying patterns are changing, with ODMs offering products directly to large customers and indirectly, through distribution, to smaller customers.

At the same time, drive pricing has dropped dramatically, even for SSDs, and the traditional lock-in that forced the purchase of expensive drives from OEM vendors is disappearing in the face of very open ODM competition. Traditional OEM vendors are countering the drop in hardware revenue with value-added software offerings, such as ready-to-go cloud deployments and software support.

Hardware itself is changing. RAID arrays are fading into the sunset, with small, SSD-based appliances replacing them. The factor driving this transition is the enormous performance gap between SSDs and HDDs, with SSDs delivering as much as 100 times the throughput. These appliances are based on commercial off-the-shelf (COTS) technology and are all very similar in structure and operation. This simplifies the hardware decision, while expanding the platform choice list and driving competition. We are seeing COTS-based reference designs for whole cloud clusters.

Taken together, the removal of risk from the hardware side and the lowering of prices mean a meaningful total cost of ownership (TCO) comparison can be made for private cloud technology versus hybrid or public alternatives. Today, I suspect the numbers still favor the hybrid approach, but looking forward just a year or so, the picture could be different.
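For readers who want to run their own numbers, here is a minimal sketch of the shape of such a TCO comparison; every figure in it is a placeholder rather than a benchmark, and the outcome depends entirely on the inputs you supply.

```python
# Minimal TCO comparison sketch for private vs. public capacity. All inputs
# below are placeholders meant to show the shape of the calculation, not real
# prices; substitute quotes from your own vendors and CSP.

def private_tco(servers, server_cost, annual_opex_per_server, years=3):
    """Acquisition cost plus power/cooling/admin over the planning horizon."""
    return servers * (server_cost + annual_opex_per_server * years)

def public_tco(instances, hourly_rate, utilization, years=3):
    """Pay-per-hour instances, scaled by how many hours they actually run."""
    hours = 24 * 365 * years * utilization
    return instances * hourly_rate * hours

# Hypothetical inputs: 40 servers at $8,000 each with $2,500/year opex each,
# versus 40 equivalent instances at $0.20/hour. The result swings entirely on
# these numbers, especially utilization.
print(f"Private, 3 yr:            ${private_tco(40, 8_000, 2_500):,.0f}")
print(f"Public, 3 yr, always on:  ${public_tco(40, 0.20, utilization=1.0):,.0f}")
print(f"Public, 3 yr, 40% utilized: ${public_tco(40, 0.20, utilization=0.4):,.0f}")
```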

Technology will take some giant leaps in the next two years. Server designs will deliver much more horsepower per server as features such as fast memory interfaces and NVDIMMs speed up the server. Likewise, hyper-converged system approaches will tie in very fast SSDs and much faster remote direct memory access networks. The net effect is that the server count needed for a given workload will be reduced by a major factor, lowering both acquisition and operating costs by a large percentage. Containers will also expand the effective capacity of servers by allowing 3x to 5x the instance count per server.

In storage, SSDs have already sounded the death knell for the enterprise hard drive on the strength of that performance gap, and 100 TB 2.5-inch SSDs are now on the horizon. Combine that performance with the huge capacities, small physical footprint and emerging data compression, and the cost of the storage farm will drop like a rock.

Based on what's coming, private cloud technology will look considerably more attractive, though backup and disaster recovery may lead us to still use the hybrid approach, albeit with more emphasis on the private segment. Time will tell ... the major CSPs have access to the new technologies, too.

Next Steps

So you want to implement a private cloud?

Choosing between public and private cloud storage

Private cloud planning tips for your server farm
