After bringing its Kubernetes implementation on premises from AWS, Points International Ltd. needed a file system that could run inside containers and handle Kubernetes stateful applications.
Points International bought Quobyte Inc. file system software and installed it on Dell EMC PowerEdge servers for Kubernetes storage. The Toronto-based loyalty points management company now has about 100 TB of solid-state drive storage capacity, running on servers with redundant NVMe boot disks.
"We used Kubernetes to replace all of our infrastructure," said Points principal engineer Michael Laccetti. "We were heavily Docker-optimized early on, so for us, the Kubernetes stuff was an easy second step to increase scalability, ease of use and monitoring."
Points decided to bring Kubernetes on premises to its Toronto data center to save money and improve scalability and management, Laccetti said. The move saved the company thousands of dollars per month, he estimated.
As part of that shift, the Points IT team started adding stateful services for Kubernetes storage.
"We needed some kind of data store that integrated with Kubernetes and gave us all the functionality for persistent volumes outside of Kubernetes," Laccetti said
The Quobyte Kubernetes storage formula
Berlin-based Quobyte launched in 2015, founded by former Google storage engineers Felix Hupfeld and Björn Kolbeck, who helped develop the open source XtreemFS file system. Its distributed file system includes an Operator for Kubernetes that runs Quobyte services in containers and performs rolling upgrades. Quobyte also ships persistent volume and Container Storage Interface (CSI) plug-ins that let it serve as a Kubernetes persistent volume and support dynamic provisioning.
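In practice, that dynamic provisioning is wired up through a Kubernetes StorageClass pointing at the CSI driver, which claims can then reference. The following is an illustrative sketch only -- the provisioner name, tenant value and parameter keys are assumptions for illustration, not Points' actual configuration:

```yaml
# Illustrative sketch -- names and parameter keys are assumptions,
# not Points' actual configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: quobyte-ssd
provisioner: csi.quobyte.com        # Quobyte's CSI driver (assumed name)
parameters:
  quobyteTenant: "kubernetes"       # hypothetical tenant
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteMany]      # shared-file-system access
  storageClassName: quobyte-ssd     # triggers dynamic provisioning
  resources:
    requests:
      storage: 100Gi
```

Any pod that mounts the `app-data` claim gets a volume carved out of the Quobyte cluster on demand, with no administrator pre-creating the volume.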
Quobyte software uses a Portable Operating System Interface-compliant parallel file system and runs on commodity servers. Points took advantage of that to run it on Dell servers.
But although Quobyte is a new vendor, Laccetti was already familiar with its founders' history and their technology.
"Quobyte is new only in the sense that the name is new," he said. "The original file system behind it has been around for quite some time. The Quobyte product was full featured from day one. They had all the features we were looking for, especially for Kubernetes."
Before the switch, Laccetti said, "we had all sorts of different mechanisms for doing storage. We had NFS shares, a Ceph cluster, and that's when we learned we're not good at managing storage ourselves. We had some of those expensive complete storage systems, too, but considering the expense and the functionality they offered, they were not useful in the long term."
He described his storage now as "giant sleds of SSDs -- we have really fast storage. Our Kubernetes services are as many cores and RAM as we can cram into a 2U server. Buy that 20 times and off you go."
Laccetti said he likes that Quobyte's dashboard has built-in alerts and that it enables him to do rolling upgrades.
No single point of failure
Before making the purchase, Points put the setup through a series of tests, Laccetti said.
"There were three scenarios we set up," he said. "First, if a node went offline, we wanted to make sure everything kept working. The second thing it had to do was to maintain a certain throughput. We wanted to make sure we could squeeze 500,000 IOPS out of Quobyte. The third test was, if we killed a client on Quobyte, we had to be sure Kubernetes wouldn't collapse."
He said Quobyte passed the first two tests easily. "The third one is more interesting," Laccetti said.
"When a persistent volume goes away, all the things running that use that persistent volume won't work because the file system it talks to isn't there," he said. "But we found if you design your system so the Quobyte cluster has no single point of failure, then you have no problem.
"Before, nodes would continuously drop out of our giant Ceph cluster. I'm sure it's as resilient as Quobyte once you find the right tweaks and configurations, but the level of effort to maintain it was just not worth it."
Quobyte offered all the features Points needed, which included the ability to dynamically provision storage, Laccetti said.
"We can create tiers with different levels of IOPS depending on the scenario the container is working in," he said. "Some things don't need 1,000 IOPS; they can do with 50 IOPS. So we can tune that per container.
"Quobyte's managing and monitoring is also incredibly useful. We have a multi-tenant Kubernetes thing going on where we have multiple clusters talking to the one Quobyte back end, and we can segregate everything. We have a giant storage pool, and we can split it up as we see fit."
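The per-container tuning and tenant segregation Laccetti describes map naturally onto multiple StorageClasses, one per performance tier or tenant, with each container's claim picking the class it needs. A hedged sketch, where the class names, provisioner and parameter keys are illustrative assumptions rather than Quobyte's documented configuration:

```yaml
# Illustrative tiers -- parameter keys are assumptions, not Quobyte's
# documented StorageClass options.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-tier          # for containers that need high IOPS
provisioner: csi.quobyte.com
parameters:
  quobyteTenant: "cluster-a"   # hypothetical tenant for segregation
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-tier      # for workloads that get by on modest IOPS
provisioner: csi.quobyte.com
parameters:
  quobyteTenant: "cluster-b"
```

A container that only needs 50 IOPS would request `standard-tier` in its claim's `storageClassName`, while a demanding one requests `fast-tier` -- both drawing from the same shared Quobyte pool.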
Laccetti said his new Kubernetes storage setup is also easy to scale.
"You can just add hard drives, SSDs or servers on the fly and also remove them, which we've had to do," Laccetti said. "If something's going wrong, we notice a performance impact or we need to physically move a server from one rack to another, you can basically put it out of rotation, rebalance and keep moving."