

How can I guarantee the best Ceph performance?

Ceph object storage performance is largely based on network speed, but journal disks and the right file system for object storage devices also play a role.

If you are concerned about whether you need solid-state drives to ensure the best Ceph performance, you can rest assured that SATA is enough -- as long as your network is up to snuff, that is.

Inktank, the company that developed Ceph, has said that SATA disks are fast enough to enable good Ceph performance. You don't need solid-state drives (SSDs) for Ceph because the CRUSH algorithm -- which decides where data is stored in the Ceph object store -- is built to ensure fast access to Ceph storage when many nodes work together. The CRUSH algorithm helps deliver the binary objects stored in Ceph back to clients as fast as possible, but only if the network speed is sufficient. Plan on 10 Gigabit Ethernet at a minimum; 40 Gigabit Ethernet works much better.
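Network bandwidth matters most on the replication path between OSD nodes. As a sketch, you can separate client traffic from replication and recovery traffic in ceph.conf so they don't compete for the same links (the subnets below are placeholders for your own networks):

```
[global]
    # Client-facing traffic (hypothetical subnet)
    public network = 192.168.1.0/24
    # OSD replication and recovery traffic (hypothetical subnet)
    cluster network = 192.168.2.0/24
```

Putting the cluster network on the fastest links you have is usually where extra bandwidth pays off first, because every client write is replicated across it.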

In terms of Ceph's storage requirements, you can get the best performance from just a few large machines in your data center that are configured with many disks. But it is important to keep the journal disk separate from the object storage devices (OSDs) where the binary objects are stored. An SSD-based journal is the fastest option for your journal disk.
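As a minimal sketch of that layout, a FileStore OSD can be created with its data on a SATA disk and its journal on a separate SSD partition. The device names here are hypothetical, and the exact tooling depends on your Ceph release (ceph-volume shown; older releases used ceph-disk):

```shell
# Data on a SATA disk, journal on an SSD partition (hypothetical devices)
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1
```

One SSD is often partitioned to host the journals for several SATA-backed OSDs, but losing that SSD then takes all of those OSDs down with it, so don't overload a single journal device.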

The file system on the object storage drives also plays an important role in Ceph performance. Because Ceph is file-system agnostic, any file system will work, but the Btrfs file system gives the best Ceph performance results. The XFS file system performs well, too, but you should avoid the Ext4 file system.
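If you go with XFS, Ceph's documentation has historically suggested a larger inode size so that Ceph's extended attributes fit inside the inode. A sketch, with a hypothetical device and mount point:

```shell
# Format an OSD data partition with XFS; larger inodes hold Ceph xattrs inline
mkfs.xfs -f -i size=2048 /dev/sdb1

# Mount with noatime to avoid access-time writes on every object read
mount -o noatime,inode64 /dev/sdb1 /var/lib/ceph/osd/ceph-0
```

Check the tuning advice for your specific Ceph release before standardizing on these options.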

Next Steps

Tips on Cinder, Swift and other OpenStack storage

Comparing GlusterFS and Ceph

Red Hat's Gluster adds container support, performance enhancements


Join the conversation




How do you make sure your Ceph implementation performs well?
Stop using slow network technologies like 10G and 40G. Moving to 25G reduces the latency of the back-end transfers on the wire and reduces latency overall. In many cases 25G can result in better performance than 40G, and if it's not fast enough, then consider 100G.

Move away from using an underlying file system and start using BlueStore, as this will in most cases improve write performance.

When using BlueStore, look at using NVMe for the WAL and RocksDB, and make sure it's large enough for your metadata; otherwise you can see substantial performance degradation as BlueStore spills its data onto the spinning disks.
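The BlueStore layout described in these comments can be sketched with ceph-volume, placing the data on a spinning disk and the RocksDB and WAL on NVMe partitions. The device names are hypothetical, and partition sizing should be checked against your metadata volume:

```shell
# BlueStore OSD: data on a SATA disk, DB and WAL on NVMe (hypothetical devices)
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

If the block.db partition is too small for the RocksDB metadata, BlueStore spills the overflow onto the slower data device, which is the degradation the comment warns about.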