
QLogic Corp. backs Open Compute Project with FC HBA for shared storage

QLogic adds a Fibre Channel host bus adapter for Open Compute Project-certified servers, aimed at data centers that want OCP hardware with shared storage.

Fibre Channel adapter vendor QLogic Corp. today said it is embracing the Open Compute Project and launched an 8 Gbps Fibre Channel host bus adapter designed for OCP servers moving toward a shared storage architecture.

The company claims it's the industry's first mezzanine Fibre Channel (FC) adapter for Open Compute Project (OCP) servers, which are built specifically for cloud and virtualized environments that need highly dense, pure-horsepower and energy-efficient configurations. Facebook founded OCP in 2011 to use the concepts behind open source software to create an open hardware movement to build commodity systems for hyperscale data centers.

The QLogic QOE2562 FC OCP host bus adapters (HBAs) are available with OCP-certified Quanta Stratos S215-X1M2Z servers and are expected to be available through other partners beginning in April.

"Right now, OCP platforms use [the] direct-attached storage [DAS] model, and it's not a shared resource," said Tim Lustig, QLogic's director of marketing. "DAS usually uses SATA, so reliability and speed are not there. Fibre Channel is more expensive, but you get reliability. As OCP starts to scale up, the [customer has] to access more disks so they are willing to pay a little more for that."

Lustig said FC is a requirement for enterprise customers running Tier-1 applications with high I/O requirements. The 8 Gb dual-port PCI Express-to-OCP FC adapter can run at 200,000 IOPS per port with 1,600 MBps of full-duplex throughput. It's also backward-compatible with 4 Gbps and 2 Gbps FC gear.

The HBA provides N_Port ID virtualization (NPIV), which lets each physical port on the adapter be segregated into multiple virtual ports for additional FC connections. Administrators can assign quality of service (QoS) levels to each port to enable I/O hyperscale capabilities, Lustig said. This supports security through segregation from the IP network, isolated LUN masking and zoning, and authentication via FC Security Protocols, so only authorized devices can see the storage.
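As a rough illustration of NPIV in practice, Linux exposes virtual N_Port creation through the FC transport's sysfs interface. The host number and WWPN/WWNN values below are placeholders, not real addresses, and the exact paths depend on the driver in use -- this is a sketch of the mechanism, not QLogic-specific tooling.

```shell
# List FC hosts and how many NPIV virtual ports each supports
# (a value greater than 0 means the adapter supports NPIV).
grep . /sys/class/fc_host/host*/max_npiv_vports

# Create a virtual port on host3 by writing "WWPN:WWNN"
# (both identifiers here are made-up placeholders).
echo "2101001b32a9f3c4:2001001b32a9f3c4" > /sys/class/fc_host/host3/vport_create

# The new virtual N_Port appears as an additional fc_host entry,
# which the fabric sees as a distinct initiator for zoning and LUN masking.
ls /sys/class/fc_host/
```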

The adapters are designed so that each virtual port is assigned a specific QoS -- high, medium or low -- for queuing and the percentage of bandwidth needed. Bandwidth throughput is segregated based on application workload needs, so 10% throughput can be reserved for one application while another can be given 20% throughput.
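The percentage-based carving described above can be sketched as a simple allocation function. This is a minimal conceptual model, not QLogic's actual management interface; the function name and the example port shares are hypothetical.

```python
def allocate_bandwidth(link_gbps, shares_pct):
    """Split a link's bandwidth among virtual ports by percentage share.

    link_gbps: total port bandwidth in Gbps (e.g., 8 for an 8 Gb FC port).
    shares_pct: mapping of virtual port name -> reserved percentage.
    """
    if sum(shares_pct.values()) > 100:
        raise ValueError("reserved shares exceed 100% of the link")
    return {port: link_gbps * pct / 100.0 for port, pct in shares_pct.items()}

# Example: an 8 Gbps port carved into three QoS classes,
# mirroring the 10%/20% reservations described in the article.
alloc = allocate_bandwidth(8, {"tier1_app": 20, "backup": 10, "default": 70})
print(alloc)  # {'tier1_app': 1.6, 'backup': 0.8, 'default': 5.6}
```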

The adapters use QLogic StarPower technology for dynamic power management at the bus level, so each lane can be powered down or up depending on data requirements.

"OCP's genesis was in cloud-oriented mega-data centers, but OCP servers are now finding their way into the enterprise where FC is the storage interconnect of choice," said Vikram Karvat, QLogic's vice president of marketing.

Some of the main players involved in OCP include Goldman Sachs and Rackspace. This week, Microsoft said it's opening up the server and rack designs that power its online platforms to share with the open hardware community. The company will be contributing specs and designs for the cloud servers that power Bing, Windows Azure and Office 365.

In addition, Mellanox Technologies said it's contributing its 40 GbE network interface card (NIC) to the Open Compute Project. The 40 GbE NIC is based on Mellanox's high-performance ConnectX-3 Pro ICs and is designed to meet OCP specifications.
