InfiniBand has near-infinite potential as a next-generation interconnect technology, but the field of companies developing its hardware and software components is made up mostly of startups that aren't grabbing the attention of major system OEMs. In an effort to change that, InfiniSwitch Corp., of Westborough, Mass., and Lane15 Software Inc., of Austin, Texas, on Monday announced a merger that will combine their respective products into an integrated offering for the industry.
The merger will bring together under one brand InfiniSwitch's InfiniBand switch products for high-performance computing and enterprise data centers and Lane15's management software for fabric computing. The move, experts say, makes a more attractive offering for potential OEM partners.
The combined company will operate under the InfiniSwitch name and, as part of the merger, InfiniSwitch announced the completion of an additional $15 million round of venture financing.
A company spokesman said the new company's combined cash reserve of $20 million will give it the time it needs to make a name for itself in the InfiniBand market.
"This will give the company well over 18 months of runway, assuming zero sales; however, with our products complete and selling, we expect our runway to actually be much longer," said Terry Dickson, vice president of marketing for Lane15 and now InfiniSwitch.
Alisa Nessler, former CEO of Lane15 Software, will become the CEO of InfiniSwitch.
Dickson said the Lane15 organization will remain largely intact within InfiniSwitch, but he noted that the company made some cuts last September. "We feel we are about at the right level of head count for today's marketplace," Dickson said.
Mike Karp, senior analyst with Enterprise Management Associates Inc., said the deal is more of an acquisition than a merger.
"One company really did buy the other," Karp said. "They are trying to provide their OEMs with a one-stop shop for [InfiniBand technology]."
A single offering that provides the necessary software and hardware components together is much more enticing to big system makers such as Hewlett-Packard Co., IBM Corp. and Sun Microsystems Inc. and, according to Karp, is a good idea.
"It certainly makes sense for OEMs to look for suppliers to supply a completely integrated technology solution. If they can buy something that gives them entry-level up through director-class products, it provides them with a relatively, potentially efficient method of bringing the technology into the marketplace," Karp said.
Dickson said both companies have heard from several OEM customers who were interested in a complete hardware and software solution from a single company. "This request from our many OEM customers led us to consider both technology integration as well as merger discussions," he said.
InfiniBand is an architecture and specification for data flow between processors and I/O devices that promises greater bandwidth and almost unlimited expandability in tomorrow's computer systems. In the next few years, InfiniBand is expected to gradually replace the Peripheral Component Interconnect (PCI) shared-bus approach used in most of today's personal computers and servers, according to online technical dictionary Whatis.com.
InfiniBand offers throughput of up to 2.5 Gbps per link and support for up to 64,000 addressable devices. The architecture also promises increased reliability, better sharing of data between clustered processors and built-in security.
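To put the per-link numbers in context, the short sketch below works out signaling and usable data rates for the standard link widths. It assumes the SDR signaling rate of 2.5 Gbps per lane and 8b/10b line encoding from the InfiniBand specification (neither detail appears in the article); the 4x link's 10 Gbps signaling rate matches the fabric figure quoted later.

```python
# Per-lane SDR signaling rate and 8b/10b encoding efficiency are taken
# from the InfiniBand spec; the article quotes only the per-link figure.
SIGNAL_GBPS_PER_LANE = 2.5   # SDR signaling rate per lane
ENCODING = 8 / 10            # 8b/10b encoding: 8 data bits per 10 line bits

def link_rates_gbps(lanes):
    """Return (signaling rate, usable data rate) in Gbps for a link."""
    signaling = lanes * SIGNAL_GBPS_PER_LANE
    return signaling, signaling * ENCODING

for lanes in (1, 4, 12):
    sig, data = link_rates_gbps(lanes)
    print(f"{lanes}x link: {sig:g} Gbps signaling, {data:g} Gbps data")
```

So a 1x link carries 2.5 Gbps of signaling (2 Gbps of data), and the 4x links used in early fabric demonstrations carry 10 Gbps of signaling.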
The downside is that, while it is a promising interconnect, InfiniBand has been slow to catch on in the industry.
"InfiniBand marketing through the companies and [InfiniBand Trade Association] has not been particularly effective," Karp said.
The IBTA has made some efforts toward educating the public on InfiniBand. Last September, it took its message to the people in a series of demonstrations to prove the new interconnect technology was better than Ethernet for deploying data center fabrics.
The IBTA put on multi-vendor demonstrations of the InfiniBand architecture at work, featuring 10 Gbps fabric performance that, the IBTA claims, represents seven times the data throughput of a TCP/IP over Gigabit Ethernet configuration. The InfiniBand fabric achieved 806 MB/sec while maintaining a 3% CPU utilization rate, the IBTA said.
The argument is that an InfiniBand fabric's low CPU utilization improves CPU efficiency over existing TCP/IP data center configurations. Combined with InfiniBand's high throughput, that leaves the host server processor almost completely free to speed data center application performance.
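The seven-times claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes the fabric figure is 806 MB/sec (megabytes, the unit under which the comparison works out) and a typical TCP/IP-over-Gigabit-Ethernet payload rate of roughly 115 MB/sec; both figures are illustrative assumptions, not numbers from the demonstrations.

```python
# Back-of-envelope check of the IBTA's "seven times" comparison.
# Assumptions (not from the article): the fabric figure is in MB/sec,
# and TCP/IP over Gigabit Ethernet delivers about 115 MB/sec of payload.
FABRIC_MB_S = 806        # reported InfiniBand fabric throughput
GIGE_TCP_MB_S = 115      # assumed typical GigE TCP/IP payload rate

fabric_gbps = FABRIC_MB_S * 8 / 1000   # convert MB/sec to Gbps
ratio = FABRIC_MB_S / GIGE_TCP_MB_S
print(f"fabric ~ {fabric_gbps:.2f} Gbps, about {ratio:.1f}x GigE TCP/IP")
```

Under those assumptions the fabric moves about 6.45 Gbps, roughly seven times the Gigabit Ethernet figure, which is consistent with the IBTA's claim.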
Let us know what you think about the story. E-mail Kevin Komiega, News Writer
FOR MORE INFORMATION:
InfiniBand to offset HBA market, but not anytime soon
InfiniBand group pushes data center adoption
Randy Kerns looks into the future of InfiniBand