

New Formulus Black CEO talks up persistent memory tech

New Formulus Black CEO's past work at Oracle clued him in to the need for performance-boosting memory technology to address CPU and I/O constraints in data-heavy workloads.

New Formulus Black CEO Mark Iwanowski said he was constantly on the hunt for new and better technology to deal with CPU and I/O constraints in the face of exponentially growing data when he worked at Oracle and Science Applications International Corp.

So, Iwanowski jumped on the opportunity to return to a startup when a headhunter contacted him several months ago about Formulus Black. The New Jersey-based startup emerged from stealth in February with software that can let customers use ultrafast persistent memory as the storage media to boost throughput and reduce latency in the types of data-intensive scenarios that Iwanowski struggled with as CIO at Oracle and COO at SAIC.

"As I got into the secret-sauce aspects, I saw that this is some really potentially disruptive technology," Iwanowski said. He claimed test results have left some of Formulus Black's early customers, including eFax, with a "wow reaction, and in some cases, almost disbelief about the performance benefits."

Formulus Black's Forsa software initially supported volatile DRAM to target demanding workloads such as databases, analytics, artificial intelligence and machine learning. The latest 3.0 product release adds support for Intel's non-volatile Optane DC persistent memory.

Iwanowski joined Formulus Black in July after Carr Bettis stepped down as CEO and executive chairman for personal reasons. He most recently worked in venture capital and early-stage companies as CEO and president of Global Visions-SV and managing director of Trident Capital. Iwanowski's prior experience also includes executive positions at Raytheon, Honeywell, Applied Remote Technology (later sold to Raytheon), Quantum Magnetics (acquired by InVision, then GE) and Neohapsis (bought by Cisco).

Before his decades-long career as a technology exec, Iwanowski had a brief stint in the National Football League, playing five games for the New York Jets in 1978 after a standout college career at Penn.

TechTarget recently caught up with Iwanowski to discuss persistent memory use cases and customer and industry trends.

What's driving the need for Formulus Black's technology?


Iwanowski: The exponential explosion of data that's occurring in the world. Historically it's been in high performance computing, in areas like the oil and gas industry or universities that have to crunch massive amounts of data. In the oil and gas industry, when you do seismic testing to look for new oil fields, there are huge, huge data sets. They sit and wait for sometimes hours to days to crunch the data set.

You go from that market to absolutely exploding markets. One of them is the internet of things. Think of the bazillions, if that's even a word, of sensors that are starting to be deployed, all collecting data that has to then be managed in close to real time. We don't have the ability today to do that well, because that amount of data is so massive that you end up having to put it into data warehouses, crunch it over a long period of time, and then out come the results days to weeks to months later.

The third area -- and maybe even the most driven right now from a growth standpoint -- is AI/machine learning. It could be facial features or any kind of information set, and it just says, 'Go learn this and pattern recognize this.' Well, that takes a long time, depending on the size of the data file. These markets have massive pain because the data sizes are growing so much faster than the technology's been able to keep up.

How does Formulus Black's Forsa software address that pain?

Iwanowski: Everybody thinks you can just throw CPUs at the problem, which is what happened when VMware started virtualizing and doing things like that. But the issue still is when you get down to the core of accessing the data from the source. If it's spinning disk, it's a very slow process because the physical media has to spin up. It has to access. And as fast as it's gotten, it's still too slow. So the shift has been to move that data crunching closer to the CPU by putting it in memory. We couldn't do that five or 10 years ago because the cost was too high. Today, you can. The cost is coming down dramatically. There's still a significant gap between solid-state memory and spinning disk, but that curve is coming down fast.

What we do is operate between the operating system level and the database and the application layers. We're down at the block level, in the fabric, operating in a zone that has not really been attacked in the past. We were on the phone the other day with NetApp about the possibility of a partnership arrangement. They sit above us, at the file system level. In the file system, you have these blocks of data. We deal at that block level.

Are you committed to a software-only approach?

Iwanowski: We made a conscious decision to go there. The 1.0 release was actually a hardware-based appliance. But I've always felt that going the software route was the better answer. Take the analogy of what's happened with the software-enabled network. Cisco has moved from selling lots of hardware routers to selling software. That's where we think the right answer is in this market. At the same time, we can package into commodity x86 servers and give somebody an appliance, if that's what they prefer. But we don't want to tell them they have to use our hardware.

Can you shed any light on the Formulus Black product roadmap and new features? 

Iwanowski: Let me tell you what we aren't going to do. We don't want to go and compete with all the ecosystem players out there today. We don't want to become a Red Hat, an Oracle, a VMware. We want to enable them better.

There are two tracks to our roadmap going forward, including the 3.0 release that we're doing now. There's high availability, where if you lose one server, you have the data saved and available on a go-forward basis to another non-failed server. There's also scale out, where you're adding more servers in parallel to carry different parts of the load. We're moving from doing this in a single-server environment to that scaled-out environment where we get both high availability and scale-out functionality leveraging the ecosystem players, like Oracle RAC or VMware.

Our core strength is the APIs to the ecosystem, so that we are plug-and-play compatible with the top OS solutions. We want to broaden that. As an example, we're already operating with the Linux kernel very effectively. We also want to add, through our partnering and our API relationships, connectivity to things like Hadoop, Apache Ignite and Cassandra, object storage, and, at the OS level, Ubuntu and CentOS.

Why did Formulus Black add support for Intel Optane DC persistent memory?

Iwanowski: This is one I would put into the [category of] Wayne Gretzky skating to where the puck's going, not where it is right now. Moving to non-volatile storage from volatile storage is where the puck is headed. I was just on a call with the Intel team, and they are planning to introduce us to their venture capital side. It's early. So, it's hard to say whether that will move forward. But they don't do those kinds of things unless they see a potential complement with what they are doing.

Are you getting demand from your customers for Optane or other non-volatile memory?

Iwanowski: We are. When the conversation comes up, I immediately go to this question: 'What is the environment you're operating in, and under what conditions do you feel you need non-volatile?' I don't try to convince them one way or another. I just try to understand. But there are cases where they are adamantly saying, 'Our preference is to go down the non-volatile route.' Being a CIO or running an IT data center is all about risk management. You're trying to minimize your risk relative to the cost you have to spend to minimize that risk. The risk has historically been uptime and, more and more, is becoming cybersecurity.

Non-volatile storage is always going to be better than volatile storage when you talk about risk management. Moving to memory, you are taking on the added risk of losing some data if you are operating in a volatile environment. It's a very small probability if you're in the right kind of data center, but it's not zero. It all comes down to risk, reward and performance. When I used to talk to the Oracle board, I would show them, 'Look, here are our risks. Here's our probability of things happening. Here's how much it would cost to mitigate those risks. You tell me how much you want to spend relative to what risks you're prepared to take on and I'll execute to that.' That's what a CIO in a company should always be thinking, because it comes down to this risk-reward trade-off.

Are storage OEMs showing interest in your technology?

Iwanowski: We haven't really started much dialogue with the OEMs yet, but it is on my agenda. We've had discussions with a couple of the very largest OEMs, and we expect those discussions to accelerate as we start bringing out more data around the benchmarking we're doing within clients.
