Although investment banks are normally reluctant to discuss their technology infrastructures, a top IT executive from The Bank of New York Mellon Corp. (BNY Mellon) recently hosted a group of technology reporters at its New York headquarters to discuss the data storage trends impacting his job.
Swamy Kocherlakota, BNY Mellon's managing director and head of architecture and engineering, said the bank has 13,000 IT professionals out of approximately 49,000 worldwide employees, and spends 15% to 20% of its noninterest expense on technology.
Kocherlakota said he considers storage a key piece of the IT infrastructure. He sees three shifts in the IT industry: a move to software-defined data centers, the rise of object storage and the growing role of flash in the enterprise.
Storage's role in software-defined data centers
Kocherlakota said BNY Mellon takes an "ecosystem view" of technology, and storage fits into an overall data center strategy. He buys into the software-defined data center concept and sees it as a way to avoid overbuying storage capacity.
"You cannot take storage on its own and have a strategy and roadmap just around storage," he said. "Traditionally, you build a data center first; then you build out the network, buy lots of storage, buy a lot of compute and put them in the data centers. As a result of that strategy, BNY Mellon and a lot of companies have built in a lot of capacity to our environment. And that capacity is going to cost the company. With the innovation we have, whether you call it software-defined data centers, we can challenge that model."
Object storage's role in software-defined storage
Kocherlakota said he envisions that software-defined storage revolving around object storage will change the way application architectures are designed. "We are encouraging our application teams to look at object-based storage as opposed to traditional block and file."
He sees object storage as a replacement for file storage rather than for block storage, which remains better suited to SAN environments. "You still need SAN storage for traditional structured data and databases," he said. "A lot of the unstructured data is in NAS now, and we see that going to object storage. And we see many new application architectures designed from the ground up with object storage in mind. I think object storage will have the highest growth rate."
Other than EMC Centera's content addressed storage archiving system, BNY Mellon isn't using object storage yet, Kocherlakota said, but the bank is planning it as part of its software-defined storage strategy. "We're not using an Amazon S3-like interface yet," he said. "We're looking at OpenStack and developments there. And we're looking for the right time to jump on that type of solution."
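The access model Kocherlakota describes -- flat, key-addressed objects with per-object metadata, reached through an S3-style put/get interface rather than a file hierarchy -- can be sketched in a few lines. This is a toy in-memory illustration only; the class and method names are ours, standing in for a real S3-compatible or OpenStack Swift endpoint:

```python
# Minimal in-memory sketch of an S3-style object store interface.
# Real deployments would target an S3-compatible or OpenStack Swift API;
# this toy version only illustrates the access model: flat buckets,
# key-addressed objects, and per-object metadata instead of a file tree.

class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data, metadata=None):
        # Objects are addressed by (bucket, key); there is no directory
        # hierarchy to traverse, which is part of what lets the model scale out.
        self._buckets[bucket][key] = {"data": data, "metadata": metadata or {}}

    def get_object(self, bucket, key):
        return self._buckets[bucket][key]


store = ObjectStore()
store.create_bucket("trade-archive")
store.put_object("trade-archive", "2014/10/trades.csv",
                 b"id,price\n1,100.25\n", metadata={"retention": "7y"})
obj = store.get_object("trade-archive", "2014/10/trades.csv")
print(obj["metadata"]["retention"])  # -> 7y
```

The slash in the key is purely cosmetic: unlike a NAS path, it implies no directory structure, which is why applications designed "from the ground up with object storage in mind" handle naming and metadata themselves.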
Changing backup requirements for today's data storage trends
BNY Mellon has gone tapeless because tape no longer met the bank's backup and recovery time requirements, he said. Every gigabyte of usable data at the bank requires Kocherlakota to plan for 2.5 GB of storage.
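That 2.5:1 planning ratio reduces to simple capacity arithmetic. A hypothetical sketch -- the ratio comes from Kocherlakota, but the function name and the breakdown of what the overhead covers are our assumptions:

```python
# Capacity planning at a 2.5:1 raw-to-usable ratio, per the article.
# What the overhead covers (replication, backup copies, growth headroom)
# is our assumption; the article states only the overall ratio.

def raw_capacity_needed(usable_gb, overhead_ratio=2.5):
    """Return the raw storage (GB) to provision for a given usable footprint."""
    return usable_gb * overhead_ratio

# Example: 10 TB of usable data implies 25 TB of provisioned storage.
print(raw_capacity_needed(10_000))  # -> 25000.0
```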
"We no longer have this window where you can say, 'I need to take it down for backups,'" he said. "So how fast we can back up is a key thing. Also, we look at how fast can we restore and how reliably can we restore. Gone are the days where you would do the backup, tell the business you're doing the backup, and when you want to restore and recover, you put the tape in. Hopefully you can find the tape, hopefully you can load it without any errors, and hopefully you can find the data that you're looking for on the tape. Those days are gone. We've adopted [EMC] Data Domain and we're 100% tapeless. We feel good about how we can reliably provide restorations for our businesses."
Kocherlakota said BNY Mellon also uses continuous data protection "as opposed to running backups once in the night and running full backups on weekends."
Keeping up with growing data demands
Scale-out architectures are important to keep up with today's data growth, he noted. "I look at how a solution scales," Kocherlakota said. "Every time I want to add a node, do I have to buy a new node and build onto the module tied to that node? So scale-out is key. And the impact on network bandwidth is key -- we want to back it up with little impact on the network because the cost-per-gig price on the network is a lot higher than cost per gig on the storage front."
Flash's role in enterprise storage
Asked whether all-flash arrays will displace hybrid systems, Kocherlakota said adoption depends on the workload. "It depends on the business requirement," he said. "What is your performance per file, and what is your cost per file? There are solutions for both. For structured data, you may have to add all-flash storage, and other types of data can use hybrid systems. We see both coexisting."