Storage management has turned political. Users are not only finding resistance and issues with managing heterogeneous storage operations, but also facing challenges over who should be manning the company's data.
For many users trying to set up a dedicated storage department, the technology limitations may not be the real problem. The real issue may lie with internal politics. And that can be dangerous if you hold a storage department in as high regard as Gartner Group's Vice President and Research Director Ray Paquet.
Paquet says the storage department is the most critical component of any storage management solution. He notes the best way to get the most out of your storage system is not throwing more hardware or software into the infrastructure, but organizing and capitalizing on the skills people bring to the equation.
But who are the best people, whether hired from outside or found within the organization, to make sure that your storage is properly managed?
Paquet says four groups are in the running for storage responsibilities: network admins, sys admins, DBAs or mainframe staff. The winner? Look to the mainframe staff.
"Mainframers" have the skills and the mentality, and they understand the storage problem. They know Hierarchical Storage Management (HSM) techniques, and they understand recovery and data sets.
The argument against the other "branches" of IT? Paquet says network people, while excellent at cabling and connectivity, are a bit careless when it comes to data. Network admins are used to losing packets and having them come right back; lose data, and that's a different story. Systems administrators, while in touch with file systems, are more application-centric. And DBAs, good at tasks such as preserving data integrity and change management -- and the highest paid of the bunch -- may see storage management as a demotion.
"I have a client who exactly matches this [mainframe trained] model. They made the attempt at having UNIX and NT teams manage the open systems storage but eventually settled on the mainframers. Their decision had little to do with the understanding of data sets and more to do with politics," said LeRoy Budnik, founder of Knowledge Transfer, a storage consultancy.
Budnik also says there are three practical skills necessary for success in storage -- business acumen, a firm understanding of technical infrastructure and organized operations.
But the limitations aren't confined to technical or business skills. One SearchStorage reader, who wished to remain anonymous, says internal control over his 80 terabyte (TB) retail infrastructure caused some bickering among the branches of IT.
"While we did have senior management backing, there was still resistance from other groups that were reluctant to give up control -- particularly in the Windows area," said the user, who happens to be an ex-mainframer.
The user went on to say it took a little more than a year before the real benefits of centralized storage management were realized, but now those benefits are clear. He says he was able to lower total cost, become more flexible and responsive to changing storage needs, improve reliability and provide greater functionality.
Additional advice from LeRoy Budnik
LeRoy Budnik of Knowledge Transfer breaks down the mindsets of mainframe vs. open systems admins:
Mainframe mindset:
-- Moves content on a virtual plane that includes tape and disk shared between processors in a collection of trusted domains that can accept delayed retrieval
-- Workloads tend to be more batch oriented. Jobs have well known resource requirements and limits placed on growth during the life of the job. Datasets have life-cycles
-- On delivery of new storage, all storage can be available to all mainframes resulting in less risk of configuration failure
Open systems mindset:
-- Lock content into a virtualized physical space with tape as a separate domain where trust is in the control of several user roles and often cannot accept delayed retrieval
-- Workloads tend to be random, with no concept of resource requirements and no capacity plans or limits on the life of a job
-- Datasets continue to grow because the concept of a data life-cycle has not been developed and in most cases cannot be supported (consider an Oracle database looking for a dataspace archived to tape, which it needs in order to start)
-- On delivery of new storage, admins parse out what is required by each server and incrementally add on demand, resulting in more risk of configuration failure
Has your ability to run or set up a storage department been hampered by management? E-mail us and let us know what your issues or successes have been.