
Storage Virtualisation – Not The Definitive Answer

December 24th, 2006

Companies recognise the need for new storage initiatives, such as simplifying the infrastructure, improving resilience and managing information over its lifecycle. But in spite of all the industry hype, storage virtualisation is NOT the definitive answer to all of these data storage problems.

IT managers are looking for a way to cut through the disparate legacy silos of information so that they can manage all of their data as a single logical, structured entity, wherever it is located, and ensure the accessibility, availability, security, integrity, resilience, and compliance of that data.

This problem is bigger than virtualisation. Virtualisation is just one of a number of industry technology trends, including Information Lifecycle Management (ILM) and Intelligent Storage Networking, which need to come together in order to address the real problems companies have with managing the growth, accessibility and regulatory compliance of such huge volumes of data.

Whilst the name might be new, the broad concept of virtualisation is not. Twenty years ago, when the mainframe dominated the data centre and before open systems were a major part of the IT Manager’s life, storage vendors used a technique called device emulation to ‘hide’ the physical characteristics of storage devices so that they could all present the same ‘logical’ view and be used in any mainframe environment.

The complexities of providing a single logical view of the storage in today’s open systems environments are arguably greater than those of the mainframe days. Even so, there are parallels to be drawn and the need for device emulation was, in essence, similar to today’s need for virtualisation.

While device emulation meant that data could be placed on any storage device attached to the mainframe without concern for which vendor provided the hardware, it did not address the increasing complexity of managing the placement of huge volumes of mainframe data. That became the job of Hierarchical Storage Management (HSM) software, which would undertake automatic backup, recovery, migration and space management according to pre-defined storage policies.
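To make the policy idea concrete, here is a minimal sketch – with hypothetical tier names, thresholds and data structures rather than any real HSM product’s interface – of how a pre-defined placement policy of this kind might be expressed and checked:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Illustrative sketch only: a simplified, HSM-style placement policy.
    # Tier names, thresholds and the FileRecord structure are hypothetical.

    MIGRATE_AFTER = timedelta(days=30)    # primary disk -> nearline after 30 idle days
    ARCHIVE_AFTER = timedelta(days=365)   # nearline -> archive after a year idle

    @dataclass
    class FileRecord:
        path: str
        last_access: datetime
        tier: str                         # current tier: "primary", "nearline" or "archive"

    def target_tier(record: FileRecord, now: datetime) -> str:
        """Return the tier the policy says this file should live on."""
        idle = now - record.last_access
        if idle >= ARCHIVE_AFTER:
            return "archive"
        if idle >= MIGRATE_AFTER:
            return "nearline"
        return "primary"

    def needs_migration(record: FileRecord, now: datetime) -> bool:
        """True if the file is not yet on the tier the policy demands."""
        return record.tier != target_tier(record, now)

    # Example: a file untouched since January would be flagged for migration.
    record = FileRecord("/finance/2005/ledger.dat", datetime(2006, 1, 10), "primary")
    print(target_tier(record, now=datetime(2006, 12, 24)))        # nearline
    print(needs_migration(record, now=datetime(2006, 12, 24)))    # True

The value of such a policy, as with HSM on the mainframe, is that migration and recall happen automatically and invisibly to the applications that own the data.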

Around the same time, there was also another important development. Storage devices had been directly attached to mainframes and it was impossible to share storage between applications running on different servers. However, the development of the ESCON Director – a forerunner to today’s SAN Director – provided the ability for applications running across multiple mainframes and partitions to access all of the storage devices in just the same way as today’s SANs.

With this any-to-any access to a single storage view, HSM became so vital to the management of the mainframe that it – together with complementary software providing data movement, copy, backup, and additional media, device and space management functions – was soon made integral to the mainframe operating system as System Managed Storage (SMS).

Fast forward to 2006 and we experience a sense of déjà vu.

IT Managers are struggling to manage terabytes and petabytes of data, from creation to expiration, across their hugely complex open systems environments. Data is so important to every company that it must be available to employees, partners, suppliers and customers 24/7/365.

But who is managing it and ensuring that it is continually available, archived, backed up, protected and compliant? And why is it that open systems storage administrators are struggling to manage just 1TB per person, while the mainframe boys can manage in excess of 10TB each? Sarbanes-Oxley, Basel II, the Data Protection Act; they all demand that data is properly managed from creation to expiration.

By now, the answer is pretty clear. Open systems need their own automated system managed storage and according to many of the storage vendors, system managed storage for open systems is already here – it’s called Information Lifecycle Management (ILM).

ILM could be perceived as the new SMS, and virtualisation as the new device emulation. Compared with their predecessors, both have some extra bells and whistles to contend with the additional complexity that comes with more varieties of hardware and software – which is why they have taken longer to bring to market – but the principles are broadly similar.

Perversely, the very silos of information that companies are trying to eliminate with ILM are themselves the cause of the problems hampering its deployment. ILM on its own is incapable of doing the job it was designed for. ILM needs virtualisation, but today ILM is confined to single-vendor storage silos because there is as yet no industry standard (or any method for presenting a single logical view of multi-vendor storage), nor is there an intelligent infrastructure connecting the information silos together.

Put ILM and an industry-standard virtualisation together and you can begin to address the problem of managing information silos. Or can you? Look again at the mainframe precedent. In addition to virtualisation, there are still several missing pieces which must be addressed before companies can truly unleash the power of ILM on their data-critical open systems environments.

Physical interoperability between vendors’ servers, applications, storage devices and SAN fabric switches and directors has largely been solved: in general, everyone’s box talks transparently to everyone else’s box at the application programming interface (API) level.

Network-based Storage Services – including heterogeneous data replication, copy services and volume management to enable tiered storage/ILM and storage utility strategies and to reduce infrastructure costs – will be a vital component of any successful ILM implementation.

While ILM promises to deliver Gold, Silver and Bronze performance SLAs and QoS at the application and storage device level, how can it guarantee them at the storage network level without intelligent functional interoperability in the network? Companies must ensure their storage network can run independent SAN and Security Services in order to fully leverage and optimise their storage network investments and to provide the SAN segmentation and security required in today’s heterogeneous environments.
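As a rough, hypothetical illustration of what Gold, Silver and Bronze classes might translate into, the sketch below maps each class to assumed placement and protection targets; the field names and values are illustrative assumptions, not any vendor’s actual service definitions:

    # Illustrative assumption: Gold/Silver/Bronze service classes expressed as
    # simple placement and protection targets that a policy engine could consult.

    SERVICE_CLASSES = {
        "gold":   {"tier": "enterprise disk", "target_latency_ms": 5,
                   "remote_replica": True,  "backup": "continuous"},
        "silver": {"tier": "midrange disk",  "target_latency_ms": 20,
                   "remote_replica": False, "backup": "nightly"},
        "bronze": {"tier": "tape archive",   "target_latency_ms": None,
                   "remote_replica": False, "backup": "weekly"},
    }

    def placement_for(service_class: str) -> dict:
        """Return the assumed placement and protection targets for an application's class."""
        return SERVICE_CLASSES[service_class.lower()]

    print(placement_for("Gold")["tier"])    # enterprise disk

The point of the paragraph above is that targets of this kind must also be enforceable at the storage network level, not just at the application and device level.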

An intelligent storage network must also be able to run Protocol Services to allow efficient resource sharing across both open systems and mainframe environments, enabling improved data access and data sharing throughout the enterprise.

The advantage of putting all of this functionality onto a multi-service director in the SAN is that only the director can see all of the hosts, disks and tapes, regardless of vendor or geographic location. This allows an organisation to consolidate the services it acquires from multiple vendors, saving money on software licences and simplifying storage management.

It will be vital that the storage network is capable of running Management Services which are interoperable with all of the storage network elements, in order to improve network availability, avoid outages and reduce downtime, accelerate fault isolation and problem resolution, and reduce management costs.

Remote Office Consolidation: The Future

With 75 per cent of an organisation’s data residing outside the corporate data centre, the promise of ILM is of little use unless it can address all company data. Remote Office Consolidation (ROC) is one emerging initiative that combines technology, services and infrastructure leasing to deliver LAN-like performance to users in WAN-connected branches, and it draws on many of the principles explained in this article.

For example, once data is brought back into the data centre, where the disciplines and expertise exist to manage it, virtualisation is the next step, and the data is classified using principles similar to those used to classify mainframe data for SMS. The data is then in a position to move towards a successful implementation of ILM. Once data is in one common environment, it is also well placed to take advantage of additional emerging technologies such as Grid and Utility Computing.
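As a purely illustrative sketch of that classification step, the rules below assign consolidated remote-office data to SMS-style classes from a couple of simple attributes; the class names and rules are assumptions made for the example, not part of any ROC product:

    # Hypothetical sketch: assign consolidated remote-office data to SMS-style
    # classes before ILM placement. Class names and rules are illustrative only.

    def classify(business_critical: bool, retention_years: int) -> str:
        if business_critical:
            return "CLASS_CRITICAL"      # replicated and continuously protected
        if retention_years >= 7:
            return "CLASS_COMPLIANCE"    # retained on write-once archive media
        return "CLASS_STANDARD"          # normal tiered storage under policy control

    print(classify(business_critical=False, retention_years=10))   # CLASS_COMPLIANCE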

In spite of all the industry hype, virtualisation is NOT the definitive answer to all of today’s data storage problems. It is just one of a number of industry technology trends, including ILM and Intelligent Storage Networks, which will come together in 2006 and beyond, to address the real problems companies have with managing the growth, accessibility and regulatory compliance of huge volumes of data.

Today, storage networking vendors such as McDATA are bringing to market products that will bring this intelligence to the storage network in a way that helps deliver the ILM promise. This can be seen in McDATA’s new ROC solution, which helps protect remote office data, reduces costs and provides LAN-like access to data from any location.

McDATA is exhibiting at Storage Expo 2006
