Managing complexity and the importance of real-time monitoring

Why you should take a step back before moving forward to virtual and cloud infrastructure. By Chris James, Marketing Director EMEA, Virtual Instruments.

 

Date: 6 Mar 2013

The way something is managed always has a direct impact on the result. Whether it’s a football match, the construction of a building or the creation of a data protection strategy, if steps are left out, elements are not properly reviewed or considerations are overlooked, the outcome will fall short of expectations.
Take data backup and disaster recovery strategies, for example. They are supposed to protect a company’s most important asset, its information, yet they can be rendered virtually useless if they are not properly designed and resourced. Many companies today aim to give their staff access to data 24/7, and if that access is halted for even a few minutes the costs can easily run to millions of euros. But before we even get to the question of how long an outage lasts, we need to consider the possibility of data not being recovered at all.
Managing complexity means seeing it first
One of the world’s leading suppliers of fast-moving consumer goods (FMCG) realised, before it was too late, how crucial it is to ensure that its SAN and data protection strategies are genuinely up to scratch. As the company’s Global Enterprise Computing Director put it: ‘You can’t make good decisions and reduce costs without the right instrumentation. Paradoxically, the more you over-engineer, the higher the level of complexity and associated risks that components will fail. We have 12PB of storage and nearly 15,000 Fibre Channel switch ports. The multi-path failover wasn’t working as intended and all the traffic from our tape library was going down one side of the fabric. These things send a shiver down your spine, but at least you do know and can act.’
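To make that failure mode concrete, here is a minimal sketch of the kind of check that real-time instrumentation makes possible, assuming you can read per-path throughput counters over a common interval (for example from switch port statistics). The function name and input format are hypothetical illustrations, not any particular product’s API.

# Minimal sketch: flag multi-path imbalance from per-path throughput counters.
# Assumes a hypothetical input: a mapping of fabric path name -> bytes observed
# over the same interval; not tied to any specific vendor API.

def check_path_balance(path_bytes, max_share=0.75):
    """Warn if any single path carries more than max_share of total traffic."""
    total = sum(path_bytes.values())
    if total == 0:
        return []  # nothing flowing, nothing to judge
    warnings = []
    for path, transferred in path_bytes.items():
        share = transferred / total
        if share > max_share:
            warnings.append(
                f"{path} carries {share:.0%} of traffic -- "
                "multi-path failover may not be balancing as intended"
            )
    return warnings


if __name__ == "__main__":
    # Example: tape backup traffic flowing almost entirely down fabric A.
    sample = {"fabric_A": 9_200_000_000, "fabric_B": 150_000_000}
    for warning in check_path_balance(sample):
        print(warning)

In practice the data would come from continuous instrumentation of the fabric rather than a one-off script, but even a crude check like this would have surfaced the tape-library imbalance described above.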
The missing link in this case was a holistic, complete view of the overall SAN infrastructure. It is a clear example of why organisations need to take a step back and secure their foundations before they even start thinking about their data protection strategies. Companies must understand what they need in order to gain a satisfactory view of their IT infrastructure as a whole.
Ever-agile infrastructure demands
The promise of virtualised, dynamic resource allocation, although extremely compelling, introduces the risk of breaking related management tools. With today’s ability to migrate application workloads across virtual systems (internally to private clouds, or burst externally to hybrid clouds for extra capacity), yesterday’s management tools are ill-equipped to track the real-time performance of those applications as they move across dynamically allocated resource pools.

The physical systems at the core of both virtual and cloud infrastructures grow ever more complex as they become more and more abstracted by the layers of virtualization on top of them. Storage is clearly the largest shared resource and, I would argue, the first ‘cloud’: it was abstracted from the physical server, centralized and shared back out. Yet it offers the most limited visibility into how it is used, and no visibility into its effect on the performance of the wider system. There are spot tools that can report the performance of an individual array, but they are isolated to that vendor, that array and the performance of the array itself, not the impact of that performance on the overall system.

This introduces several challenges as customers rush to the cloud and drive higher and higher levels of virtualization. First, the cloud environment demands new capabilities for what we are calling Infrastructure Performance Management (IPM). The fundamental requirements are that it must be a multi-vendor solution, applicable across multiple hypervisors, storage providers and network providers, and it must give real-time, granular visibility into the multiple layers of the system and their performance.
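As a rough illustration of the multi-vendor requirement, the sketch below shows one way a collection layer could normalise metrics from different vendors and layers into a common record. The class and field names are hypothetical and not tied to any specific hypervisor, array or switch API.

# Minimal sketch of a vendor-neutral metric interface for an IPM-style tool.
# All names here are illustrative assumptions, not a product's actual API:
# each vendor- or layer-specific collector normalises its own counters into
# one common record so layers can be compared side by side.

from abc import ABC, abstractmethod
from dataclasses import dataclass
import time


@dataclass
class MetricSample:
    source: str        # e.g. "esx-01", "fc-switch-03"
    layer: str         # "hypervisor", "fabric", "storage", ...
    metric: str        # "read_latency_ms", "iops", ...
    value: float
    timestamp: float   # epoch seconds


class MetricCollector(ABC):
    """Common contract every vendor- or layer-specific collector fulfils."""

    @abstractmethod
    def collect(self):
        ...


class HypervisorCollector(MetricCollector):
    def collect(self):
        # Placeholder values; a real collector would query the hypervisor here.
        return [MetricSample("esx-01", "hypervisor", "cpu_ready_ms", 42.0, time.time())]


class FabricCollector(MetricCollector):
    def collect(self):
        return [MetricSample("fc-switch-03", "fabric", "port_util_pct", 61.5, time.time())]


def poll_all(collectors):
    """One polling pass across every layer, regardless of vendor."""
    samples = []
    for collector in collectors:
        samples.extend(collector.collect())
    return samples

A real IPM platform would obviously go far deeper, but the point is the common record: once every layer reports into one schema, cross-layer comparison in real time becomes possible.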

Especially when it comes to data protection and business continuity, it is not sufficient to look solely at the server or virtual server stack, nor solely at the storage stack.

To meet the requirements of Infrastructure Performance Management, you must be able to look across all layers of a multi-vendor environment in real time. In doing so, you can eliminate the risk to application performance, and ensure availability, when migrating to virtual infrastructure or to a private cloud. When you do this throughout a multi-vendor, multi-level system, you can then align the costs and SLAs of that infrastructure with application and business requirements.
Guarantee and optimize the whole
It is not enough today to optimize a single element and assume your infrastructure, and therefore your data, is sufficiently protected. That would be like treating a sprained ankle and concluding that the person is healthy overall. You may be able to optimize storage, but what is the impact of that effort on the overall performance of the systems that rely on that storage? Likewise, virtualization may have driven significant increases in the utilization of server assets, but the impact on the underlying I/O may be unknown.

This introduces a second challenge: virtual systems management today is platform-specific. No customer runs a single platform for all applications and all server virtualization in their environment. The two most prevalent are AIX with logical partitioning for mission-critical UNIX-based systems, and VMware on x86 for the Linux and Windows infrastructure. A true infrastructure performance management system must be able to address both.
Finally, in the physical world, enterprise systems management tools are device-specific and, quite frankly, no longer relevant to cloud infrastructure. Storage resource management, the traditional approach to the most costly and complex layer in the infrastructure, has no view into the performance of the system and cannot correlate its limited view with specific vendor or device utilization, or with overall system utilization.
So what is needed to provide a satisfactory data and systems protection strategy?
Today’s application-aligned infrastructure
In a cloud infrastructure in particular, and even in a purely virtualized environment, finding out about a problem or a performance issue minutes, or tens of minutes, after the fact is no longer tenable.
Firstly, you must measure performance in real time, at line rate (measured to the microsecond, then rolled up and delivered to the platform in one-second intervals) and at the protocol level. Real time is not a look back at an average of averages derived from interval polling, nor a calculation that estimates an actual performance metric.
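The roll-up described here can be sketched roughly as follows: microsecond-resolution latency samples are grouped into one-second buckets that preserve counts, maxima and percentiles rather than an average of averages. The input format (timestamp and latency in microseconds) is an assumption for illustration only.

# Minimal sketch of rolling microsecond-granularity measurements into
# one-second summaries that keep max and percentile detail.

from collections import defaultdict
from statistics import quantiles


def one_second_rollup(samples):
    """samples: iterable of (timestamp_us, latency_us) tuples."""
    buckets = defaultdict(list)
    for ts_us, latency_us in samples:
        buckets[ts_us // 1_000_000].append(latency_us)  # group by whole second

    for second in sorted(buckets):
        values = buckets[second]
        yield {
            "second": second,
            "count": len(values),
            "avg_us": sum(values) / len(values),
            "max_us": max(values),
            # the 99th percentile preserves the outliers an average would hide
            "p99_us": quantiles(values, n=100)[98] if len(values) > 1 else values[0],
        }

Keeping the maximum and a high percentile in each bucket is what distinguishes this from an average of averages: the outliers that matter to an application remain visible.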
Secondly, always monitor the system as a whole. To give context to the protocol-level data, you must understand what is happening around it. You therefore need a systems-level solution focused on increasingly unified compute and storage environments. As large blade centres are deployed against advanced fabrics supported by new virtualized storage technologies, you should look into each of those device layers and correlate them with the underlying protocol data to get a systems-level view in real time.
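Continuing the sketch above, correlating one-second protocol summaries with device-level counters for the same second is conceptually a simple time-based join. The field names below are illustrative assumptions, not a product schema.

# Minimal sketch of the correlation step: line up protocol-level summaries
# (e.g. the roll-up above) with device-level counters that share the same
# second, so a latency spike can be viewed against every layer at once.

def correlate_by_second(protocol_rows, device_rows):
    """Both inputs are lists of dicts carrying a 'second' key."""
    devices_by_second = {}
    for row in device_rows:
        devices_by_second.setdefault(row["second"], []).append(row)

    for proto in protocol_rows:
        yield {
            "second": proto["second"],
            "protocol": proto,
            "devices": devices_by_second.get(proto["second"], []),
        }

With both feeds aligned on the same clock, a spike at the protocol level can immediately be set against what each device layer was doing at that moment.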
 


