A key objective of IT departments and network administrators is maintaining uptime. These days even minimal amounts of downtime can have a severely detrimental effect on both customers and staff because, in many cases, interfacing with an organisation’s IT infrastructure is part and parcel of customer and staff activities. Maintaining high levels of application availability is a core part of system administration everywhere from educational establishments to corporate environments.

What is application availability?

It’s the extent to which an application is operational and capable of being used to fulfil its function. It’s not quite as simple as measuring whether an application is ‘up’ or ‘down’; rather, it’s a measure used to analyse an application’s performance relative to what is expected of it.

The parameters used are known as key performance indicators (KPIs), and can include measurements of uptime and downtime, transactions completed, responsiveness and errors made, amongst other things.
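As a sketch of how such KPIs might be derived from raw monitoring data – the sample health-check records and the metrics chosen here are invented purely for illustration:

```python
# Hypothetical health-check results: (check succeeded, response time in ms).
checks = [
    (True, 120), (True, 95), (False, 0), (True, 210),
    (True, 130), (True, 88), (True, 145), (False, 0),
]

total = len(checks)
successes = [ms for ok, ms in checks if ok]

availability_pct = 100 * len(successes) / total
error_rate_pct = 100 * (total - len(successes)) / total
avg_response_ms = sum(successes) / len(successes)

print(f"Availability: {availability_pct:.1f}%")   # 75.0%
print(f"Error rate:   {error_rate_pct:.1f}%")     # 25.0%
print(f"Avg response: {avg_response_ms:.0f} ms")  # 131 ms
```

In practice these figures would come from a monitoring tool rather than a hard-coded list, but the principle is the same: availability is judged against several indicators at once, not a single up/down flag.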

Why is it so important?

Naturally, downtime needs to be minimised. For example, an uptime of 99% may sound good, but in a system that needs to be available 24 hours a day that equates to downtime of around 88 hours a year (over 7 hours a month). Increase uptime to 99.9% and the hours lost drop sharply to just under 9 per year, and so on down to 99.9999%, where downtime is a mere 31 seconds.
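The arithmetic behind those figures is straightforward, assuming a system that must be available around the clock:

```python
# Annual downtime implied by a given uptime percentage, for 24/7 operation.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_hours_per_year(uptime_percent):
    """Return annual downtime in hours for a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99, 99.9999):
    hours = downtime_hours_per_year(uptime)
    print(f"{uptime}% uptime -> {hours:.4f} hours of downtime per year "
          f"({hours * 3600:.1f} seconds)")
```

Running this reproduces the figures above: 99% allows 87.6 hours of downtime a year, 99.9% allows 8.76 hours, and 99.9999% allows roughly 31.5 seconds.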

Even small amounts of downtime could cost an organisation dear, in terms of both lost revenue and staff who need the system being unable to carry out their tasks properly. Reputations suffer, too – a customer who encounters an organisation with its system down may go elsewhere.

What can be done to improve application availability?

First off, it’s advisable to determine exactly which applications require high availability – list and categorise them accordingly. For example, a public-facing website that acts as your ‘face’ to actual and potential customers and needs to be available all the time may have a higher priority than an email server.

Secondly, you need a ‘fail-safe’ strategy that works seamlessly if and when disaster strikes. The basic requirements are storage capacity in alternative locations, a way of replicating data and keeping it up to date through synchronisation (so the latest versions are available if required), and a way of re-deploying resources quickly and efficiently.
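A minimal sketch of the decision at the heart of such a strategy – choosing which replica to fail over to. The site names, sync timestamps and the 15-minute staleness threshold are all hypothetical; a real deployment would take these from its replication software:

```python
from datetime import datetime, timedelta

# Hypothetical replica inventory; names and timestamps are illustrative only.
NOW = datetime(2024, 1, 1, 12, 0)

replicas = [
    {"site": "primary-dc",   "healthy": False, "last_synced": NOW - timedelta(seconds=30)},
    {"site": "secondary-dc", "healthy": True,  "last_synced": NOW - timedelta(minutes=2)},
    {"site": "dr-site",      "healthy": True,  "last_synced": NOW - timedelta(hours=3)},
]

def pick_failover_target(replicas, now, max_lag=timedelta(minutes=15)):
    """Choose the healthy replica with the freshest data, rejecting any
    whose last synchronisation is older than max_lag."""
    candidates = [r for r in replicas
                  if r["healthy"] and now - r["last_synced"] <= max_lag]
    if not candidates:
        return None  # no viable target: escalate rather than serve stale data
    return max(candidates, key=lambda r: r["last_synced"])

target = pick_failover_target(replicas, NOW)
print(target["site"])  # secondary-dc: healthy and within the sync window
```

The point of the sketch is that up-to-date synchronisation is what makes failover safe: a replica that is reachable but hours behind (like the ‘dr-site’ above) is rejected, because re-deploying onto stale data can be worse than a short outage.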

The virtual solution

An excellent way of preparing for the worst is to implement a virtual SAN (Storage Area Network). More and more data centres are implementing server virtualisation, and storage virtualisation is the natural next step.

Using specialist virtual storage software such as StorMagic SvSAN means existing resources can be re-deployed at short notice – often automatically – to keep the system running and applications available.

Look for a SAN that provides clear, easy-to-use reporting so you can quickly pinpoint where problems lie, and one that reacts fast to outages to keep you operational.