Q

# What to consider when calculating availability

I am trying to calculate availability for a proposed system. The system is to be available six days a week between 0600-2100 -- business hours. However, during business hours, it is to be recovered within 30 minutes. If the system goes down, what would the availability be -- 1-30/17*6, 1-30/24*6 or 1-30/24*7?

I'm afraid that availability calculations don't really work the way you describe. You cannot calculate availability based on a recovery requirement, nor can you calculate it before the fact. Availability calculations are traditionally a historic look back at how available a system was over a given period of time. The formula that is most commonly used is:

```
          MTTF
A = ---------------
      MTTF + MTTR
```

Where "A" stands for the availability, presented as a percentage, "MTTF" is mean time to failure -- the time that the system is actually operational -- and "MTTR" is mean time to repair: the time it takes to repair, restore, or recover the system. Normally you only consider the period of time when the system was required to be operational. So, if your system is down 30 minutes during a full seven-day, 168-hour week, you would divide 167.5 (the uptime) by 168 (the total available time) and get 99.7%. If the requirements were that the system be up only five days a week, eight hours a day, that same 30-minute outage would bring your availability number down to 39.5/40, or 98.75%.

In your question, you are not telling me how long the system was up. What you are asking for is a way to calculate availability based on a recovery time objective. If, in your example, the system were to recover in 30 minutes once in a week, your availability figure would be the 99.7% we discussed above. If instead the system went down for 30 minutes every hour, but never for longer than 30 minutes, you would still meet the Service Level Agreement you describe, yet you'd only achieve 50% availability.

When you do use historical data to calculate uptime percentages, you should base them on the operational hours. So, I would use 90-hour weeks (6 days * 15 hours) in my calculations. And I would count any outage, scheduled or not, that occurred during those periods.

If I misunderstood your question, or if you have other questions, don't hesitate to ask.

Evan L. Marcus
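The worked figures above can be sketched as a short calculation. This is a minimal illustration, assuming availability is computed as uptime divided by the hours the system was required to be operational; the 90-hour week is the one suggested for the questioner's 6-day, 0600-2100 schedule:

```python
def availability(uptime_hours, required_hours):
    """Availability = MTTF / (MTTF + MTTR), computed here as the
    observed uptime divided by the total hours the system was
    required to be operational."""
    return uptime_hours / required_hours

# 24x7 week (168 h) with a single 30-minute outage:
a_full_week = availability(168 - 0.5, 168)   # ~99.70%

# 5 days x 8 hours (40 h) with the same 30-minute outage:
a_business = availability(40 - 0.5, 40)      # 98.75%

# Questioner's schedule: 6 days x 15 hours (0600-2100) = 90 h:
a_question = availability(90 - 0.5, 90)      # ~99.44%

print(f"{a_full_week:.2%}, {a_business:.2%}, {a_question:.2%}")
```

Note that the same 30-minute outage yields a different percentage under each schedule, which is why the operational-hours base matters.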

This was last published in May 2004
