During the pandemic, many organisations embraced a hybrid environment, allowing employees to work from home or anywhere else. Many have also adopted, or are now implementing, infrastructure-modernisation initiatives and digital transformation programmes.
These significant changes bring various challenges, including increased complexity, potential vulnerabilities, and the burning question of how to keep operations running smoothly during a natural, hardware, human or cyber disaster. To solve those challenges, IT teams must re-evaluate their approach to business continuity.
The IT metrics used to measure business continuity remain the same: uptime, the availability of data and apps, and backup and recovery. But the widespread transition to remote work and digital technologies demands a new approach to business continuity, one that acknowledges IT's growing responsibility to enable a hybrid workplace and keep all digital systems up and running at all times.
This approach applies to every company that relies on technology to do business. Consider the restaurant down the street that uses cloud-based software to let customers order and pay on their phones. If a disruption means those orders don't go through, the restaurant loses not only the orders but the trust of that clientele. For every connected company, continuity is now an absolute requirement, whether that company is in the business of high-tech or haute cuisine.
As organisations become increasingly digital, there is greater pressure than ever to achieve 24/7 uptime. An independent global study commissioned by Arcserve showed that 83% of IT decision-makers believe 12 hours is the maximum acceptable downtime for critical systems before there is a measurable negative impact on the business.
However, for many businesses, even this is too long. Indeed, according to a 2021 study from ITIC, just one hour of downtime for a single server can cost firms $100 000. So, for an organisation with 1 000 servers, a full outage could cost $100 million per hour.
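The arithmetic behind these figures can be sketched as a simple cost model. This is an illustrative estimate only: the per-server, per-hour figure comes from the ITIC study cited above, while the function name and the example outage are hypothetical.

```python
# Hypothetical downtime-cost model based on the ITIC per-server figure above.
COST_PER_SERVER_HOUR = 100_000  # USD per server, per hour of downtime (ITIC, 2021)

def downtime_cost(servers_down: int, hours: int) -> int:
    """Estimated cost of an outage taking `servers_down` servers offline for `hours`."""
    return servers_down * hours * COST_PER_SERVER_HOUR

# For example, a 3-hour outage across 50 servers:
print(downtime_cost(50, 3))  # 15000000
```

Even a modest outage quickly reaches eight figures, which is why the RPO and RTO targets discussed later in this article matter so much.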
To minimise downtime, companies must take a next-gen approach to business continuity. Here’s how it can be done.
Create a plan
Every organisation should have a business continuity plan: a step-by-step guide to responding to a disruption, a time when speed and clear thinking are of the essence.
The plan should encompass all contingencies − natural disaster, electrical outage or cyber attack − so that the company can address the cause, minimise downtime, and control damage to revenue and reputation.
The plan should be comprehensive. It should list the resources needed in a crisis, such as data backups and storage locations. It should also spell out the steps workers must take to alert company leaders, maintain communication with customers and sustain productivity.
The plan should be tested regularly to ensure it will work when needed. Testing helps identify and address weak points before a real crisis exposes them.
With a robust and regularly tested plan, the organisation can move forward with confidence that it will be able to safeguard data and restore it, if necessary, when a cyber attack or natural disaster strikes.
Make data backups front of mind
Most companies will suffer a data-loss event at some point. In the recent survey commissioned by Arcserve, 74% of midsize companies said they had experienced data loss in the past five years, and 52% of respondents said they could not recover all their data after a loss.
Businesses should adopt a 3-2-1-1 data-backup strategy to prevent data loss: three copies of the data on two different media (disk and tape, for example), with one copy stored offsite for disaster recovery and one copy kept in immutable storage.
Finally, implement immutable backup storage – it is key to successful disaster recovery and business continuity. Immutable storage keeps data in a write-once-read-many (WORM) format that cannot be altered, deleted or encrypted.
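As an illustration, the 3-2-1-1 rule can be expressed as a simple inventory check. This is a minimal sketch: the `BackupCopy` type, its fields and the function name are hypothetical, not part of any product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud"
    offsite: bool     # stored away from the primary site?
    immutable: bool   # WORM storage that cannot be altered, deleted or encrypted?

def satisfies_3_2_1_1(copies: list) -> bool:
    """Check a backup inventory against the 3-2-1-1 rule described above."""
    return (
        len(copies) >= 3                          # three copies of the data
        and len({c.media for c in copies}) >= 2   # on two different media
        and any(c.offsite for c in copies)        # at least one copy offsite
        and any(c.immutable for c in copies)      # at least one immutable copy
    )
```

A check like this could run as part of regular plan testing, flagging any dataset whose backup inventory has drifted out of compliance.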
Establish recovery point and time objectives
A solid business continuity plan should also include recovery point objectives (RPO) and recovery time objectives (RTO), along with steps to achieve them.
RPO is the amount of data the business can tolerate losing in a disruption before it suffers serious harm. It is the benchmark used to decide how often to back up data and what infrastructure is needed to support that backup schedule.
Companies can set different RPOs for different business functions. Dynamic files like financial transactions need a short RPO: given the number of variables involved, recreating such files is often impossible if they are lost. Static files like employee records can have a longer RPO.
RTO is the maximum amount of time after a disruption before operations should be up and running again. Once the RTO has been established, informed decisions can be made about the data resilience plan.
So, if it is decided the organisation can tolerate only one hour of downtime, it will know it needs to build a recovery programme that enables it to be back up and running within an hour.
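The relationship between these objectives and day-to-day operations can be sketched in a few lines. The per-function RPO values below follow the examples in the text but are hypothetical, as are the function names.

```python
from datetime import timedelta

# Hypothetical per-function RPOs, following the examples above: dynamic
# financial transactions get a short RPO, static records a longer one.
RPOS = {
    "financial_transactions": timedelta(minutes=15),
    "employee_records": timedelta(hours=24),
}

def max_backup_interval(function: str) -> timedelta:
    """To lose at most an RPO's worth of data, back up at least that often."""
    return RPOS[function]

def meets_rto(measured_recovery: timedelta, rto: timedelta) -> bool:
    """A recovery drill passes only if operations resume within the RTO."""
    return measured_recovery <= rto
```

In this framing, the one-hour example above becomes `meets_rto(measured_recovery, timedelta(hours=1))`: any drill that takes longer than an hour fails, signalling that the recovery programme needs strengthening.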
Final takeaway
In the old days, companies waited for disruptions to occur and, when they did, they learned, adjusted and moved on.
Today, the threat of disruption is ever present, as is the possibility of fatal damage through data loss. In this climate, companies need a next-gen approach to business continuity. They need a solid and regularly tested plan.
Organisations with such a plan will withstand the threats coming at them fast and furiously, from natural disasters to cyber attacks.
Those that don’t have such a plan will find themselves constantly examining the rear-view mirror.
By Byron Horn-Botha, Business unit head, Arcserve Southern Africa.