COVID-19 has changed daily life for everyone around the world. Social events were cancelled, governments issued stay-at-home orders, tens of thousands of businesses locked their doors, and telecommuting became the new normal for millions of workers.
During this time, digital services saw significant increases in demand, from media streaming to mobile banking to grocery delivery. In China, online penetration increased by 15–20%, while in Italy e-commerce sales for consumer products rose by 81% in a single week. Long-term growth plans may have been shelved in favor of short-term survival tactics, but the businesses best positioned to survive the pandemic were those that already had a plan in place, or were capable of pivoting quickly. This is particularly true since many of the spikes in online demand may not be temporary spikes, but permanent shifts in behavior.
"If the pace of the pre-coronavirus world was already fast, the luxury of time now seems to have disappeared completely. Businesses that once mapped digital strategy in one- to three-year phases must now scale their initiatives in a matter of days or weeks."
"Digital strategy in a time of crisis," McKinsey Digital
This outbreak underscores an important truth: disasters can strike at any time, and we need to be diligent about preparing for them. This means developing, testing, and maintaining disaster recovery initiatives. In this white paper, we explain the goal behind disaster recovery planning, how Chaos Engineering helps validate disaster recovery plans, and how we can use Gremlin to make sure we’re prepared for even the most unpredictable failures.
Disaster recovery (DR) is the practice of restoring an organization’s IT operations after a disruptive event, such as a natural disaster. It relies on formal, predefined procedures for responding to major outages, with the goal of bringing infrastructure and applications back online as quickly as possible. Whether you’re making plans as part of a larger initiative such as failure mode and effects analysis (FMEA) or implementing DR outside of a structured mandate, the process involves:
DR is a subset of business continuity planning (BCP). Where BCP focuses on restoring the essential functions of the organization as determined by a Business Impact Analysis (BIA), DR focuses specifically on restoring IT operations.
A successful disaster recovery strategy involves recognizing the many ways that systems can fail unexpectedly, and creating detailed plans to streamline recovery from these failures. These plans are called disaster recovery plans, or DRPs (also known as playbooks or runbooks), and they provide step-by-step instructions for employees to follow during an incident in order to restore service.
While each DRP is different based on the organization and systems, generally the plans outline:
Many companies maintain DRPs as a requirement of IT service management (ITSM), or as checklists to help with regular maintenance task delegation and onboarding new team members. DRPs are also commonly used in site reliability engineering (SRE) as playbooks, which instruct engineers on how to respond to alerts and incidents. In all cases, they are meant to reduce stress, reduce time to resolution, and mitigate the risk of human error when restoring systems to a fully functioning state.
With each year, our organizations grow more (never less) reliant on technology. Not only do we rely on technology to serve customers, but also to perform normal business operations. If any systems go down unexpectedly, it threatens our ability to generate revenue or even operate at all.
One of the primary goals of IT is to keep systems up and running for as long as possible. This is known as high availability (HA), and it’s achieved using strategies such as distributed computing and replication. While HA is important, it does not account for black swan events, which are events that impact our applications and systems in ways that are extremely difficult to predict, such as natural disasters. These can have severe consequences, including outages, hardware failures, and unrecoverable data loss. As a result, our organization faces:
Because black swan events are inherently unpredictable, we can’t prepare for every possible threat. Instead, we must identify the ways our systems can fail and develop strategies to restore them to full service when these failures happen. Disaster recovery planning helps us accomplish this, but even with a well-designed DRP in-hand, we need a way to verify its efficacy before a black swan event occurs. This is where Chaos Engineering helps.
Chaos Engineering is the practice of systematically testing systems for failure by injecting small amounts of harm. We do this by running chaos experiments, in which we carefully and methodically inject harm into our systems. A chaos experiment involves running an attack against a specific system, application, or service. This can include adding latency to network calls, dropping packets on a specific port or network interface, or shutting down a server.
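To make this concrete, here is a minimal, hand-rolled sketch of a latency experiment: it uses Linux’s tc/netem facility, driven from Python, to add delay to a network interface and then remove it. This illustrates the idea of a controlled attack rather than how Gremlin implements its attacks; the interface name, delay, and duration are assumptions, and the commands require root privileges.

```python
import subprocess
import time

INTERFACE = "eth0"   # assumed network interface
DELAY = "100ms"      # assumed amount of latency to inject
DURATION = 60        # seconds to keep the latency in place


def run(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, check=True)


try:
    # Add latency to all outbound traffic on the interface.
    run(["tc", "qdisc", "add", "dev", INTERFACE, "root", "netem", "delay", DELAY])
    time.sleep(DURATION)  # observe how the system behaves while degraded
finally:
    # Always roll the experiment back, even if something goes wrong.
    run(["tc", "qdisc", "del", "dev", INTERFACE, "root", "netem"])
```

The `finally` block is the important part: every chaos experiment should carry its own rollback so the injected failure never outlives the test.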
Chaos Engineering is often used in site reliability engineering (SRE) to test and improve a system’s reliability. Because SREs often handle system failures, many of their practices overlap with those used in DR. For example, runbooks (also called playbooks) behave much like DRPs by providing engineers with detailed instructions on how to restore service after an incident. Teams validate their runbooks by performing FireDrills, which are deliberately created outages meant to test runbooks under realistic, but controlled, circumstances. Many teams also run GameDays, in which they reproduce past incidents in order to test the resilience of their systems.
In the context of disaster recovery, we can use Chaos Engineering to recreate or simulate a black swan event. This gives us the opportunity to test our DRP and our response procedures in a controlled scenario, as opposed to recreating disaster-like conditions manually or waiting for a real disaster. As the world’s first enterprise Chaos Engineering platform, Gremlin allows teams to run chaos experiments in a safe, simple, and secure way. Gremlin provides a comprehensive set of application and infrastructure-level attacks that we can use to recreate real-world outages, letting us fully test our response procedures without putting our systems at risk.
Next, we’ll look at how to develop a DRP, and how we can use Chaos Engineering to test and validate our plan.
While every DRP is company and situation-specific, creating and validating one involves four stages:
We’ll explain each stage in greater detail in the following sections.
The first step in disaster recovery planning is to inventory our applications and systems along with how essential each one is to the business. This will help us quickly identify and prioritize the systems that need to be recovered or restored after a black swan event.
One approach involves assigning each asset a priority level, which indicates how essential the asset is to business operations. Mapping systems in this way makes their importance explicit so that engineers can quickly determine which systems to restore first after a black swan event. We start by identifying our business-critical systems, such as those that generate revenue or provide essential services. From there, we identify any services that these assets depend on. These dependencies are our priority 0 (or P0) systems and should be given top priority during recovery. Each step in the dependency chain is assigned a higher priority number (P1, P2, etc.), which corresponds to a lower criticality. For example, a P0 system should be restored before a P2 system, since it may be a dependency of other P0, P1, or P2 systems.
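As an illustration of this mapping, the sketch below records a hypothetical inventory of systems with their assigned priority levels and dependencies, then prints a recovery order in which every dependency is restored before the systems that rely on it. The service names and priorities are made up; the point is that the inventory is written down in a form engineers can query under pressure.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical inventory: each system's priority level (P0 = restore first)
# and the systems it depends on.
inventory = {
    "orders-db":       {"priority": 0, "depends_on": []},
    "payment-gateway": {"priority": 1, "depends_on": ["orders-db"]},
    "checkout-api":    {"priority": 2, "depends_on": ["orders-db", "payment-gateway"]},
    "status-page":     {"priority": 2, "depends_on": []},
}

# Order recovery so that every dependency comes before the systems that need it.
graph = {name: set(info["depends_on"]) for name, info in inventory.items()}
for name in TopologicalSorter(graph).static_order():
    print(f"P{inventory[name]['priority']}: restore {name}")
```

In practice this inventory usually lives in a service catalog or CMDB rather than a script, but the ordering logic is the same.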
During this step, we should also identify any systems that don’t have any DR or high availability mechanisms in place. This includes systems that:
Not all of these systems will be P0, but they are still high-risk. If possible, we should implement some sort of protection on these systems before continuing with disaster recovery planning, whether through automated backups or replication.
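Where no protection exists at all, even a simple scheduled backup establishes a recovery point. The sketch below assumes a PostgreSQL database and an Amazon S3 bucket (both names are placeholders); run nightly from cron or a scheduler, it yields a recovery point roughly equal to the backup interval, and replication would be needed to do better.

```python
import subprocess
from datetime import datetime, timezone

import boto3

DB_NAME = "orders"              # placeholder database name
BUCKET = "example-dr-backups"   # placeholder offsite S3 bucket

# Timestamped dump file, e.g. orders-20200601T020000Z.dump
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
dump_file = f"{DB_NAME}-{stamp}.dump"

# Create a compressed, custom-format dump of the database.
subprocess.run(["pg_dump", "--format=custom", "--file", dump_file, DB_NAME], check=True)

# Copy the dump offsite so it survives the loss of the primary environment.
boto3.client("s3").upload_file(dump_file, BUCKET, f"postgres/{dump_file}")
```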
Now that we’ve taken stock of our systems, we need to consider the various ways that they can fail, the impact these failures will have on our business, and how we can recover from them. Remember that black swan events are both unpredictable and severe, and can include:
The nature and severity of the event will influence our planning process and estimated recovery time, which is why teams will often create multiple DRPs covering different scenarios.
In order to determine how effective our DRP is, we need a way to measure our recovery. We do this primarily using two key metrics: recovery time objective (RTO) and recovery point objective (RPO).
Recovery time objective (RTO) is the target for how quickly systems should become operational again after a failure; in other words, the maximum amount of downtime we’re willing to tolerate. Our actual recovery time after an event is measured by the recovery time actual (RTA). In site reliability engineering (SRE), this is commonly called the time to recovery (TTR), with the mean time to recovery (MTTR) being an average of the RTA across multiple outages.
In an ideal scenario, our RTO and RTA will be zero, meaning we can restore service immediately after a failure. This is the goal of organizations that provide critical services such as financial firms, government agencies, and healthcare organizations. However, a low RTO is both expensive and technically complicated to implement correctly, requiring not only automated failover and replication solutions, but also monitoring systems that can immediately detect and respond to outages. In reality, the RTA and RTO can vary significantly and range from a few minutes to several days, depending on the severity of the event.
The recovery point objective (RPO) is the amount of data loss or data corruption that we’re willing to tolerate due to an event. This is measured by the difference in time between our last backup and the time of the failure, at which point we assume that any data stored on the failed system is lost. The actual amount of data loss is measured by the recovery point actual (RPA).
As with RTO, we want our RPO and RPA to be as close to zero as possible, meaning we don’t lose any data. However, this is also difficult to achieve in practice. For example, GitLab, a popular source code management and CI/CD tool, accidentally deleted data from their production database during one incident. While they had multiple backups, nearly all of their recovery attempts were unsuccessful, resulting in a six-hour outage (RTA) and the loss of over 5,000 projects, 5,000 comments, and 700 user accounts (RPA).
Reducing our RPO means creating more frequent backups—or replicating continuously—to an offsite location, which greatly increases our operating costs and complexity. When choosing our RPO, we’ll need to balance the costs of losing production data against the costs of replicating and backing it up.
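As a worked example with made-up timestamps, the sketch below computes both actuals for a single outage: systems fail at 14:37, the last successful backup finished at 14:00, and service is restored at 15:10, giving an RTA of 33 minutes and an RPA of 37 minutes.

```python
from datetime import datetime

# Hypothetical timestamps for a single outage.
last_backup      = datetime(2020, 6, 1, 14, 0)   # last successful backup completed
failure          = datetime(2020, 6, 1, 14, 37)  # systems went down
service_restored = datetime(2020, 6, 1, 15, 10)  # service fully restored

rta = service_restored - failure  # actual downtime, compared against the RTO
rpa = failure - last_backup       # window of data assumed lost, compared against the RPO

print(f"RTA: {rta}")  # 0:33:00 -> meets a 1-hour RTO, misses a 30-minute RTO
print(f"RPA: {rpa}")  # 0:37:00 -> meets a 1-hour RPO, misses a 15-minute RPO
```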
A DRP must contain all of the information and instructions necessary for engineers to restore systems to their normal operational states, while simultaneously minimizing the impact on the business. The plan should be comprehensive enough to guide engineers through the entire recovery process, while keeping the RTA as low as possible.
The contents of a DRP should include:
A high-level overview or outline of the steps involved in the plan.
This isn’t an exhaustive list, but rather a starting point. The goal of the DRP is to provide a clear, comprehensive, and easy-to-follow set of instructions for engineers so that they can restore service as quickly as possible.
When writing our DRP, we first need to identify the disaster recovery systems that are already in place. These influence everything about our DRP, from our RTO and RPO to the required recovery steps.
The ideal backup architecture is an active-active architecture, where we operate an identical replica of our production systems in an offsite environment. Running two separate replicated instances of the same systems lets us continue operations by immediately failing over to the secondary environment while we restore the primary, resulting in zero RTO and RPO. However, this adds technical complexity to our deployment and effectively doubles our operating costs.
A similar architecture is active-passive, where the secondary (passive) environment only begins operating when the primary (active) environment fails. Data is still replicated between both environments, but the systems running our applications remain idle in the secondary until they are needed after an event. This reduces our costs, but can increase both the RTA and RPA as engineers need extra time to bring the secondary environment online.
The type of architecture you use will influence your recovery plan, particularly the steps needed to come back online. In an active-active setup, recovery might be as simple as changing a DNS entry. An active-passive setup might involve changing DNS entries, spinning up idle hardware, copying data, and more, costing us valuable time.
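For instance, if the production endpoint’s DNS is hosted in Amazon Route 53, the failover step in an active-passive plan might look something like the sketch below, which repoints the record at the secondary environment’s address. The hosted zone ID, record name, and IP address are placeholders, and other DNS providers expose equivalent APIs; treat this as an outline of one step in a DRP, not a complete failover procedure.

```python
import boto3

HOSTED_ZONE_ID = "Z0000000000EXAMPLE"  # placeholder hosted zone
RECORD_NAME = "app.example.com."       # placeholder production record
SECONDARY_IP = "203.0.113.10"          # placeholder IP of the passive environment

route53 = boto3.client("route53")

# Repoint the production record at the secondary environment.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "DR failover to secondary environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,  # a short TTL keeps the failover window, and the RTA, small
                "ResourceRecords": [{"Value": SECONDARY_IP}],
            },
        }],
    },
)
```

Keeping the record’s TTL short is part of the plan as well: a long TTL means clients keep resolving to the failed environment long after the record has been updated.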
A DRP can’t be relied on until it’s been proven to work. While the most effective way to validate a DRP is to wait for a black swan event, this is also the most risky. We can test our DRP in a much less risky manner by using Chaos Engineering to recreate conditions that would be present during a disaster—such as server outages and network failures—without causing permanent or irreversible damage to our systems. This allows us to test our DRP much more safely and much more predictably than we could otherwise.
For example, Qualtrics is an Experience Management platform that has invested heavily in disaster recovery in order to mitigate the risk of downtime. As part of their disaster recovery planning process, they needed to test large-scale disaster recovery activities, like region evacuation, without actually performing a hard failover. Using Gremlin, over 40 of their teams independently planned for failover, which saved over 500 engineering hours and reduced their per-dependency test times from several hours to just four minutes. Qualtrics still performed the full-scale DR test to make sure the plan worked properly, but using Gremlin to let each team test independently saved time in preparing for that exercise.
In the next section, we’ll show how Chaos Engineering can be used to validate a DRP by running through a mock FireDrill. Recall that a FireDrill is a planned event where we deliberately induce failure in order to test our response procedures. We’ll use Gremlin to simulate a regional outage, resolve the issue using a DRP, then do a post-mortem to determine whether our DRP was successful.
Running FireDrills is the best way to ensure that your DRPs are effective, but if you’re just starting out on your disaster recovery or Chaos Engineering journey, this may feel like a significant and risky step. If you don’t yet feel confident recreating a black swan event, start with a small-scale chaos experiment, such as blocking traffic to a specific application instead of an entire production cluster. As you become more confident in your ability to recover from small-scale failures, incrementally increase the intensity of the attack (the magnitude) and expand it across a larger number of systems (the blast radius).
Gremlin allows you to quickly and safely halt and roll back any ongoing experiments in case of unexpected circumstances. This greatly reduces the risk of an experiment causing actual harm to your systems. If you find that you need to halt an experiment while testing a DRP, make sure to document the reason so that you can integrate it into your DRP, or so that your engineers can address the root cause.
A real incident is a true test and the best way to understand if something works. However, a controlled testing strategy is much more comfortable and provides an opportunity to identify gaps and improve.
Now we’re ready to create a chaos experiment and validate our DRP. Let’s assume that our company manages an e-commerce platform that sells to customers around the world. Every minute of downtime amounts to hundreds of thousands of dollars in lost sales, and any data loss or corruption can result in unfulfilled or inaccurate orders. To reduce the risk of data loss, we maintain an active/passive architecture with continuous replication to a separate data center or region. We’ll set an RTO of 30 minutes: not unrealistically short, but not so long that it creates an extended outage.
Before starting the FireDrill, we need to notify key stakeholders in our company. These are the executives and engineering team leads who aren’t directly involved in the recovery process but will still be alerted to any major incidents, such as the Chief Technology Officer (CTO) and Director of Engineering. We want our on-call engineers to believe that this is a real incident, as it will provide the most accurate evaluation of our DRP, but we want to notify stakeholders in order to prevent the outage from escalating into a full-blown emergency response.
To simulate our disaster conditions, we’ll create a Scenario in Gremlin. Scenarios let us accurately simulate real-world disaster conditions by incrementally increasing both the blast radius and magnitude of an attack. In this example, we want to simulate losing a data center or region consisting of three servers. We’ll do this by using a blackhole attack to block all incoming and outgoing network traffic on each server for the duration of the Scenario. A shutdown attack would provide a more realistic simulation, but we can immediately stop a blackhole attack at any time, making it the safer option for teams starting out with DR.
In addition to dedicated halt buttons, Gremlin also has multiple built-in fail-safes in case our Scenario gets out of hand. For one, attacks will only run for the amount of time specified (30 minutes in this example). If we can’t restore service within that time, the attack ends and the servers resume normal operations. Attacks will also immediately halt if the Gremlin agent loses its connection to the Gremlin Control Plane.
The Scenario has three stages. First, one of the servers is immediately taken offline. After five minutes, we expand the blast radius to a second server. After another five minutes, we expand the blast radius again to the final server, taking all three servers offline. Once 30 minutes has passed from the start of the experiment, the Scenario ends and all three servers resume normal service.
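Laying the Scenario out as data makes the timeline easy to review with the team ahead of the FireDrill. The sketch below, with hypothetical hostnames, mirrors the stages described above and prints the planned schedule; it is a planning aid, not a call into the Gremlin API.

```python
# The three stages of the Scenario, each widening the blast radius.
stages = [
    {"offset_min": 0,  "blackhole": ["server-1"]},
    {"offset_min": 5,  "blackhole": ["server-1", "server-2"]},
    {"offset_min": 10, "blackhole": ["server-1", "server-2", "server-3"]},
]
TOTAL_DURATION_MIN = 30  # all attacks halt automatically at the 30-minute mark

for stage in stages:
    targets = ", ".join(stage["blackhole"])
    print(f"T+{stage['offset_min']:>2} min: blackhole traffic on {targets}")
print(f"T+{TOTAL_DURATION_MIN} min: Scenario ends, traffic restored on all servers")
```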
During the Scenario, we want to track our:
Scenario start and end times. Gremlin automatically tracks and records these.
Once we start the Scenario, the following process should occur:
Our monitoring and alerting solution detects a problem and notifies the disaster recovery team. The difference in time between the start of the Scenario and the disaster recovery team being notified of the problem is our time to detection (TTD).
The team starts following the DRP, giving us our time to execution (TTE).
We compare the time of the backup we restored from to the Scenario start time (i.e. the time that our systems were taken offline), giving us our recovery point actual (RPA).
Once the Scenario ends and we’ve restored service to our primary systems, the next step is to gather our observations from the FireDrill and evaluate the success of our DRP.
Every black swan event and DRP execution should be followed by a post-mortem, where we review our processes to determine their success. In addition to gathering key metrics such as RTA and RPA, we want to know how easy or difficult it was for the response team to follow the DRP. Our analysis should answer the following questions:
It’s unlikely that a DRP will work flawlessly on the first try. An unsuccessful FireDrill doesn’t necessarily mean that the team failed, but rather that there’s a shortcoming in our plan or procedures that we need to address. Identifying these shortcomings is why we do these exercises! We can use our experience to find ways to improve, optimize, or fix parts of our plan, or to improve our backup and recovery systems. An "unsuccessful" recovery is beneficial because it lets us find and fix problems before we ever face a real emergency.
When determining how to improve our DRP, we should consider:
Over time, IT infrastructure and business processes change, necessitating changes to our DRPs. If we don’t maintain our plans, procedures that worked well in the past may no longer be effective or even relevant. A DRP is effectively a living document that requires frequent updating and testing, otherwise it no longer suits its purpose. We should encourage our teams to not only document any infrastructure changes, but also run periodic FireDrills to validate our plans.
Today’s computer systems are far more complex and dynamic than ever before, creating more opportunities for failure. It’s no longer enough to wait for an incident to strike before implementing and testing our disaster recovery plans. With Chaos Engineering, we can proactively validate our plans to ensure business continuity even when faced with a catastrophic and damaging outage.
If you’re getting started with business continuity planning and disaster recovery planning, consider looking into a standardized procedure such as ISO 22301 Business Continuity, ISO 31000 Risk Management, or the Failure Modes and Effects Analysis (FMEA) method.
Gremlin empowers you to proactively root out failure before it causes downtime. See how you can harness chaos to build resilient systems by requesting a demo of Gremlin.