This article was originally published on TechCrunch.
I recently had a scheduled video conference call with a Fortune 100 company.
Everything on my end was ready to go; my presentation was prepared and well-practiced. I was set to talk to 30 business leaders who were ready to learn more about how they could become more resilient to major outages.
Unfortunately, their side hadn't set up the proper permissions in Zoom to add new people to a trusted domain, so I wasn't able to share my slides. We scrambled to find a workaround at the last minute while the assembled VPs and CTOs sat around waiting. I ended up emailing my presentation to their coordinator, calling in from my mobile and telling the coordinator when to advance to the next slide. Needless to say, it wasted a lot of time and wasn't the most effective way to present.
At the end of the meeting, I said pointedly that if there was one thing they should walk away with, it's that they had a vital need to run an online fire drill with their engineering team as soon as possible. Because if a team is used to working together in an office, with access to tools and proper permissions in place, it can be quite a shock to find out in the middle of a major outage that they can't respond quickly and adequately. Issues like these can turn a brief outage into one that lasts for hours.
Quick context about me: I carried a pager for a decade at Amazon and Netflix, and what I can tell you is that when either of these services went down, a lot of people were unhappy. There were many nights where I had to spring out of bed at 2 a.m., rub the sleep from my eyes and work with my team to quickly identify the problem. I can also tell you that working remotely makes the entire process more complicated if teams are not accustomed to it.
There are many articles about best practices aimed at a general audience, but engineering teams face specific challenges as the ones responsible for keeping online services up and running. And while leading tech companies already have sophisticated IT teams and operations in place, what about financial institutions, hospitals and other industries where IT is a tool, but not a primary focus? It's often the small things that make all the difference when working remotely: things that seem obvious in the moment, but may have been overlooked.
So here are some tips for managing incidents remotely:
Designate a call leader
There should be one person, the "call leader," responsible for gathering critical updates and sharing them with key stakeholders during an outage. Having a single point of contact makes communication and collaboration less confusing, especially in a remote and distributed environment. The call leader is responsible for:
- Providing status updates to the call on a regular basis
- Ensuring people are not acting on their own
- Making sure only one thing is tested at a time
- Making judgment calls when team members aren't sure which course of action to take, by collecting all available information and then issuing a decision
Get everyone the right hardware
If you're an engineering manager, make sure each of your team members feels adequately set up, and let them expense improvements to their home office. Having office-quality internet at home is crucial when that becomes your primary workplace. Most engineering teams at sophisticated IT organizations will already provide work laptops, but for many companies this is a novel idea worth exploring. Providing a budget for small things like webcams, microphones and extra monitors can improve communication, response time and the ability of team members to actively contribute to solving emergent problems. Make sure that company-provided hardware is also properly equipped with all the software team members need to do their jobs effectively.
Also, I won't name names, but there have been a handful of times in my career when the person responsible for a service left their two-factor authentication app or hardware key somewhere inconvenient. So when that service went down, they were not able to get in and fix the problem quickly. This can add a lot of unnecessary time and frustration, so remember to keep your phone with its authenticator app, or your hardware keys (Gemalto, YubiKey, etc.), nearby at all times when you are on call!
Create an instant messaging channel
Having an easy and quick way to share graphs, logs, details, changes and so on is crucial to limiting the length and scope of a major outage. Creating a unique channel in your instant messaging (IM) app (such as Slack, Discord or IRC) dedicated to the specific outage at hand accomplishes a few things. For one, it keeps distracting noise out of the other channels people rely on. It also provides a home for all key stakeholders, and is a place to direct people who want to get involved. Importantly, it also serves as a timeline of what happened, which may be useful later during a retrospective.
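If your team happens to use Slack, even the channel setup can be scripted so it happens the same way during every incident. Here is a minimal sketch (my own illustration, not something prescribed above) using Slack's Web API through the slack_sdk Python package; the token, channel name and user IDs are placeholders, and your workspace would need a bot with permission to create channels, invite users and post messages.

```python
# Sketch: spin up a dedicated incident channel, invite responders, post the first update.
# Token, channel slug and user IDs below are placeholders for illustration only.
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

client = WebClient(token="xoxb-your-bot-token")  # placeholder bot token

def open_incident_channel(slug: str, responder_ids: list[str], summary: str) -> str:
    """Create the channel, pull in the first responders and seed the timeline."""
    try:
        created = client.conversations_create(name=f"incident-{slug}")
        channel_id = created["channel"]["id"]
        client.conversations_invite(channel=channel_id, users=",".join(responder_ids))
        client.chat_postMessage(
            channel=channel_id,
            text=f":rotating_light: {summary}\nCall leader will post status updates here.",
        )
        return channel_id
    except SlackApiError as err:
        raise RuntimeError(f"Slack API call failed: {err.response['error']}") from err

# Hypothetical usage:
# open_incident_channel("2020-04-07-checkout", ["U012ABCDEF"], "Checkout error rate spiking")
```

The value is less in the automation itself than in the consistency: every outage gets a channel with a predictable name, the right people in it and a first message that starts the timeline.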
Follow conference call etiquette
This sounds simple, but it can have a drastic impact on your ability to resolve an incident quickly: Be a good citizen. This applies to absolutely everyone now. When there's no clear agenda, when people are talking over one another, when there's a ton of background noise, all of it distracts from the problem at hand. The call leader should run the conference call while the team is responding to an incident, and each person will have a chance to share their update. When not speaking, team members should stay on mute, so the rest of the call doesn't hear their keyboards clacking away while notes are being taken.
Run an online fire drill
If you've never run an online fire drill, this is the time to do it. The idea is to dedicate time when everything is fine to simulate a failure; this is often done using Chaos Engineering. One person causes a simple failure (they are the safety net, watching the whole time, ready to roll things back with a fix if needed). The rest of the team gets alerted and paged, they log in, and it is their task to find the failure. This method forces teams to do more than just pay lip service: if there are weaknesses in your team's processes, you will find them. You can then adjust accordingly, build up muscle memory and be better prepared for when disaster actually strikes.
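One small, concrete way to play the safety-net role is to script the failure injection so the rollback is guaranteed even if the drill runs long. The sketch below is my own illustration rather than a specific tool from this article: it assumes a Linux host with systemd, passwordless sudo for systemctl, and a hypothetical, non-critical service named demo-app that is safe to stop.

```python
# Fire-drill sketch: stop a hypothetical "demo-app" service to simulate an outage,
# and always restart it when the drill ends (or when the time limit is reached).
import subprocess
import time
from contextlib import contextmanager

DRILL_SERVICE = "demo-app"     # hypothetical service that is safe to break
MAX_DRILL_SECONDS = 15 * 60    # hard limit before the automatic rollback

def systemctl(action: str, service: str) -> None:
    # check=True makes a failed stop/start raise immediately instead of failing silently.
    subprocess.run(["sudo", "systemctl", action, service], check=True)

@contextmanager
def injected_failure(service: str):
    """Stop the service, hand control back to the observer, always restart it."""
    systemctl("stop", service)
    started = time.monotonic()
    try:
        yield
    finally:
        systemctl("start", service)
        print(f"Rolled back after {time.monotonic() - started:.0f}s")

if __name__ == "__main__":
    with injected_failure(DRILL_SERVICE):
        # Monitoring should now page the rest of the team; the safety-net engineer
        # just watches the incident channel and the clock.
        deadline = time.monotonic() + MAX_DRILL_SECONDS
        while time.monotonic() < deadline:
            time.sleep(30)   # Ctrl+C ends the drill early; the rollback still runs
```

The hard time limit and the guaranteed restart do in code what the safety-net person does on the call: whatever the responding team finds or misses, the injected failure gets rolled back.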
In short, if you've established the call leader, created the IM room, gone over conference bridge etiquette, put your runbooks online, have your 2FA needs handy and have all the right hardware and software, then run an online fire drill to test that when something unexpectedly fails, the team is ready to respond quickly.