This past January – in what seems like a different world now – a story briefly hit the headlines and was seen as more of a quirk than a threat. It was soon eclipsed by the enormity of the COVID-19 pandemic, which has occupied the world’s attention ever since. But this story should not be forgotten, because it has roots that extend far back into the past and, more importantly, has serious implications for computing and threat management far into the future. And it starts with some parking meters in New York City.
The problem – as small as it might seem to some – was that immediately after New Year’s Day 2020, some parking meters in NYC stopped accepting credit and parking cards. At fault was the credit card payment software built into the meters. According to the manufacturer, its anti-fraud system had not been updated, which meant the software essentially expired as the calendar turned to January 1, 2020. Technicians were dispatched to manually adjust the individual meters, one at a time, and in the meantime, New Yorkers were still able to pay using coins.
But dismissing this story as a mere quirk would be like walking across a frozen lake and shrugging your shoulders at a small crack that radiates out from your footstep. Small cracks inevitably lead to larger ones and tend to grow exponentially.
The NYC parking meter software bug serves as a very real reminder that the applications and networks we rely on for every part of our lives are still inextricably interconnected, and much of this connectivity includes legacy technology that was built and deployed in an earlier time, usually with a finite life that is fast approaching its end or has already reached it.
The biggest of these expiry problems was, of course, the Year 2000 Problem. Collectively known as Y2K, it lurked in hundreds of thousands of systems worldwide that stored years as two digits, which meant a move beyond 1999 would be a move backward to 1900. There were worries of aircraft falling out of the sky, power grid shutdowns and all manner of destructive events to come. Fortunately, enough skilled programmers, many brought out of retirement, were marshaled, and enough money was spent – estimated at $500 billion globally – to thwart most of the problems.
The fact that nothing bad happened on January 1, 2000, was not hailed as a victory for proactive crisis management; instead, it was taken as proof that the whole thing had been a nonevent, overstated from the start. This is a problem with human beings in general: wired for reaction rather than pro-action, we find it exceedingly difficult to comprehend threats until they are actually upon us. Endless case studies will be written on this topic by people observing different countries’ responses to the COVID-19 pandemic, climate change, earthquake preparation and much more.
Software glitches will continue to plague us. In 2036 and 2038, two separate families of computer systems that rely on time counters will roll over into potential chaos if they are not upgraded or replaced. The 2038 problem affects systems that store Unix time in a signed 32-bit integer, which maxes out on January 19 of that year; before that, the Network Time Protocol’s 32-bit seconds field is expected to roll over in February 2036.
In all likelihood, the affected systems will have been replaced within the next 16 years, but the dangers still exist. The most obvious place, once again, is legacy systems that have accumulated years of patches and layering. The more pressing circumstance, however, may be systems that must calculate expiry dates far in advance: software that computed a 20-year expiry in 2018, for example, had already crossed the 2038 boundary. Such cases may produce computing failures or physical defects in products long before the year 2036 comes around.
The elephant in the room is not so much the software glitches themselves, but the complacency that influences decision making at all levels within an organization. In addition to shrugging away potential risks, humans also tend to avoid situations that make them fearful or uncomfortable. On a personal level this is called procrastination, but on a corporate and strategic level, proactive change may be suppressed by financial concerns or fear of reputational damage. An airline, for example, will likely never want to go to the press to describe a software glitch that may affect flight safety, even if it is being fixed.
These are examples of challenges, both past and future, that organizations need to be aware of. There will be many more – not just in terms of calendar-specific software glitches, but also in terms of data security, as networks continue to connect with IoT devices, phishing crimes grow in sophistication, and threat actors exploit artificial intelligence and machine learning to dig through stored legacy data to find new and innovative ways to prey on organizations worldwide.
Safety and preparation come from having a team of people who understand threats and also know how to speak the language of decision makers whose priorities might lie elsewhere. It is not enough to just be good at cybersecurity. Communication skills and strategic awareness are vital components of this battle. This is why (ISC)² does what it does. Its mandate is to teach and certify cybersecurity professionals who not only understand technical challenges and threats, but who can also play a more involved role in the proactive governance of an organization.
For more information, read the Proactive Cybersecurity Beyond COVID-19 white paper.
By Steve Prentice
Steve Prentice is a project manager, writer, speaker and expert on productivity in the workplace, specifically the juncture where people and technology intersect. He is a senior writer for CloudTweaks.