For several years, businesses around the world have been moving to the software-as-a-service (SaaS) model, even for core applications.
Over the past two years, the COVID-19 pandemic has prompted the emergence of a hybrid working model that looks like it will endure.
Both trends have enormous implications for CIOs and their colleagues in IT from all sorts of angles. One issue that is not yet fully understood is the implication of moving off the corporate network onto the internet, particularly as regards user experience.
SaaS workloads are now being delivered not only over the corporate network or via dedicated links with enforceable service-level agreements, but also over the complex web of interconnected networks we call the internet.
The situation is exacerbated by the fact that so many employees now work from home for at least part of the time. They, too, access SaaS applications − and applications or data held on the corporate network − via the internet.
In this highly heterogeneous environment, it’s no longer just the performance of the application that is of concern, but the performance of the network. A configuration change on any one of the networks over which a company’s workloads pass, for example, can degrade the user experience.
The big, dirty secret about the internet everybody forgets
The internet was designed some six decades ago with an emphasis on flexibility and resilience − not performance. It is not one network but many: there are no single points of failure, and internet traffic has a virtual infinity of possible routes to its destination.
Thus, no matter how much bandwidth or speed one theoretically has, the throughput one actually achieves reflects the performance (or otherwise) of all the disparate networks over which the data passes.
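To make the point concrete, here is a toy calculation in Python, with invented figures for three networks along a path: however fast the first and last segments are, end-to-end throughput is capped by the slowest segment, and every segment adds latency.

```python
# Toy illustration with made-up per-network figures: the end-to-end path
# is only as good as its weakest segment.
segments = {
    "home ISP":        {"mbps": 100,  "latency_ms": 8},
    "transit carrier": {"mbps": 40,   "latency_ms": 35},  # the bottleneck
    "SaaS provider":   {"mbps": 1000, "latency_ms": 2},
}

throughput = min(s["mbps"] for s in segments.values())     # weakest link wins
latency = sum(s["latency_ms"] for s in segments.values())  # delays accumulate

print(f"effective throughput ~{throughput} Mbps, one-way latency ~{latency} ms")
# effective throughput ~40 Mbps, one-way latency ~45 ms
```

A gigabit connection at either end changes nothing here: the 40 Mbps transit segment sets the ceiling.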
It’s important to understand that data packets are not routed across the public internet to achieve maximum speed, but rather to reduce the cost for the originating network.
Another important point is that the handoff between networks is governed by the Border Gateway Protocol (BGP), a technology that falls somewhat short of what the internet is being asked to do today.
For example, BGP has no visibility into congestion: when the least-cost route becomes congested, it is simply not smart enough to reroute traffic onto a less congested path.
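To sketch why, the snippet below mimics the first steps of the BGP best-path decision − highest local preference, then shortest AS path (real BGP applies further tie-breakers). The AS numbers and preference values are invented; the point is that no measure of congestion, latency or loss appears anywhere in the decision.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str         # destination, e.g. "203.0.113.0/24"
    as_path: list[int]  # autonomous systems the announcement has traversed
    local_pref: int     # policy knob, typically encoding commercial preference

def best_path(candidates: list[Route]) -> Route:
    """Simplified BGP decision: highest local preference, then shortest AS path.

    Note what is absent: latency, packet loss and congestion play no part.
    """
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

routes = [
    Route("203.0.113.0/24", as_path=[64500, 64510], local_pref=200),  # cheap peering route
    Route("203.0.113.0/24", as_path=[64520], local_pref=100),         # faster but costlier transit
]

print(best_path(routes).as_path)  # [64500, 64510]: the cheaper route wins, congested or not
```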
Additionally, BGP is far from secure − it is relatively easy for an existing route to be hijacked by a malicious actor announcing a “better” path to the destination.
In April 2018, for example, cyber criminals were able to hijack 1 300 IP addresses in the Amazon Web Services space and to masquerade as the crypto-currency website MyEtherWallet.com. Around $150 000 in digital coins was stolen during the scam.
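The mechanics behind that kind of hijack can be sketched in a few lines: forwarding follows longest-prefix match, so a more specific announcement captures the traffic regardless of who makes it. The prefixes and labels below are purely illustrative.

```python
import ipaddress

# Forwarding follows longest-prefix match: the most specific route wins,
# regardless of who announced it or how trustworthy they are.
table = {ipaddress.ip_network("203.0.113.0/24"): "legitimate origin AS"}

def lookup(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(lookup("203.0.113.10"))  # "legitimate origin AS"

# A hijacker announces a more specific /25 covering the same addresses...
table[ipaddress.ip_network("203.0.113.0/25")] = "hijacker AS"

print(lookup("203.0.113.10"))  # "hijacker AS": traffic silently diverted
```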
In short, then, it would be a mistake to assume the internet will deliver a level of performance similar to what the older, point-to-point corporate networks were able to achieve.
It may come close, but one needs to recognise that one is essentially moving off a network designed for performance onto one designed to be resilient and flexible.
Flying in the dark
This brave new world of services delivered from the cloud to wherever employees happen to be working has many well-known advantages. Productivity can be enhanced, and companies can adjust their business processes rapidly to suit changing circumstances − such as the need to move employees off-premises in a big hurry.
The downside, as noted above, is that whereas the progress of data across the corporate network was once fully visible, and could thus be managed, it is now obscured.
It’s a situation that is less than ideal: the CIO needs as much visibility into those dark networks as possible in order to deliver a similar user experience to the one he or she can deliver from the company’s own data centre.
This is easier said than done because the networks involved are interdependent, and only some of them will have direct relationships with the company.
The question ultimately is: Who is responsible for the internet? And the answer is, really, nobody. The first step for the CIO, however, is to obtain visibility into the route his or her data is taking, and thus into where the bottlenecks are.
In response to this challenge, a new generation of tools is emerging that makes it possible to monitor the internet and the experience of end-users. Without going into the technicalities, these tools combine various techniques, including BGP monitoring and packet monitoring, to provide the necessary visibility.
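As a purely illustrative sketch of the end-user side of that picture, the snippet below times full HTTP round-trips to a service and summarises the results. Real monitoring platforms combine measurements like this from many vantage points with BGP feeds and per-hop packet data; the URL here is a placeholder.

```python
import time
import urllib.request

def probe(url: str, samples: int = 5) -> dict:
    """Crude end-user experience probe: time complete HTTP round-trips."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()  # include transfer time, not just time-to-first-byte
        timings.append(time.perf_counter() - start)
    return {
        "min_s": round(min(timings), 3),
        "max_s": round(max(timings), 3),
        "avg_s": round(sum(timings) / len(timings), 3),
    }

# Placeholder endpoint: point this at the SaaS application being monitored.
print(probe("https://example.com"))
```

Consistently slow or wildly variable timings from a given location are the cue to dig into the BGP and per-hop data to identify which network on the path is at fault.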
Once this visibility is obtained, the CIO is in a position to influence the relevant network operator to make the necessary changes to facilitate a smoother passage of data − and so start the process of massaging the performance of the internet to acceptable levels.