
Real-Time Solutions for Last-Mile Connectivity

Published May 21, 2021 by Subspace Team
TL;DR
Real-time performance demands network-wide solutions, not just last-mile fixes. Subspace recognizes it’s a marathon, not a sprint, and focuses its technologies on improving network performance far beyond what can be achieved by focusing solely on the last mile.
Estimated read time: 9 minutes

The COVID-weary world may feel a longing to return to the office, but a complete return to yesteryear seems far-fetched. Work habits and real estate trends that began or accelerated in 2020 will deeply influence our emerging new normal. We seem headed toward a world where working remotely is the rule, not the exception. As a result, real-time communications will remain paramount and only grow more important as borders erode and organizational reach expands.
Dependable, performant last-mile connectivity—the final handoff between internet infrastructure and the end user—is seen as the Holy Grail of true real-time experiences.
But, as Indiana Jones said of being just one step away from his goal, “That's usually when the ground falls out from underneath your feet.”
Just because last-mile connectivity works doesn’t mean it works well. But simply pointing fingers and saying “Last mile bad!” doesn’t convey useful information. So, let’s explore some common issues encountered in last-mile connectivity and see what can be done about them.

Latency’s Impact on Real-Time Performance

Imagine placing a large order in your favorite coffee shop. The shop has the staff and machines—the bandwidth—to deliver your drinks quickly. The catch is that you can’t start drinking yours until the entire order is made and handed to you. That’s latency.
More specifically, that’s the impact latency can have on bandwidth. It doesn’t matter how fast those baristas can pour your drink into a cup; your first sip will still have to wait until your full order is put on the counter, after all 12 of the other lattes are finished too.
Similarly, it doesn’t matter how fast your traffic can move through that wide data pipeline if the first packets get mired in last-mile issues.
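To make the coffee-shop math concrete, here is a back-of-the-envelope sketch (the numbers are illustrative, not measurements) showing why latency, not bandwidth, dominates delivery time for the small payloads typical of real-time traffic:

```python
# Back-of-the-envelope: delivery time = round-trip latency + transfer time.
# All figures are illustrative.

def delivery_time_ms(payload_kb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Time to deliver a payload: one round trip plus serialization time."""
    transfer_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return rtt_ms + transfer_ms

# A small 10 KB update on a 100 Mbps link:
for rtt in (5, 50, 150):
    print(f"RTT {rtt:3d} ms -> {delivery_time_ms(10, 100, rtt):6.2f} ms total")
# RTT   5 ms ->   5.80 ms total
# RTT  50 ms ->  50.80 ms total
# RTT 150 ms -> 150.80 ms total
```

On a 100 Mbps link, pouring the drink takes well under a millisecond; nearly all of the wait is the round trip itself.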
To varying degrees, latency impacts all online applications, but real-time applications feel that impact most. The first widespread awareness of latency’s importance dates back to the turn of the century, when Google and Amazon gathered data on the impact of milliseconds on visitor behavior. As then-Amazon manager Greg Linden noted, “In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.” Not surprisingly, time sensitivity remains paramount. Google uses the Lighthouse auditing tool to assess website performance and weights search rankings accordingly. Lighthouse employs a range of metrics, but the bottom line is that initial page loading and responsiveness matter most. Even minor improvements to that performance can yield major benefits, regardless of the industry.
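If you want a rough first-byte number for your own pages, the Python standard library is enough for a quick check. A minimal sketch (example.com is a placeholder; point it at your own endpoint):

```python
import time
import urllib.request

# Minimal sketch: time how long the first response byte takes to arrive.
url = "https://example.com/"

start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    resp.read(1)  # reading a single byte approximates time-to-first-byte
ttfb_ms = (time.perf_counter() - start) * 1000
print(f"Approximate TTFB for {url}: {ttfb_ms:.1f} ms")
```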

Defining the Last Mile

To varying degrees, we all suffer from subjectivity bias. If you’re a network engineer or back-end developer, that bias can lead you to treat everything beyond your own part of the network as “the last mile.” In essence, there’s the internet core—the mesh of large Internet Service Providers and their web of backbone links—and then everything else. Given the globe-spanning reach of those backbone links and the relative proximity of more localized resources, it’s easy to see how the “last mile” idea can become a catch-all.
However, the reality of internet layers and architecture is more nuanced. As Juniper has shown, the path between the internet core and the end user’s LAN/WLAN progresses through core, edge, aggregation, and access networks. Some or all of these layers are often lumped into the “last mile,” but they’re not.
[Diagram: network layers from the internet core through edge, aggregation, and access networks. Credit to Juniper.]
To see why such definitions matter, take the example of multiplayer games. In addressing performance, the distance between gamer and game server will likely be the priority, but performance matters beyond the last mile. The real last mile is only the stretch from the gamer to the access network, such as a cable ISP’s local headend. It’s true that certain factors can greatly impair performance there: traffic spikes can clog access networks, and five neighbors maxing out their download or upload bandwidth can swamp packet flows.
The physical last mile is only a small piece of the total performance puzzle, though. If our gamer is playing on her smartphone with another gamer a block away, both will likely connect to the same cell tower. However, as Vapor IO CEO Cole Crawford wrote for Forbes, “Data sent from one device to another attached to the same cell tower, or to the internet, cannot take a straight path to its destination. Instead, due to convoluted legacy network architectures, data in transit often takes a meandering, ineffective path, sometimes ‘tromboning’ (looping out and back) thousands of miles to do so.”
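One way to see the cost of such meandering paths is to measure round-trip times yourself. Here is a minimal sketch that estimates RTT by timing TCP handshakes; the hostnames are placeholders, so substitute endpoints you actually care about:

```python
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Estimate RTT by timing a TCP handshake; best-of-N filters out noise."""
    best = float("inf")
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# Placeholder hosts: compare a "nearby" service against a distant one,
# or the same service reached through different networks.
for host in ("example.com", "example.org"):
    print(f"{host}: ~{tcp_connect_ms(host):.1f} ms")
```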
The latency implications here should be clear. Solely focusing on the last mile is like gauging a runner’s total marathon performance from measurements taken within sight of the finish line.

The Middle Mile Is Really the Long Mile

In 2018, Network World contributor Steve Garson wrote about tests he’d done comparing AWS workload performance across the public internet with performance across the AWS network. In particular, he examined latency impacts within the last mile compared to the internet core, also called the “middle mile” or “the long mile.” Results showed that, as was widely suspected, last-mile latencies can fluctuate wildly—by nearly 200 percent. But this sounds worse than it is: the median last-mile latency was only 3 ms. In contrast, “the middle miles varied from 36% to 85%—92ms to 125ms—a 20x greater impact on the connection.”
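A quick worked comparison of those figures makes the point; the exact “20x” depends on how the variance is weighted, but the order of magnitude is unambiguous:

```python
# Quick arithmetic on the figures quoted above (Garson, Network World, 2018).
last_mile_median_ms = 3.0        # median last-mile latency
last_mile_swing = 2.0            # "fluctuates by nearly 200 percent"
middle_mile_ms = (92.0, 125.0)   # measured middle-mile range

worst_last_mile = last_mile_median_ms * (1 + last_mile_swing)
print(f"Worst-case last mile: ~{worst_last_mile:.0f} ms")
print(f"Middle mile: {middle_mile_ms[0]:.0f}-{middle_mile_ms[1]:.0f} ms")
print(f"Best-case middle mile vs. worst-case last mile: "
      f"{middle_mile_ms[0] / worst_last_mile:.0f}x")
# Even reading the last mile as pessimistically as possible, the middle
# mile contributes roughly an order of magnitude more latency.
```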
Again, this is like judging marathon performance by assessing runners’ speeds in the last 100 meters. If you’ve managed to survive all the hills, pain, and exhaustion of 26 miles, your last steps will probably be much like those of your peers. It’s the middle that matters.
This middle is the domain of internet exchange points (IXPs), the interchanges through which ISPs, CDNs, and similar central internet entities connect with one another. IXPs are LANs of Ethernet switches capable of throughput measured in terabits per second. As every major-city commuter knows, though, speed and efficiency through major exchanges can vary widely. And as we’ve noted before, IXPs are businesses focused on balancing service with profit. Their public mission is to deliver packets reliably. Their mission to management and shareholders is to do so at the lowest possible cost, even if it means sending traffic along wildly inefficient paths.
Even back in 2010, University of Central Florida researchers documented how “in most cases, there is an alternate path with a significantly lower round-trip time (RTT) than the default IXP path.” Moreover, packet “losses due to IXPs are always greater than alternate paths.” In other words, IXPs rarely, if ever, select traffic paths that optimize for performance. In over a decade, the situation has not improved.

Overloaded and Underwhelmed

We began here by mentioning the present tilt toward a remote-first work world. As many have found, remote-first leads to higher employee satisfaction and productivity. The road to the future has its fair share of potholes, though. Since March 2020, the connected world has discovered that the Service Level Agreements (SLAs) common between ISPs and corporate customers do not apply to consumers. In April 2020, worldwide data showed how, on average, internet traffic—much of it video communications and gaming—had spiked due to the work-at-home transition, taking a noticeable toll on download performance. We know that bandwidth and demand don’t always align, and our needs are only growing across the globe.
Real-time applications depend on low-latency connections, but overburdened networks struggle to maintain fast, dependable packet flow. APMdigest reported that the digital transformation the pandemic forced on DevOps and IT teams had yielded more service incidents, longer incident resolution times, and higher downtime costs. The public internet has always been “a match made in Hell” for real-time services like VoIP, and WFH has only magnified the flaws people had grown to tolerate (to greater or lesser degrees).
Consider a basic function like Quality of Service (QoS), which is built into practically every performance-class consumer router of the past decade or more. QoS classifies and prioritizes packets so that the application or traffic type the user selects gets the best possible performance. But operational realities prevent QoS markings from carrying through the middle mile. Moreover, IXP-level providers and services still lack the ability to anticipate failing routes, which results in service degradation, higher latency, and increased jitter and packet loss. We’ve detailed some of the reasons why this is and how Subspace does things differently in recent articles.
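For illustration, here is how an application can at least ask for priority treatment by setting the DSCP bits on its own packets—a minimal Python sketch for Linux/macOS-style sockets, with a documentation-range placeholder address. Whether anything beyond your home router honors the marking is precisely the middle-mile problem described above:

```python
import socket

# Mark outbound UDP packets with DSCP EF (Expedited Forwarding, value 46),
# the class conventionally used for voice traffic. DSCP occupies the top
# six bits of the IP TOS byte, hence the shift left by two.
DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# 192.0.2.10 is a documentation-range placeholder, not a real endpoint.
sock.sendto(b"voice frame", ("192.0.2.10", 5004))

# A QoS-aware home router may prioritize these packets, but once they
# cross into the middle mile the marking is routinely ignored or rewritten.
```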

All the Miles

In a sense, focusing on “last mile” fixes to performance ignores the nature of the network. Network performance depends on an entire continuum of resources, from user to internet core. It’s not about the last mile; it’s about every mile. Subspace understands this and designed its network, monitoring, and routing functionality accordingly.
Subspace relies on precision measurement across every major network link to construct a real-time weather map of network conditions along a multitude of different paths. We then combine this with lightning-fast algorithms able to detect potential network issues at sub-millisecond resolution and respond as needed. Subspace’s peering agreements and inter-network partnerships allow us to speed through core backbone connections and deliver striking performance improvements, in part because we optimize BGP routing for performance rather than economics.
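As a toy illustration of the underlying idea—choosing paths by measured latency rather than hop count or business cost—consider a small shortest-path search over fresh per-link measurements. This is a sketch of the concept, not Subspace’s actual routing code; the topology and numbers are invented:

```python
import heapq

def lowest_latency_path(links, src, dst):
    """Dijkstra over measured link latencies.

    links: {node: [(neighbor, latency_ms), ...]}
    """
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in links.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []

# Invented measurements: the "default" path through a congested IXP
# versus an alternate path that fresh telemetry reveals to be faster.
links = {
    "A":   [("IXP", 40.0), ("alt", 12.0)],
    "IXP": [("B", 55.0)],
    "alt": [("B", 14.0)],
}
print(lowest_latency_path(links, "A", "B"))  # (26.0, ['A', 'alt', 'B'])
```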
When your real-time traffic is on Subspace, PacketAccelerator reduces latency and accelerates packets, while Subspace WebRTC-CDN allows you to run TURN globally, without having to deploy or manage servers of your own.
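From the application side, pointing a WebRTC client at a hosted TURN service is a small configuration change. Here is a minimal sketch using the open-source aiortc library, with placeholder URL and credentials (a real deployment would use the values its TURN provider supplies):

```python
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

# Placeholder TURN endpoint and credentials, for illustration only.
config = RTCConfiguration(iceServers=[
    RTCIceServer(
        urls="turn:turn.example.com:3478",
        username="demo-user",
        credential="demo-secret",
    ),
])
pc = RTCPeerConnection(configuration=config)
# Media tracks and signaling proceed as in any WebRTC app, with relay
# candidates gathered through the hosted TURN service.
```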
After decades of evolution down a certain path, the internet is fairly well locked into its trajectory—which is fine. Obviously, it does what it does very well and has transformed the world as a result. But Subspace recognizes that today’s real-time applications need an improved level of internet functionality. We employ managed services, partnerships, and a range of packet prioritization technologies to positively impact network performance far beyond what can be achieved by solely focusing on the last mile. In short, Subspace puts you in control of your network. We have mastered the middle mile and dramatically increased network reliability so your applications, in turn, can be more reliable for your customers.
We have made the inherently unpredictable internet predictable at last.
Want to start building on Subspace today? Sign up here.
