5G Streaming Questions Answered

The broad adoption of 5G, the Internet of Things (IoT) and edge computing is reshaping the nature and role of enterprise and cloud storage. Preparing for this significant disruption is important. It’s a topic the SNIA Cloud Storage Technologies Initiative covered in our recent webcast “Storage Implications at the Velocity of 5G Streaming,” where my colleagues, Steve Adams and Chip Maurer, took a deep dive into the 5G journey, streaming data and real-time edge AI, 5G use cases and much more. If you missed the webcast, it’s available on-demand along with a copy of the webcast slides.

As you might expect, this discussion generated some intriguing questions. As promised during the live presentation, our experts have answered them all here.

Q. What kind of transport do you see that is going to be used for those (5G) use-cases?

A. At a high level, 5G consists of three primary slices: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC). Each is suited to different use cases: normal smartphone usage relies on eMBB, factory robotics relies on URLLC, and intelligent device or sensor applications such as farming, edge computing and IoT rely on mMTC.
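As a toy illustration of the slice-to-use-case mapping above (the use-case names and the lookup function are our own invention, not anything defined by 3GPP), the routing of an application class to a slice is just a dictionary lookup:

```python
# Hypothetical sketch: map an application class to the 5G slice described
# above. The keys and the default choice are illustrative assumptions.
SLICES = {
    "eMBB": "enhanced mobile broadband (e.g., smartphone video)",
    "URLLC": "ultra-reliable low-latency communications (e.g., factory robotics)",
    "mMTC": "massive machine-type communications (e.g., farm sensors, IoT)",
}

USE_CASE_TO_SLICE = {
    "smartphone": "eMBB",
    "factory_robot": "URLLC",
    "farm_sensor": "mMTC",
}

def slice_for(use_case: str) -> str:
    """Return the 5G slice best suited to a use case (default: broadband)."""
    return USE_CASE_TO_SLICE.get(use_case, "eMBB")

print(slice_for("factory_robot"))  # URLLC
```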

The primary 5G standards-making bodies include:

  • The 3rd Generation Partnership Project (3GPP) – formulates 5G technical specifications which become 5G standards. Release 15 was the first release to define 5G implementations, and Release 16 is currently underway.
  • The Internet Engineering Task Force (IETF) partners with 3GPP on the development of 5G and new uses of the technology. Particularly, IETF develops key specifications for various functions enabling IP protocols to support network virtualization. For example, IETF is pioneering Service Function Chaining (SFC), which will link the virtualized components of the 5G architecture—such as the base station, serving gateway, and packet data gateway—into a single path. This will permit the dynamic creation and linkage of Virtual Network Functions (VNFs).
  • The International Telecommunication Union (ITU), based in Geneva, is the United Nations specialized agency focused on information and communication technologies. ITU World Radiocommunication Conferences revise the international treaty governing the use of the radio-frequency spectrum and the geostationary and non-geostationary satellite orbits.

To learn more, see

Q. What if the data source at the edge is not close to where the signal is good enough to connect to the cloud? And how should these algorithms / data streaming solutions be considered?

A. When we look at 5G applications like massive machine-type communications (mMTC), we expect many kinds of devices will connect only occasionally, e.g., battery-operated sensors attached to farming water sprinklers or water pumps. Therefore, long-distance, low-bandwidth, sporadically connected 5G network applications will need to tolerate long stretches of no contact without losing context or connectivity, as well as adapt to variations in signal strength and quality.

Additionally, 5G supports three broad ranges of wireless frequency spectrum: low, mid and high. The lower frequency range provides lower bandwidth over broader, wide-area wireless coverage. The higher frequency range provides higher bandwidth over a more limited, focused coverage area. To learn more, check out The Wired Guide to 5G.

On the second part of the question regarding algorithms / data streaming solutions, we anticipate streaming IoT data from sporadically connected devices can still be treated as a streaming data source from a data ingestion standpoint. It is likely to consist of broad snapshots (pre-stipulated time windows) with potential intervals of null data when compared with other types of data sources. Streaming data, regardless of the interval at which it arrives, has value because the last known state can be compared against previously known states. Calculating trends is one of the most common and meaningful ways to extract value and make decisions.
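A rough sketch of the last-known-state and trending ideas above (the function names and sensor readings are hypothetical, invented only for illustration): null intervals are carried forward from the last known state, and a crude trend compares the newest known value against the oldest.

```python
from typing import Optional

def carry_forward(samples: list[Optional[float]]) -> list[Optional[float]]:
    """Fill null intervals (no-contact windows) with the last known state."""
    filled, last = [], None
    for s in samples:
        if s is not None:
            last = s
        filled.append(last)
    return filled

def trend(samples: list[Optional[float]]) -> float:
    """Crude trend: difference between the last and first known values."""
    known = [s for s in samples if s is not None]
    return known[-1] - known[0] if len(known) >= 2 else 0.0

# A sporadically connected sensor: None marks windows with no contact.
readings = [21.0, None, None, 22.5, None, 24.0]
print(carry_forward(readings))  # [21.0, 21.0, 21.0, 22.5, 22.5, 24.0]
print(trend(readings))          # 3.0
```

Even with long gaps, the stream still yields a usable state history and a direction of change, which is often all a downstream decision needs.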

Q. Is there an improvement with the latency in 5G from cloud to data center?

A. By 2023, we should see the introduction of 5G ultra-reliable low-latency communication (URLLC) capabilities, which will increase the amount of time-sensitive data ingested into and delivered from wireless access networks. This will increase demand for fronthaul and backhaul bandwidth to move time-sensitive data from remote radio units to baseband stations and aggregation points like metro-area central offices.

As an example, to reduce latency, some hyperscalers have multiple connections out to regional co-location sites, central offices and, in some cases, sites near cell towers. To save on backhaul transport costs and improve 5G latency, some cloud service providers (CSPs) are motivated to locate their networks as close to users as possible.

Independent of CSPs, we expect that backhaul bandwidth will increase to support the growth in wireless access bandwidth of 5G over 4G LTE. But that isn’t the only reason backhaul bandwidth is growing. COVID-19 revealed that many cable and fiber access networks were built to support much more download than upload traffic. The explosion in work and study from home, as well as video conferencing, has changed the ratio of upload to download. So many wireline operators (which are often also wireless operators) are upgrading their backhaul capacity in anticipation that not everyone will go back to the office any time soon, and some may hardly ever return.

Q. Are 5G speeds assured end-to-end (i.e., from mobile device to tower and within the MSP’s infrastructure)? We understand most MSPs have improved the low-latency speeds between device and tower.

A. We expect specialized services like 5G ultra-reliable low-latency communication (URLLC) will help deliver low-latency, narrow-jitter communications. As for “assured,” this depends on the service provider’s SLA. More broadly, 5G mobile broadband and massive machine-type communications are typically best-effort networks, so there is generally no overall guaranteed or assured latency or jitter profile.

5G supports the largest range of radio frequencies. The high frequency range uses millimeter (mm) wave signals to deliver a theoretical maximum of 10 Gbps, which means reduced latency along with higher throughput. For more information on deterministic over-the-air network connections using 5G URLLC and TSN (Time Sensitive Networking), see this ITU presentation “Integration of 5G and TSN.”

To provide a bit more detail, mobile devices communicate wirelessly with Remote Radio Head (RRH) units co-located at the antenna tower site, while baseband unit (BBU) processing is typically hosted in local central offices. The connection between RRHs and BBUs is called the fronthaul network (from antennas to central office). Fronthaul networks are usually fiber optic supporting the eCPRI 7.2 protocol, which provides time-sensitive network delivery. Therefore, this portion of the wireless data path is deterministic even if the over-the-air or backhaul portions of the network are not.
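To make the path above concrete, an end-to-end latency budget is just the sum of the per-segment latencies. The segment names come from the paragraph above; the millisecond figures are invented placeholders, not measured values.

```python
# Hypothetical one-way latencies in milliseconds for each network segment.
# Only the fronthaul figure is deterministic (eCPRI 7.2 over fiber); the
# over-the-air and backhaul figures vary in a best-effort network.
segments = {
    "over_the_air": 1.0,  # device <-> remote radio head (RRH)
    "fronthaul": 0.1,     # RRH <-> baseband unit (BBU)
    "backhaul": 4.0,      # central office <-> metro aggregation point
}

total = sum(segments.values())
print(f"end-to-end one-way budget: {total:.1f} ms")  # 5.1 ms
```

The point of the breakdown is that tightening only the deterministic fronthaul segment barely moves the total; the variable over-the-air and backhaul segments dominate the budget.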

Q. Do we use a lot of matrix calculations in streaming data, and do we have a circuit model for matrix calculations for convenience?

A. This applies case by case, based on the type of data. What we often see is that many edge hardware systems include extensive GPU support to facilitate matrix calculations for real-time analytics.
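As a minimal, CPU-only sketch of why matrix calculations matter for streaming analytics (NumPy stands in here for the GPU libraries an edge system would actually use, and the shapes and weights are invented), a whole window of streamed feature vectors can be scored with a single matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 feature vectors of length 4 arrive per time window,
# and a pre-trained 4x3 weight matrix maps features to 3 class scores.
features = rng.standard_normal((8, 4))  # one row per streamed sample
weights = rng.standard_normal((4, 3))   # fixed model parameters

scores = features @ weights             # one matmul scores the whole batch
predictions = scores.argmax(axis=1)     # predicted class per sample

print(scores.shape)       # (8, 3)
print(predictions.shape)  # (8,)
```

Batching the window into one matrix-matrix product is exactly the access pattern GPUs accelerate, which is why edge hardware leans on them for real-time scoring.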

Q. How do you see the deployment and benefits of Hyperconverged Infrastructure (HCI) on the edge?

A. Great question. The software flexibility of HCI can provide many advantages on the edge over dedicated hardware solutions. Ease of deployment, scalability and service provider support make HCI an attractive option. See this very informative article from TechTarget, “Why hyper-converged edge computing is coming into vogue,” for more details.

Q. Can you comment on edge-AI accelerator usage and future potentials? What are the places these will be used?

A. Edge processing capabilities include many resources to improve AI capabilities. Things like computational storage and increased use of GPUs will only serve to improve analytics performance. Here is a great article on this topic.

Q. How important is high availability (HA) for edge computing?

A. For most enterprises, edge computing reliability is mission critical. Therefore, almost every edge processing solution we have seen includes complete and comprehensive HA capabilities.

Q. How do you see Computational Storage fitting into these Edge use cases?  Any recommendations on initial deployment targets?

A. The definition and maturity of computational storage are rapidly evolving, and it is targeted to offer huge benefits for managing and scaling 5G data usage on distributed edge devices. First and foremost, 5G data can be used to train deep neural networks at higher rates due to the parallel operation of in-storage processing. Petabytes of data may be analyzed in storage devices or within storage enclosures, not moved over the network for analysis. Secondly, computational storage may also accelerate the process of conditioning data or filtering out unwanted data.
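A toy sketch of the filtering idea (the record format and the temperature predicate are invented for illustration): if the filter runs inside the storage device, only the records that pass it ever cross the network.

```python
def in_storage_filter(records, predicate):
    """Simulate an in-storage computation: drop unwanted records
    before they are moved over the network for analysis."""
    return [r for r in records if predicate(r)]

# Hypothetical sensor log held on the storage device: (device_id, temp_c)
log = [("pump-1", 18.2), ("pump-2", 91.7), ("pump-3", 19.0), ("pump-4", 88.4)]

# Only anomalous readings (above 80 C) leave the storage enclosure.
to_ship = in_storage_filter(log, lambda r: r[1] > 80.0)

print(len(to_ship))  # 2 -- half the records never cross the network
```

The same pattern scales from this four-row toy to the petabyte case in the answer above: the network carries results, not raw data.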

Q. Do you think that the QUIC protocol will be a standard for the 5G communication?

A. So far, TCP is still the dominant transport layer protocol within the industry. QUIC was initially proposed by Google and is widely adopted in the Chrome/Android ecosystem. QUIC is getting increased interest and adoption due to its performance benefits and ease of implementation (it can be implemented in user space and does not need OS kernel changes).

For more information, here is an informative SNIA presentation on the QUIC protocol.

Please note this is an active area of innovation. There are other methods, including Apple iOS devices using MPTCP, and for inter/intra data center communications RoCE (RDMA over Converged Ethernet) is also gaining traction, as it allows for direct memory access without consuming host CPU cycles. We expect TCP, QUIC and RDMA will all co-exist, and other new L3/L4 protocols will continue to emerge for next-generation workloads. The choice will depend on workloads, service requirements and system availability.
