In 5G networks and beyond, radio access network (RAN) virtualization is a key concept: it offers a practical way to optimize hardware and radio resource management while addressing energy cost concerns. Yet a complete implementation of Virtual RAN (V-RAN) remains elusive despite numerous deployment trials worldwide. Current platforms tackle various aspects of V-RAN but encounter significant limitations. This article covers the development of RAN virtualization to date and the remaining obstacles to its complete implementation in upcoming mobile radio networks. First, we review the basics of virtualization and how it has evolved over the last few decades, explaining the differences between the Cloud RAN (C-RAN), V-RAN, and Open RAN concepts. We then provide a detailed tutorial on how to implement V-RAN on platforms from various vendors. Finally, we clarify the problems that remain with vendor platforms and illustrate through simulations the limits of their functionality in terms of supported Base Band Units (BBUs) and devices.
Virtualization involves running multiple operating systems on a single physical machine by creating virtual versions of its resources at a higher abstraction level. It offers flexibility by allowing multiple OS instances on one computer and easy migration to other machines. Virtualized instances ensure uninterrupted service during shutdowns or maintenance and simplify scaling by making it easy to add or remove nodes. Virtualization increases hardware utilization by hosting multiple OS instances simultaneously and adapts to workload changes by reallocating resources among virtual machines. Financially, it reduces acquisition costs and lowers maintenance and electricity expenses. Administratively, it turns physical machines into easily transferable files, simplifying migration.
The hypervisor serves as the linchpin of virtualization, forming the core component that enables the deployment of virtualization platforms. These platforms facilitate the concurrent operation of multiple operating systems on the same hardware. Various types of virtualization have been developed within the realm of computer science.
Full Virtualization: In this paradigm, the hypervisor emulates a complete hardware environment for each virtual machine, giving each one its own set of virtual hardware resources. This setup allows virtual machines to execute applications independently. Full virtualization offers benefits such as user isolation and shared use of a single computer system among multiple users. However, it may affect system performance and application speed because the hypervisor’s data processing consumes a portion of the physical server’s computing power and resources. The hypervisor also requires suitable interfaces, known as device drivers, to access the machine’s resources; the absence of drivers for specific hardware can prevent full virtualization from operating on a given machine, posing challenges for organizations adopting new hardware advancements.
Para-Virtualization: In contrast to full virtualization, para-virtualization takes a different approach: instead of fully emulating the hardware, the hypervisor exposes a programming interface (API) through which modified guest operating systems request access to the host’s physical hardware. This method delivers performance advantages over full virtualization. Para-virtualization also streamlines backup processes, facilitates rapid migrations, enhances system utilization, promotes server consolidation, and contributes to power conservation, among other benefits. The performance gains are workload-dependent: the degree of benefit is closely tied to the volume of hypercalls, the calls a guest operating system makes to the hypervisor. The extent to which these hypercalls reduce compute time for a specific workload determines the actual performance enhancement; workloads generating numerous hypercalls may see substantial improvements compared to running the same application under full virtualization.
Virtualization can be classified based on its application in system communications into three main types:
Application virtualization decouples application execution from the local environment by configuring remote applications on a server and delivering them to users’ devices. This allows users to interact with virtualized applications as if they were installed locally, enhancing remote work, portability and centralized application management. Key benefits include improved flexibility, reduced conflicts between applications and simplified deployment processes, making it easier for organizations to manage software across different environments.
Also known as Operating System-level virtualization, OS virtualization enables multiple isolated user spaces, called containers, to operate on a single OS kernel. This approach allows for efficient resource utilization by sharing physical resources among containers rather than allocating dedicated hardware for each OS. The result is minimized resource wastage and improved system performance, as multiple applications can run concurrently without the overhead associated with traditional virtualization methods.
Network virtualization abstracts network resources from the underlying physical hardware, creating independent virtual networks. This technology aims to establish a flexible, scalable, and programmable virtual network layer that operates autonomously from the physical infrastructure. Organizations can manage and configure these virtual networks independently, enhancing their ability to adapt to changing requirements while optimizing resource allocation and improving overall network management.
Since the concept of virtualization emerged, there has been significant research growth in network virtualization, particularly in wireless networks. This research aims to implement virtualization and control mechanisms through Network Function Virtualization (NFV) and Software Defined Networking (SDN) technologies in wireless environments. As a result, several new models for wireless networks have been proposed, including:
These advancements reflect the ongoing efforts to improve network efficiency and management in increasingly complex wireless environments. Following this development, RAN virtualization emerged as a Cloud Radio Access Networks (C-RAN) advancement within 5G cellular networks.
C-RAN Architecture
Cloud Radio Access Network (C-RAN), also referred to as Centralized RAN, represents a significant advancement in radio access network architecture, moving away from the decentralized structure typical of 4G networks. In traditional RANs, baseband processing resources were distributed, with baseband units (BBUs) and remote radio heads (RRHs) co-located at each cell site. The introduction of C-RAN centralizes these baseband processing resources into a unified pool, facilitating dynamic resource sharing among base stations and allowing for on-demand resource allocation.
The C-RAN architecture consists of three main components: the Remote Radio Heads (RRHs), the fronthaul links connecting them to the central site, and the centralized BBU pool. Its main advantages and limits are summarized below.
| Advantages | Limits |
|---|---|
| Reduction of CAPEX and OPEX; better interference handling; better adaptability to non-uniform traffic; minimization of energy consumption; improved spectral efficiency (SE); ease of maintenance and expansion | Fronthaul latency; security problems |
V-RAN Architecture
The Virtual RAN (V-RAN) concept is an advancement of the Centralized RAN (C-RAN) architecture, first introduced in a 2014 research paper on RAN virtualization. This innovative architecture aims to provide a simplified view of the physical network by logically grouping macro and small cells into virtual cells and incorporating virtual components into the C-RAN BBU pool. The V-RAN architecture consists of three primary elements: Remote Radio Heads (RRHs), the fronthaul, and the virtual BBU pool, which utilizes Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies. This decoupling of hardware from software functions allows for the creation of virtual BBUs that can support various 5G services, such as Enhanced Mobile Broadband (eMBB), Machine Type Communication (MTC), and Ultra-Reliable Low Latency Communication (URLLC).
Research on RAN virtualization has progressed significantly since 2016, focusing on two main areas: architectural frameworks and resource management strategies for V-RAN.
Numerous frameworks have emerged that highlight the advantages of virtualization. For instance:
Research has also concentrated on optimizing radio resource management in V-RAN:
Several studies have proposed energy-saving strategies:
Research has evaluated fronthaul latency requirements within 5G networks, revealing specific latency budgets influenced by virtualization technologies and deployment configurations.
Security concerns regarding BBU hoteling schemes in Access Cloud Networks were addressed by proposing various protection strategies to ensure survivability against processing failures.
Architecture of Open RAN
The Open Radio Access Network (Open RAN or ORAN) represents a significant advancement in mobile network architecture, building upon the principles of Virtual RAN by emphasizing openness and intelligence. This initiative is led by the O-RAN Alliance, a global consortium that includes mobile network operators, manufacturers, suppliers, and academic institutions collaborating within the telecommunications industry.
Open RAN aims to enhance interoperability and standardization within the Radio Access Network (RAN) while integrating software components from various vendors. This approach supports multi-vendor V-RAN deployments, fostering a competitive ecosystem. The use of open-source software and hardware designs can accelerate innovation and commercialization while ensuring compatibility with existing legacy systems. As wireless networks, particularly 5G and beyond, become more complex due to increased densification and application demands, operators are encouraged to adopt self-organizing strategies.
Incorporating Machine Learning (ML) and Artificial Intelligence (AI) into Open RAN can automate network operations and reduce operational costs. The telecommunications sector views the establishment of an open virtualized RAN as crucial for unlocking the full potential of 5G.
The primary components of the Open RAN reference architecture include:
The RAN Evolution from C-RAN to Open RAN.
The RAN virtualization process unfolds in two distinct phases. The first phase abstracts each physical resource within the RAN, including BBU storage and the BBU Central Processing Unit (CPU). The second phase virtualizes the various mechanisms of the radio access network.
The abstraction of RAN physical resources is a procedure that enables the creation of one or more virtual replicas of these resources, encompassing all their components such as Operating System (OS), storage, CPU, network functions, etc. Each generated virtual replica is termed a Virtual Machine (VM) or domain. The abstraction of RAN physical resources can be executed using servers, personal computers, or any electronic devices. The pivotal tool responsible for realizing the hardware abstraction is the Hypervisor.
The Hypervisor emulates all the necessary hardware components for running software. In the market, there exist two categories of Hypervisors: Type 1 Hypervisors and Type 2 Hypervisors.
Type 1 Hypervisors, also known as native or ‘‘bare metal’’ hypervisors, are installed directly on the hardware without any intermediary OS. This positioning grants Type 1 Hypervisors complete and privileged control over the hardware resources, which are then abstracted and allocated to the Virtual Machines (VMs). Below are some recent Type 1 Hypervisors suitable for RAN virtualization projects:
VMware ESXi (formerly ESX) is a Type 1 Hypervisor developed by VMware. It was initially released in 2001, followed by a second version in 2010. Its kernel, called VMkernel, manages the created virtual environment and controls access to the underlying physical hardware. VMkernel also provides the conditions needed for all system processes to run smoothly within the virtual domain. Key processes running atop the VMkernel include:
ESXi also introduced a system storage layout that enables flexible partition creation and management.
Xen, a Type 1 Hypervisor originally developed in 2003 by the University of Cambridge computer laboratory, comprises several components that collaborate to deliver one or multiple abstracted environments known as ‘‘domains.’’ These components include the Xen Hypervisor, Domain 0 Guest (referred to as Dom0), and Domain U Guest (referred to as DomU). Xen utilizes the Borrowed Virtual Time (BVT) scheduling algorithm to ensure low-latency dispatch of a domain when an event occurs. Initial memory allocation for each domain is established at its creation, with memory zoned between domains to provide secure isolation.
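To make the scheduling idea concrete, the toy sketch below mimics the core of BVT: every domain accumulates virtual time as it runs, the runnable domain with the smallest effective virtual time is dispatched next, and a latency-sensitive domain can ‘‘warp’’ (borrow virtual time) to be dispatched sooner. The domain names, quantum, and warp values are purely illustrative and are not taken from Xen’s actual implementation.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Domain:
    effective_vt: float                                   # actual virtual time minus warp
    name: str = field(compare=False)
    actual_vt: float = field(default=0.0, compare=False)
    warp: float = field(default=0.0, compare=False)       # virtual time the domain may "borrow"
    weight: float = field(default=1.0, compare=False)

def schedule(domains, slices=10, quantum=5.0):
    """Dispatch the runnable domain with the smallest effective virtual time."""
    heap = list(domains)
    heapq.heapify(heap)
    for _ in range(slices):
        dom = heapq.heappop(heap)                  # lowest effective virtual time runs next
        dom.actual_vt += quantum / dom.weight      # higher weight => virtual time advances slower
        dom.effective_vt = dom.actual_vt - dom.warp
        print(f"run {dom.name:8s} avt={dom.actual_vt:6.1f} evt={dom.effective_vt:6.1f}")
        heapq.heappush(heap, dom)

# A latency-sensitive domain "warps" backwards in virtual time so it is
# dispatched quickly after an event, at the cost of its future CPU share.
schedule([Domain(0.0, "Dom0"), Domain(-15.0, "DomU-io", warp=15.0), Domain(0.0, "DomU-cpu")])
```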
Introduced in 2006 by Qumranet (later acquired by Red Hat), the KVM hypervisor is an open-source virtualization technology that turns a Linux system into a hypervisor. This transformation occurs through a minimally intrusive method that integrates KVM into the Linux kernel as a loadable module providing the abstraction capability, with each virtual machine (VM) treated as a Linux process managed by the standard Linux kernel. The Linux kernel uses the Completely Fair Scheduler (CFS) [42], a sophisticated process scheduler, for process planning; the CFS scheduler has been extended with a Cgroups (control groups) resource manager that enables resource sharing among processes. KVM relies on Linux memory management services, maintaining VM memory like that of any other Linux process and enabling memory page sharing via the Kernel Same-page Merging (KSM) feature.
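Because KVM exposes each VM as an ordinary Linux process, VMs are commonly managed through the libvirt API. Below is a minimal sketch assuming the libvirt Python bindings, a running libvirtd, and a pre-defined domain named vbbu-01 (a hypothetical name used only for illustration).

```python
import libvirt  # pip install libvirt-python

# Connect to the local KVM/QEMU hypervisor (assumes libvirtd is running).
conn = libvirt.open("qemu:///system")

# "vbbu-01" is a hypothetical domain name used here for illustration.
dom = conn.lookupByName("vbbu-01")

if not dom.isActive():
    dom.create()  # boot the VM (equivalent to 'virsh start vbbu-01')

# Query basic runtime info: state, max memory (KiB), memory in use, vCPUs, CPU time (ns).
state, max_mem, mem, vcpus, cpu_time = dom.info()
print(f"state={state} vcpus={vcpus} memory={mem} KiB")

# Hot-adjust the memory balloon (requires balloon driver support in the guest),
# mirroring how a BBU pool could reclaim unused RAM from a virtual BBU.
dom.setMemory(max_mem // 2)

conn.close()
```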
Released in 2008 by Microsoft, Hyper-V simplifies communication between hardware, the operating system, and virtual machines (VMs). Hyper-V offers features such as live migration, hosted OS isolation, security, reliability, and performance improvements. For each VM, administrators can manually adjust CPU settings to align with business or IT operator requirements, reserving a portion of the server’s total processing resources for a VM or limiting how much a single VM can consume on a host. Hyper-V employs two memory management and optimization techniques. The first is Dynamic Memory (DM), in which the Dynamic Memory Virtual Service Consumer (DM VSC) monitors guest OS memory usage. The second is Smart Paging, which uses temporary storage as a memory cache to ensure adequate RAM allocation for VMs.
| Hypervisors | Commercial Solutions | Main Clients | Selling Arguments | Example of Customers |
|---|---|---|---|---|
| ESXi | vSphere | Large companies | Market leader, reliability, innovation | Private companies excluding cloud providers |
| Hyper-V | Hyper-V | Medium and large companies | Scalability, flexibility, powerful with Windows VMs, in strong progress | Private companies excluding cloud providers, Microsoft Azure |
| KVM | Proxmox VE, Red Hat Virtualization (RHV) | Public cloud companies | Very flexible, open source, in strong progress | Google Cloud, Joyent, NextGen |
| Xen | Oracle VM, Citrix XenServer | Public cloud companies | Open source, leader among cloud players | AWS, CloudStack, Rackspace, Linode, Oracle, Citrix |
Type 2 Hypervisors: Unlike Type 1 Hypervisors, which are installed directly on hardware, Type 2 Hypervisors are software that runs within an operating system (OS). They are installed like any other software application, which makes them easier to set up but leaves them without control of or priority over hardware resources. Consequently, their performance may be limited and their virtual environments less stable. Type 2 Hypervisors are typically used for small-scale testing or research and are popular in academia due to their flexibility and ease of use. The two most widely used Type 2 Hypervisors are:
VMware Workstation Pro and Player are Type 2 hypervisors developed by VMware, compatible with x64 versions of Windows and Linux OSs. They allow the creation and simultaneous operation of VMs on a single physical machine, each capable of running its own OS, including various versions of Windows, Linux, BSD, and MS-DOS. VMware Workstation Player is free, while VMware Workstation Pro requires a license.
VirtualBox, maintained by Oracle since 2010, is a powerful x86 and AMD64/Intel64 virtualization product suitable for business and individual testing purposes. It is available as open-source software under the GNU General Public License (GPL). VirtualBox runs on Windows, Linux, Macintosh, and Solaris hosts and supports a wide range of guest operating systems, including various Windows and Linux versions.
| Type 2 Hypervisors | Price | Operating Systems | Limitations | Formats Accepted | Performance |
|---|---|---|---|---|---|
| Workstation Player | Free | Windows, Linux | Unable to launch multiple VMs at the same time | Disks: vmdk (native), vdi, vhd; VM config: ova, ovf | More efficient than VirtualBox for Windows VMs |
| Workstation Pro | Paid (30-day trial version) | Windows, Linux | | Disks: vmdk (native), vdi, vhd; VM config: ova, ovf | More efficient than VirtualBox for Windows VMs |
| VirtualBox | Free | Windows, Linux, macOS | | Disks: vmdk (native), vdi, vhd; VM config: ova, ovf | Flexible, but slower |
The creation of a Virtual Machine (VM) by the hypervisor provides a virtual entity container, each with its own setup algorithm. This section describes the VM setup process using VMware Workstation Pro, chosen for its user-friendly interface and readily available trial version. The algorithm outlined in the table below is applicable across hardware platforms and allows the creation of VMs with access to hardware features such as the CPU, memory, and hard disk. However, additional steps are necessary to virtualize the mechanisms and protocols of the Baseband Units (BBUs) within the newly created VMs, ensuring complete control over the BBU operating system.
General set-up algorithm of a virtual entity container using VMware Workstation Pro.
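As a rough illustration of that set-up flow, the sketch below writes a minimal .vmx configuration and powers the VM on with VMware’s vmrun and vmware-vdiskmanager command-line tools. The paths, guest OS type, and resource sizes are assumptions chosen for the example, not values taken from the table above.

```python
import subprocess
from pathlib import Path

# Hypothetical location and sizing for the new VM; adjust to your environment.
vm_dir = Path.home() / "vms" / "vbbu-vm"
vm_dir.mkdir(parents=True, exist_ok=True)
vmx_path = vm_dir / "vbbu-vm.vmx"

# Minimal .vmx describing CPU, memory, disk, and NAT networking for the guest.
vmx = """.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "19"
displayName = "vbbu-vm"
guestOS = "ubuntu-64"
numvcpus = "4"
memsize = "8192"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "vbbu-vm.vmdk"
ethernet0.present = "TRUE"
ethernet0.connectionType = "nat"
"""
vmx_path.write_text(vmx)

# Create the virtual disk (vmware-vdiskmanager ships with Workstation),
# then power the VM on headlessly with vmrun.
subprocess.run(["vmware-vdiskmanager", "-c", "-s", "40GB", "-a", "lsilogic",
                "-t", "0", str(vm_dir / "vbbu-vm.vmdk")], check=True)
subprocess.run(["vmrun", "-T", "ws", "start", str(vmx_path), "nogui"], check=True)
```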
The present operation is achievable in two steps:
1) installation of a standards-compliant 3GPP 4G or 5G RAN protocol stack platform on the created VM;
2) building and running the BBU’s mechanisms on this VM.
We distinguish four open-source 3GPP 4G or 5G RAN stack platforms: OpenAirInterface, srsLTE, free5GRAN, and UERANSIM.
OpenAirInterface: OpenAirInterface (OAI) serves as a pivotal platform tailored for 4G and 5G mobile telecommunications systems. Originally developed at Eurecom, OAI’s software is an open-source, standards-compliant implementation of a 3GPP 4G LTE stack that runs on a commodity x86 CPU and a USRP radio device. Stewardship of OAI has since transitioned to the OpenAirInterface Software Alliance (OSA), a French non-profit organization that fosters open-source software and tools for 4G and 5G wireless research. The OAI software covers the entire protocol stack defined in the 3GPP LTE standards, including Releases 8 and 9 and, partially, Releases 10 and 11. It also implements the E-UTRAN (both the Evolved Node B (eNB) and the User Equipment (UE)) and the EPC (Mobility Management Entity (MME), Serving Gateway (SGW), Packet Data Network Gateway (PGW), and Home Subscriber Server (HSS)).
Installation algorithm of the OpenAirInterface platform.
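The listing below sketches that installation sequence in script form, assuming an Ubuntu VM with git installed. The repository URL and build_oai flags reflect the public OAI project but can differ between releases, so treat them as indicative rather than authoritative.

```python
import subprocess

def run(cmd, cwd=None):
    """Run a shell command and fail loudly, mimicking manual installation steps."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True, cwd=cwd)

# 1) Fetch the OpenAirInterface RAN source tree (public Eurecom GitLab).
run("git clone https://gitlab.eurecom.fr/oai/openairinterface5g.git")

# 2) Install build dependencies (-I), then build the eNB with USRP support.
#    Flags follow the project's build_oai helper; exact options vary by release.
build_dir = "openairinterface5g/cmake_targets"
run("./build_oai -I", cwd=build_dir)
run("./build_oai -w USRP --eNB", cwd=build_dir)

# 3) The resulting softmodem binary is then launched with a configuration file
#    describing the cell (band, PRBs, MME address, etc.).
run("ls ran_build/build", cwd=build_dir)
```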
srsLTE: srsLTE, developed by Software Radio Systems (SRS) in Ireland, is an open-source LTE SDR platform. It includes a full UE protocol stack (srsUE) with a physical-layer downlink transceiver, a full eNB protocol stack (srsENB), and an EPC implementation (srsEPC) containing the core network functions. A third-party eNB and EPC can also be integrated to establish an LTE SDR system.
free5GRAN, an open-source 5G RAN stack, is equipped with a receiver capable of decoding MIB and SIB data. It functions as a cell scanner and operates in standalone (SA) mode.
UERANSIM, developed by free5GC, is an open-source project focused on 5th generation (5G) mobile core networks. It includes both a UE and a 5G RAN (gNodeB) implementation, serving as a 5G mobile phone and a base station for testing and studying the 5G core network and system.
The mechanisms of the BBU (Baseband Unit) are largely consistent across the platforms mentioned earlier, except for those still at the conceptual stage. These mechanisms are organized into layers, each supported by various protocols. The specific mechanisms of OpenAirInterface BBUs are detailed below. In the OpenAirInterface platform, the BBU mechanisms are split into five layers [129]: the Physical (PHY) layer, the Medium Access Control (MAC) layer, the Radio Link Control (RLC) layer, the Packet Data Convergence Protocol (PDCP) layer, and the Radio Resource Control (RRC) layer.
BBU protocols and mechanisms implementation algorithm in OpenAirInterface.
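To make the layering concrete, the following minimal sketch models the five BBU layers as a simple processing pipeline. The class and method names are illustrative only and do not correspond to actual OpenAirInterface code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    """One BBU protocol layer; process() is a stand-in for its real mechanisms."""
    name: str

    def process(self, sdu: bytes) -> bytes:
        # Illustrative: prepend a fake header so the traversal is visible.
        return f"[{self.name}]".encode() + sdu

@dataclass
class VirtualBBU:
    """A virtual BBU as an ordered stack of layers (downlink direction)."""
    layers: List[Layer] = field(default_factory=lambda: [
        Layer("RRC"), Layer("PDCP"), Layer("RLC"), Layer("MAC"), Layer("PHY"),
    ])

    def send(self, payload: bytes) -> bytes:
        pdu = payload
        for layer in self.layers:        # RRC -> PDCP -> RLC -> MAC -> PHY
            pdu = layer.process(pdu)
        return pdu                       # what the RRH would transmit over the air

if __name__ == "__main__":
    print(VirtualBBU().send(b"user-data"))
```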
This section presents the problems that still stand in the way of a complete V-RAN, highlighting the obstacles to its large-scale realization by network operators.
In recent years, various telecommunications groups and vendors have introduced proposals for Radio Access Network (RAN) virtualization solutions. Here are some notable developments:
The Network Function Virtualization (NFV) principle, as defined by the European Telecommunications Standards Institute (ETSI), establishes a virtualization layer where Virtualized Network Functions (VNFs) can operate independently, providing greater flexibility in network management. VNFs can be grouped into Element Management Systems (EMS) to facilitate various services, including network slicing. This concept allows multiple virtual networks to be created within a single physical infrastructure, with each slice representing a unique virtualized instance that includes a dedicated set of resources and specific traffic management rules.
However, current LTE and 5G platforms often lack this flexibility, as Base Band Units (BBUs) are typically integrated stacks of functions without modularity. To overcome this limitation, future developments should focus on enabling access to modular functions within BBUs or creating new platforms that allow VNFs to operate independently. This shift is essential for enhancing the adaptability and efficiency of network services in an increasingly complex telecommunications landscape.
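As a sketch of what such modularity implies, the snippet below models a network slice as a set of independently deployable VNFs, each with its own resource budget and traffic rules. All names and fields are hypothetical and merely illustrate the descriptor-style grouping that ETSI NFV envisions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VNF:
    """An independently deployable virtualized network function."""
    name: str
    vcpus: int
    memory_mb: int

@dataclass
class NetworkSlice:
    """One virtualized instance of the network with dedicated resources and rules."""
    slice_id: str
    service_type: str                      # e.g. "eMBB", "URLLC", "mMTC"
    vnfs: List[VNF] = field(default_factory=list)
    traffic_rules: Dict[str, str] = field(default_factory=dict)

    def total_resources(self) -> Dict[str, int]:
        return {
            "vcpus": sum(v.vcpus for v in self.vnfs),
            "memory_mb": sum(v.memory_mb for v in self.vnfs),
        }

# Hypothetical URLLC slice built from modular BBU functions instead of a monolithic stack.
urllc = NetworkSlice(
    slice_id="slice-urllc-01",
    service_type="URLLC",
    vnfs=[VNF("vPHY", 4, 8192), VNF("vMAC", 2, 4096), VNF("vPDCP", 1, 2048)],
    traffic_rules={"max_latency_ms": "1", "priority": "high"},
)
print(urllc.total_resources())
```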
The main objective of C-RAN and V-RAN was centralized control of the BBUs. This centralized control should be reproduced in virtualized RANs and performed by an orchestrator, which must ensure the responsibilities listed below (a minimal sketch follows the list):
• The control of the BBUs activity
• Material resources provisioning and control
• Scheduling
• Real-time and dynamic allocation of network resources
• Management of the Pool of BBUs: fault management, performance, capacity planning and optimization
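A minimal sketch of such an orchestrator, assuming hypothetical BBU and resource-pool interfaces, could look like the following. It is meant only to show where each of the responsibilities above would live, not to prescribe an implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BBUInstance:
    bbu_id: str
    vcpus: int = 2
    memory_mb: int = 4096
    active: bool = False

class Orchestrator:
    """Centralized controller for a pool of virtual BBUs (hypothetical interface)."""

    def __init__(self, pool_vcpus: int, pool_memory_mb: int) -> None:
        self.free = {"vcpus": pool_vcpus, "memory_mb": pool_memory_mb}
        self.pool: Dict[str, BBUInstance] = {}

    # Material resources provisioning and control
    def provision(self, bbu: BBUInstance) -> bool:
        if bbu.vcpus > self.free["vcpus"] or bbu.memory_mb > self.free["memory_mb"]:
            return False                      # capacity planning: reject or queue
        self.free["vcpus"] -= bbu.vcpus
        self.free["memory_mb"] -= bbu.memory_mb
        self.pool[bbu.bbu_id] = bbu
        return True

    # Control of the BBUs' activity
    def start(self, bbu_id: str) -> None:
        self.pool[bbu_id].active = True

    # Fault management: release resources of a failed BBU and re-provision it
    def handle_failure(self, bbu_id: str) -> None:
        failed = self.pool.pop(bbu_id)
        self.free["vcpus"] += failed.vcpus
        self.free["memory_mb"] += failed.memory_mb
        self.provision(BBUInstance(bbu_id, failed.vcpus, failed.memory_mb))

    # Real-time and dynamic allocation: schedule RRH demands onto active BBUs
    def schedule(self, rrh_demands: List[str]) -> Dict[str, str]:
        active = [b for b in self.pool.values() if b.active]
        return {rrh: active[i % len(active)].bbu_id
                for i, rrh in enumerate(rrh_demands)} if active else {}
```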
This principle is not applied in all of the aforementioned V-RAN solutions.
A significant challenge in achieving complete Radio Access Network (RAN) virtualization is the limited availability of virtual Base Band Units (BBUs) within the BBU pool. In a centralized RAN (C-RAN) architecture, the BBU pool acts as a core component located in a cloud or data center, consisting of multiple BBU nodes that provide substantial computational and storage resources. These nodes are responsible for processing resources and dynamically allocating them to Remote Radio Heads (RRHs) according to the network’s current demands. However, research has indicated that the maximum number of BBUs in these pools often caps at just two, which falls short of the requirements for an effective radio access network. This limitation arises from the complex architecture of LTE and 5G RAN stacks, which struggle to support even a single BBU efficiently, let alone multiple virtual instances.
To address this issue, enhancements were made to the OpenAirInterface (OAI) platform, allowing it to support an unlimited number of virtual BBUs and RRHs. Additionally, further optimization of CPU and memory usage was achieved through innovative machine learning techniques. This approach not only improves resource allocation but also enhances overall network efficiency by enabling better scalability and adaptability in response to varying network demands.
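As an illustration of demand-driven scaling in the BBU pool, the sketch below uses a simple moving-average predictor as a stand-in for the machine-learning techniques mentioned above; the per-BBU capacity and load figures are hypothetical.

```python
from collections import deque
from statistics import mean

class BBUPoolScaler:
    """Demand-driven scaling of virtual BBU instances (illustrative only).

    A moving-average predictor stands in for the machine-learning model
    mentioned in the text; a real deployment would plug in a trained model.
    """

    def __init__(self, users_per_bbu: int = 200, window: int = 6) -> None:
        self.users_per_bbu = users_per_bbu
        self.history = deque(maxlen=window)     # recent per-interval user counts
        self.active_bbus = 1

    def observe(self, connected_users: int) -> int:
        self.history.append(connected_users)
        predicted = mean(self.history)           # naive forecast of the next interval
        needed = max(1, -(-int(predicted) // self.users_per_bbu))  # ceiling division
        self.active_bbus = needed                # scale vBBU instances up or down
        return self.active_bbus

scaler = BBUPoolScaler()
for load in [120, 260, 410, 390, 700, 650]:      # hypothetical RRH load samples
    print(f"load={load:4d}  active vBBUs={scaler.observe(load)}")
```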
None of the aforementioned V-RAN solutions allows two BBUs to communicate or cooperate: the vBBUs are essentially isolated from one another. Consequently, no mechanisms for handover or interference management between the virtual BBUs are in place.