Resource Adaptation Mechanisms in Virtualized Network Environments
Resource Management in Virtualized Environments
In recent years, rapid advances in virtualization technologies and cloud solutions have brought a significant revolution to the field of computing. Cloud computing has become a paramount platform for hosting enterprise systems and delivering a wide range of services and applications (e.g., IoT and 5G applications). Virtualization, a technique that creates virtual versions of physical resources, has paved the way for the development of cloud computing, enabling efficient use of resources and cost savings in data centers.
In networks based on Network Function Virtualization (NFV), an application is deployed as a service function chain (SFC), i.e., an interconnected set of virtual network functions (VNFs). These SFCs vary in size and topology, which introduces further challenges and constraints for their resource management. Figure 1 shows examples of SFC topologies. The VNFs themselves are hosted in the cloud, on virtual machines or containers.
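For illustration, an SFC with a simple linear topology can be represented as an ordered list of VNFs together with their resource demands. The sketch below is a minimal representation with hypothetical VNF names and demand values, not a model taken from our work:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VNF:
    """A single virtual network function with illustrative resource demands."""
    name: str
    cpu_demand: float   # e.g., vCPU cores
    mem_demand: float   # e.g., GiB of memory

@dataclass
class SFC:
    """A service function chain: an ordered, interconnected set of VNFs."""
    name: str
    vnfs: List[VNF] = field(default_factory=list)

    def total_cpu(self) -> float:
        return sum(v.cpu_demand for v in self.vnfs)

# A linear (chain) topology: firewall -> NAT -> load balancer
sfc = SFC("web-service", [
    VNF("firewall", cpu_demand=2, mem_demand=4),
    VNF("nat", cpu_demand=1, mem_demand=2),
    VNF("load_balancer", cpu_demand=2, mem_demand=4),
])
print(sfc.total_cpu())  # 5
```

Branching or meshed SFC topologies would need a graph rather than a list, which is one reason topology diversity complicates resource management.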
This NFV-Cloud paradigm shift has led to a growing need to propose and implement efficient resource management techniques. These techniques are mainly aimed at enhancing resource utilization in virtualized environments, minimizing energy consumption, and ensuring compliance with Service Level Agreement (SLA) requirements. An SLA is a contract between the service provider and the customer regarding the required Quality of Service.
Resource management or adaptation in virtualized environments is a broad research subject, encompassing various sub-topics, including resource consolidation, resource utilization prediction, resource scaling, and migration techniques. In our research, we addressed the aforementioned topics, proposing distinct techniques that yielded valuable insights and outcomes. The diagram in Figure 2 illustrates the interactions between the implemented resource management entities in our work.
Our Approaches: Adaptation Strategies from Novel Perspective
Our investigations into resource scaling and migration techniques have led to methods for dynamically adjusting resource allocation in virtualized environments based on workload demands. Dynamic resource management in NFV-cloud environments is challenging: workloads vary, SFCs are diverse, the appropriate mechanism must be selected among horizontal scaling (HS), vertical scaling (VS), and migration (M) to adapt VNF resources, and conflicting optimization goals must be balanced. While vertical scaling (increasing a VNF's computing capacity, such as its CPU allocation) is limited by server capacities, horizontal scaling (adding new VNF instances) and migration (moving VNFs from their current hosting servers to others) can result in high operating costs. Existing research often focuses on a single adaptation mechanism, neglecting the full range of possibilities. One of our key contributions is a formulation of the problem that considers all three adaptation strategies and their associated costs, and then determines the most appropriate strategy for each given scenario.
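To illustrate why the choice of mechanism matters, the sketch below picks the cheapest feasible mechanism for a single VNF whose demand has grown. The cost weights and the feasibility rule are purely illustrative assumptions, not the values or constraints used in our formulation:

```python
def choose_adaptation(extra_demand, host_free_cpu,
                      unit_cost_vs=1.0, cost_hs=5.0, cost_m=3.0):
    """Pick the cheapest feasible adaptation for one VNF whose demand grew.

    Illustrative rules:
    - VS is feasible only if the current host has enough spare CPU.
    - HS and M carry a fixed overhead (new instance / migration) on top of
      the extra capacity that must be provisioned.
    """
    options = {}
    if extra_demand <= host_free_cpu:
        options["VS"] = unit_cost_vs * extra_demand
    options["HS"] = cost_hs + unit_cost_vs * extra_demand
    options["M"] = cost_m + unit_cost_vs * extra_demand
    return min(options, key=options.get)

print(choose_adaptation(extra_demand=1, host_free_cpu=4))  # VS: host absorbs it
print(choose_adaptation(extra_demand=3, host_free_cpu=1))  # M: VS infeasible
```

In the second call, vertical scaling is infeasible because the host lacks spare CPU, so the cheaper of the remaining mechanisms (migration, under these illustrative weights) is selected.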
To solve the problem, we formulated an Integer Linear Programming (ILP) model, which provides an exact solution. Because solving the ILP is time-consuming, we also proposed several metaheuristic decision-making algorithms based on the non-dominated sorting genetic algorithm, chemical reaction optimization, and binary particle swarm optimization. These algorithms offer efficient solutions, enabling real-time resource adaptation decisions that manage SFC resources according to real-time demands and performance requirements. Moreover, we targeted multiple optimization objectives, including meeting SLA requirements, optimizing resource utilization, and reducing energy consumption. Our proposal also addresses the variability of SFCs by considering different SFC sizes and topologies. Experimental results demonstrated the effectiveness of the metaheuristic techniques in reducing SLA latency while approximating the optimal solutions in terms of resource utilization and energy consumption.
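To give a flavor of the ILP formulation, the following is a minimal sketch, not our actual model, written with the open-source PuLP solver: each VNF must receive exactly one action (VS, HS, or M), vertical scaling is bounded by a shared server's spare CPU, and the total adaptation cost is minimized. All VNF names, costs, and capacities are illustrative assumptions:

```python
import pulp

vnfs = ["vnf1", "vnf2", "vnf3"]
actions = ["VS", "HS", "M"]
cost = {("vnf1", "VS"): 1, ("vnf1", "HS"): 6, ("vnf1", "M"): 4,
        ("vnf2", "VS"): 2, ("vnf2", "HS"): 5, ("vnf2", "M"): 3,
        ("vnf3", "VS"): 3, ("vnf3", "HS"): 7, ("vnf3", "M"): 5}
vs_cpu = {"vnf1": 1, "vnf2": 2, "vnf3": 2}   # CPU added on the host if VS is chosen
host_free_cpu = 3                            # spare CPU on the shared host

prob = pulp.LpProblem("vnf_adaptation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vnfs, actions), cat="Binary")

# Objective: minimize total adaptation cost
prob += pulp.lpSum(cost[v, a] * x[v][a] for v in vnfs for a in actions)

# Each VNF receives exactly one adaptation action
for v in vnfs:
    prob += pulp.lpSum(x[v][a] for a in actions) == 1

# Vertical scaling is bounded by the host's spare CPU
prob += pulp.lpSum(vs_cpu[v] * x[v]["VS"] for v in vnfs) <= host_free_cpu

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vnfs:
    chosen = next(a for a in actions if x[v][a].value() == 1)
    print(v, "->", chosen)
```

Unlike the greedy per-VNF choice sketched earlier, an ILP selects the actions jointly, so a shared capacity constraint can force a locally cheaper option to be given up for the globally optimal assignment; this is also what makes exact solving expensive and motivates the metaheuristics.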
Our Approaches: Predictive Resource Consolidation Techniques
In examining proactive resource reallocation approaches, we also explored the use of resource prediction techniques. We developed a multi-step workload prediction model called K-SVR, which combines a Kalman filter and support vector regression (SVR) to accurately predict the future trend of server CPU utilization. The primary objective of this part of the research is to overcome the limitations of existing approaches that rely solely on observed workload variations to adapt resources and make the related decisions. Such approaches often lead to unreliable resource adaptation decisions, resulting in energy waste, performance degradation, and SLA violations.
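As a rough illustration of the idea rather than our exact K-SVR pipeline, the sketch below smooths a noisy CPU-utilization trace with a simple one-dimensional Kalman filter and then trains an SVR on lagged values of the smoothed series to predict the next sample. The synthetic trace, filter variances, lag count, and SVR hyperparameters are all illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVR

def kalman_smooth(series, process_var=1e-3, meas_var=1e-1):
    """Very simple 1-D Kalman filter with a random-walk state model."""
    x, p = series[0], 1.0
    out = []
    for z in series:
        p += process_var                 # predict step
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Synthetic CPU-utilization trace in [0, 1]; a real trace (e.g., PlanetLab)
# would replace this.
rng = np.random.default_rng(0)
cpu = 0.5 + 0.3 * np.sin(np.linspace(0, 12, 300)) + 0.05 * rng.standard_normal(300)
smoothed = kalman_smooth(cpu)

# Lagged features: predict u(t) from the previous `lags` smoothed samples.
lags = 5
X = np.array([smoothed[i - lags:i] for i in range(lags, len(smoothed))])
y = smoothed[lags:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-50], y[:-50])
pred = model.predict(X[-50:])
print("mean absolute error on held-out samples:", np.abs(pred - y[-50:]).mean())
```

Multi-step prediction can then be obtained by feeding each predicted sample back into the lag window, which is one common way to extend a one-step regressor.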
Furthermore, our research delves into the domain of resource consolidation, shedding light on overload and underload detection techniques that accurately estimate server status and efficiently trigger reliable migration decisions. Overload detection (OD) and underload detection (UD) algorithms enable VNF migrations away from overloaded servers to meet SLA requirements, and migrations away from underloaded servers to save energy. The diagram in Figure 3 illustrates step by step how the resource consolidation approach works. By combining the K-SVR prediction model with this approach, we built a predictive resource consolidation framework that determines overloaded and underloaded servers dynamically by considering both current and near-future resource utilization. The main objective is to ensure reliable decision-making, avoiding unnecessary VM migrations and their associated costs.
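One simple way to combine current and predicted utilization in the detection step is to flag a server only when both measurements agree. The sketch below uses illustrative thresholds and this agreement rule as assumptions; it is not the tuned detection policy from our experiments:

```python
def server_status(current_util, predicted_util, upper=0.8, lower=0.2):
    """Classify a server using both current and predicted CPU utilization.

    Requiring current AND predicted utilization to agree is one way to avoid
    migrations triggered by short-lived spikes; thresholds are illustrative.
    """
    if current_util > upper and predicted_util > upper:
        return "overloaded"    # migrate some VNFs away to protect the SLA
    if current_util < lower and predicted_util < lower:
        return "underloaded"   # migrate everything off and power the host down
    return "normal"

print(server_status(0.85, 0.90))  # overloaded
print(server_status(0.85, 0.60))  # normal: spike not confirmed by the forecast
print(server_status(0.10, 0.15))  # underloaded
```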
Moreover, we implemented an alternative consolidation approach that replaces the K-SVR model with an autoregressive integrated moving average (ARIMA) prediction model. Simulations were conducted with real-world PlanetLab workloads on the CloudSim simulator. Compared to original and modified versions of existing algorithms (local regression, static threshold, mean absolute deviation, and interquartile range-based consolidation approaches) and to the ARIMA-based approach, the proposed consolidation technique exhibited a substantial reduction in SLA violations, VM migrations, and energy consumption.
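For reference, a minimal ARIMA-based forecast of a CPU-utilization history might look like the sketch below, using statsmodels with an illustrative (p, d, q) order and a synthetic trace standing in for the PlanetLab data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic utilization history; the model order (2, 1, 2) is illustrative
# and would normally be selected per trace.
rng = np.random.default_rng(1)
cpu_history = (0.5 + 0.3 * np.sin(np.linspace(0, 10, 200))
               + 0.05 * rng.standard_normal(200))

model = ARIMA(cpu_history, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=3)   # next three utilization samples
print("predicted utilization:", np.round(forecast, 3))
```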
Further Optimization Work
Lastly, building on the previous work, we optimized the proposed prediction model and consolidation technique by incorporating multi-resource aspects. The optimized versions forecast the use of several server resources, including CPU, memory, and bandwidth. Different applications may have different resource requirements, which makes considering multiple resource types necessary for accurate decision-making, as illustrated by the sketch at the end of this section. By covering a broader range of resources, the proposed approach becomes more versatile and applicable to a wider variety of applications and workloads. Overall, these research efforts have resulted in three journal papers (two published and one submitted), each representing a significant contribution to the field of resource adaptation and optimization in virtualized environments.
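As a final illustration, extending the detection logic to multiple resource types can be as simple as flagging a server when any single resource exceeds its threshold in both the current measurement and the forecast. The resource names, thresholds, and the any-resource policy below are assumptions for illustration, not the tuned policy from our experiments:

```python
def multi_resource_overloaded(current, predicted, thresholds):
    """Treat a server as overloaded if ANY resource type exceeds its
    threshold both now and in the forecast (an illustrative policy)."""
    return any(current[r] > thresholds[r] and predicted[r] > thresholds[r]
               for r in thresholds)

thresholds = {"cpu": 0.80, "mem": 0.85, "bw": 0.90}
current   = {"cpu": 0.60, "mem": 0.88, "bw": 0.40}
predicted = {"cpu": 0.65, "mem": 0.90, "bw": 0.45}
print(multi_resource_overloaded(current, predicted, thresholds))  # True (memory)
```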
Additional Information
For more information on this research, please read the following papers:
Awad, M., Kara, N., & Leivadeas, A. (2022). Utilization prediction-based VM consolidation approach. Journal of Parallel and Distributed Computing, 170, 24-38.
Awad, M., Kara, N., & Edstrom, C. (2022). SLO-aware dynamic self-adaptation of resources. Future Generation Computer Systems, 133, 266-280.