Network virtualization (NV) is a crucial enabler for the growing demands of modern Internet technology. It offers benefits such as isolation, improved physical network (PN) utilization, and security [1, 16]. By leveraging NV, service providers (SPs) can logically partition PN resources into independent virtual data center requests (VDCRs), allowing for more efficient management of network resources. For instance, Fig. 1 shows the VDCR of a sample real-time application such as website hosting, online gaming, or video streaming for geographically distributed users [10]. In Fig. 1, the VDCR comprises four virtual machines (VMs) and four interconnecting virtual links (VLs). The numerical value 4 associated with VM \(v_{1,1}\) represents its resource demand in terms of computational resource blocks (CRBs), where one CRB unit equates to one CPU core and 512 MB of RAM [16, 18]. Similarly, the values associated with the VLs represent their minimum communication bandwidth demands. One of the primary challenges in NV is allocating the required physical resources to the VDCR components, i.e., the VMs and VLs. This process is known as virtual data center embedding (VDCE). It comprises two sub-problems: first, VM embedding, which assigns server resources to VMs, and second, VL embedding, which maps physical paths onto the VLs connecting the VMs. Both sub-problems are proven to be \(\mathbb{NP}\)-hard [7, 17].
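For concreteness, such a request can be represented as a small annotated graph of VM demands and VL demands. The following Python sketch is purely illustrative: only the CRB demand of \(v_{1,1}\) (4 CRBs) comes from the description above, and all other names and demand values are placeholder assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class VDCR:
    """Minimal VDCR model: VM CRB demands and VL minimum-bandwidth demands."""
    vm_demand: dict = field(default_factory=dict)   # VM name -> CRBs (1 CRB = 1 CPU core + 512 MB RAM)
    vl_demand: dict = field(default_factory=dict)   # (VM, VM) pair -> minimum bandwidth demand

# Placeholder instance of the four-VM, four-VL request in Fig. 1; only the
# demand of v1_1 (4 CRBs) is taken from the text, the rest are assumed values.
request = VDCR(
    vm_demand={"v1_1": 4, "v1_2": 2, "v1_3": 3, "v1_4": 1},
    vl_demand={("v1_1", "v1_2"): 10, ("v1_2", "v1_3"): 20,
               ("v1_3", "v1_4"): 10, ("v1_4", "v1_1"): 15},
)
```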
To address this problem, many recent VDCE approaches focus on objectives such as improving the revenue-to-cost ratio [16], increasing the acceptance ratio [16], minimizing embedding costs [14], and minimizing energy consumption [8, 19]. However, these approaches overlook the importance of effective load distribution for achieving energy minimization in data centers (DCs), which is essential for modern Internet technology. In this regard, a few research works have addressed this objective of the VDCE problem. Fischer et al. in [4] introduced a strategy that maps multiple VMs onto the same physical server, preferring servers with lower power consumption and selecting energy-efficient paths. However, it fails to account for resource load dependency and scalability under overloaded conditions, leading to poor resource utilization and Quality of Service (QoS). To address this limitation, Rodriguez et al. in [13] proposed power-on-demand and live migration to minimize energy consumption while ensuring QoS and balancing the network load. However, it incurs high execution time and computational overhead due to frequent migrations. Further, Zhang et al. in [19] developed a VDCE strategy that improves energy efficiency by leveraging Gaussian distributions and diurnal traffic patterns. Although it achieves energy savings, it suffers from modeling complexity and longer execution times on larger networks. Lin et al. in [8] proposed a VDCE approach using an Integer Linear Programming (ILP) formulation to minimize costs; however, it is computationally complex and limited to smaller scenarios. Later, Pham et al. in [12] introduced a congestion-aware and energy-aware embedding strategy based on a weighted constraint method. It minimizes energy by putting inactive servers to sleep and mitigates congestion by dispersing traffic across multiple paths. However, its fixed congestion ratio introduces inefficiencies and degrades performance. On the other hand, the authors in [5] presented an embedding strategy that combines spectral clustering with field theory. This model enhances network performance but suffers from clustering complexity, which leads to underutilized resources. Additionally, the authors of [3] claim that DC servers often operate at only \(15{-}20\%\) of CPU capacity yet consume up to \(70\%\) of their peak energy when idle, leading to inefficiency and higher operational costs. Further, Amazon reports that \(42\%\) of its operational costs are due to DC energy use [15]. Globally, ICT infrastructure consumes \(10\%\) of the world’s energy, with US DCs alone consuming \(1.4\%\) of national electricity in 2010 and \(1.8\%\) in 2014. IT energy consumption could reach \(13\%\) by 2030, with DC electricity usage growing by \(15{-}20\%\) annually [11].
From the above literature, the following limitations still exist: (i) limited scalability, (ii) higher energy consumption due to poor embedding mechanisms, (iii) computationally complex models, (iv) increased execution time, and (v) lack of effective load distribution, leading to poor PN utilization. To tackle these limitations, this work introduces a greedy, two-stage heuristic VDCE approach called LitE. The key contributions of this work are as follows:
(1) This work introduces a framework called LitE for the VDCE problem. It generates a spine-leaf topology-based PN, typically called a full-meshed Clos architecture [16]. The proposed work aims to minimize the overall energy consumption of the DC through effective load balancing and thereby improve the overall performance of the network. (2) LitE offers an efficient resource management (ERM) component that evaluates the VM embedding benefit of each server by considering server utilization, server overloading probability, and server energy consumption. Using this embedding data, VM embedding is carried out, followed by VL embedding using Dijkstra’s shortest path algorithm; a minimal sketch of this two-stage pipeline follows this list.
(3) To evaluate the effectiveness of LitE, we compare it against three state-of-the-art VDCE strategies: (i) Congestion-Aware, Energy-Aware Virtual Network Embedding (CEVNE) [12], (ii) Dynamic Region of Interest (DROI) [5], and (iii) the First Fit algorithm. Simulation results show that LitE improves the overall energy efficiency of the DC, reducing energy consumption by up to \(15\%\) compared to the baselines, while improving PN resource utilization through load balancing.
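To make the two-stage pipeline of contribution (2) concrete, the sketch below illustrates one possible greedy realization: each VM is placed on the server with the highest benefit score (here assumed to combine utilization, overload probability, and energy with placeholder weights), and each VL is then routed over a bandwidth-feasible shortest path using Dijkstra’s algorithm. The scoring formula, thresholds, weights, and function names are illustrative assumptions, not LitE’s actual ERM definitions.

```python
import heapq

# Illustrative two-stage VDCE sketch (not LitE's actual ERM formulation).
# Stage 1: greedy VM embedding driven by a server "benefit" score.
# Stage 2: VL embedding over bandwidth-feasible shortest paths (Dijkstra).

def benefit(server, demand, w1=1.0, w2=1.0, w3=1.0):
    """Hypothetical benefit of placing `demand` CRBs on `server` (higher is better)."""
    if demand > server["free_crb"]:
        return float("-inf")                              # infeasible placement
    util_after = 1.0 - (server["free_crb"] - demand) / server["cap_crb"]
    overload_prob = max(0.0, util_after - 0.8)            # assumed overload threshold
    energy = server["idle_power"] + server["dyn_power"] * util_after
    return -(w1 * util_after + w2 * overload_prob + w3 * energy / 100.0)

def embed_vms(servers, vm_demand):
    """Greedy VM embedding: largest-demand VMs first, best-scoring server each time."""
    placement = {}
    for vm, demand in sorted(vm_demand.items(), key=lambda kv: -kv[1]):
        best = max(servers, key=lambda s: benefit(servers[s], demand))
        if benefit(servers[best], demand) == float("-inf"):
            return None                                   # reject the VDCR
        servers[best]["free_crb"] -= demand
        placement[vm] = best
    return placement

def dijkstra(adj, src, dst, bw_needed):
    """Shortest path using only links with at least `bw_needed` residual bandwidth."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                      # stale heap entry
        if u == dst:
            break
        for v, (weight, bw) in adj.get(u, {}).items():
            if bw < bw_needed:
                continue                                  # not enough residual bandwidth
            nd = d + weight
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def embed_vls(adj, placement, vl_demand):
    """VL embedding: route each VL between the servers hosting its end VMs."""
    routes = {}
    for (a, b), bw in vl_demand.items():
        path = dijkstra(adj, placement[a], placement[b], bw)
        if path is None:
            return None                                   # reject the VDCR
        routes[(a, b)] = path
    return routes
```

In the actual LitE framework, the benefit score would be derived from the ERM component’s utilization, overloading-probability, and energy models over the spine-leaf PN, and residual CRB and bandwidth capacities would be updated as each request is accepted.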