- G. Kontos, P. Soumplis, P. Kokkinos and E. Varvarigos, “Cloud-Native Applications’ Workload Placement over the Edge-Cloud Continuum”, In Proceedings of the 13th International Conference on Cloud Computing and Services Science – Volume 1: CLOSER, ISBN 978-989-758-650-7, SciTePress, pages 57-66, Prague, Czech Republic, 26-28 April 2023. DOI: 10.5220/0011850100003488. Link: https://www.scitepress.org/Papers/2023/118501/118501.pdf
- P. Soumplis, P. Kokkinos, E. Varvarigos, “Joint Fiber Wireless Resource Allocation to support the Cell Free operation”, IEEE International Conference on Communications (ICC), pp. 2565-2570, Rome, Italy, 28 May-1 June 2023. Link: https://ieeexplore.ieee.org/document/10279309
- P. Soumplis, P. Kokkinos and E. Varvarigos, “Efficient Resource Provisioning in Critical Infrastructures based on Multi-Agent Rollout enabled by Deep Q-Learning”, 18th International Symposium on Visual Computing (ISVC), Lake Tahoe, Nevada, USA, 16-18 October 2023. Link: https://dl.acm.org/doi/abs/10.1007/978-3-031-47969-4_17
- P. Soumplis, G. Kontos, A. Kretsis, P. Kokkinos, A. Nanos and E. Varvarigos, “Security-Aware Resource Allocation in the Edge-Cloud Continuum”, IEEE 12th International Conference on Cloud Networking (CloudNet 2023), New York, USA, 1-3 November 2023.
- G. Kontos, P. Soumplis and E. Varvarigos, “Optimization of Resource Deployment and Configuration in Hierarchical Edge Topologies”, IEEE Global Communications Conference (GLOBECOM) 2024, Cape Town, South Africa, 8-12 December 2024. Link: https://edas.info/p31420
- P. Soumplis, G. Kontos and E. Varvarigos, “Prioritized Multi-Criteria Optimization for Efficient Cloud-Native Application Resource Allocation”, submitted to IEEE International Conference on Communications (ICC) 2025. Link: https://icc2025.ieee-icc.org/
- P. Soumplis, G. Kontos, A. Kretsis, P. Makris, P. Kokkinos, N. Efthimiopoulos, K. Kontodimas, M. Filippou and E. Varvarigos, “MAESTRO – Autonomous Intent-based Integrated Fiber, Wireless Computing and Storage for 6G Networks”, submitted to the Springer Nature Journal of Network and Systems Management. Link: https://link.springer.com/journal/10922
- P. Kokkinakis, P. Soumplis and E. Varvarigos, “Risk-Aware Resource Allocation in Edge Computing Using Stochastic Forecasting”, submitted to IEEE International Conference on Communications (ICC) 2025. Link: https://icc2025.ieee-icc.org/
Publications Short Description
Paper #1 relates to MAESTRO Task 2.2 of WP2, as well as to WP3 (intent translation mechanisms). It proposes advanced mechanisms for automating the allocation of computing resources so as to optimize the serving of cloud-native applications in a layered edge-cloud continuum. We initially present the Mixed Integer Linear Programming (MILP) formulation of the problem. As its execution time can be prohibitively long for real-size problems, we also develop a fast heuristic algorithm. To efficiently exploit the performance–execution time trade-off, we employ a novel multi-agent Rollout scheme, among the simplest and most reliable Reinforcement Learning methods, which leverages the heuristic’s decisions to further optimize the final solution. We evaluate the mechanisms through extensive simulations under various inputs, which demonstrate the quality of the generated sub-optimal solutions.
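For illustration only, the following Python sketch captures the multi-agent Rollout idea described above under simplifying assumptions: each microservice decision is treated as an agent that tries every feasible node, completes the remaining placements with a greedy base heuristic, and commits to the choice whose simulated completion has the lowest total cost. The capacity model, the cost table and all names are hypothetical, not the paper’s exact formulation.

```python
def free_capacity(nodes, services, placement):
    """Remaining capacity per node, given the (partial) placement."""
    free = dict(nodes)
    for s, n in placement.items():
        free[n] -= services[s]
    return free

def greedy_completion(services, nodes, cost, placement):
    """Base heuristic: place every unplaced service on the cheapest node that still fits it."""
    placement = dict(placement)
    free = free_capacity(nodes, services, placement)
    for s in services:
        if s in placement:
            continue
        feasible = [n for n in nodes if free[n] >= services[s]]
        if not feasible:
            return None  # this partial placement cannot be completed
        best = min(feasible, key=lambda n: cost[s, n])
        placement[s] = best
        free[best] -= services[s]
    return placement

def rollout_placement(services, nodes, cost):
    """Sequential one-step lookahead (rollout) on top of the greedy base policy."""
    placement = {}
    for s in services:  # one "agent" decision per service
        free = free_capacity(nodes, services, placement)
        best_node, best_val = None, float("inf")
        for n in nodes:
            if free[n] < services[s]:
                continue
            completed = greedy_completion(services, nodes, cost, {**placement, s: n})
            if completed is None:
                continue
            val = sum(cost[x, m] for x, m in completed.items())
            if val < best_val:
                best_node, best_val = n, val
        if best_node is None:
            return None
        placement[s] = best_node
    return placement

# Toy instance: CPU demands per microservice, CPU capacities per node, placement costs.
services = {"ui": 2, "api": 3, "db": 4}
nodes = {"edge1": 4, "edge2": 4, "cloud": 10}
cost = {(s, n): c for s, n, c in [
    ("ui", "edge1", 1), ("ui", "edge2", 2), ("ui", "cloud", 5),
    ("api", "edge1", 2), ("api", "edge2", 1), ("api", "cloud", 4),
    ("db", "edge1", 6), ("db", "edge2", 6), ("db", "cloud", 2)]}
print(rollout_placement(services, nodes, cost))  # e.g. {'ui': 'edge1', 'api': 'edge2', 'db': 'cloud'}
```

In the paper, the base policy is the fast heuristic and the evaluated cost-to-go reflects the full MILP objective; the sketch only conveys the control flow of the rollout lookahead.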
Paper #2 relates to MAESTRO WP1 (MAESTRO architecture), as well as to Task 2.1 of WP2 (infrastructure dimensioning). It introduces a novel approach for resource allocation in the MAESTRO project’s integrated fiber-wireless networks. We address the complex challenge of managing resources across distributed Access Points (APs) that form cooperative clusters to meet User Equipment (UE) demands. Our work enables resource allocation over the MAESTRO converged infrastructure model, which incorporates Time and Wavelength Division Multiplexed Passive Optical Networks (TWDM-PON), massive Multiple-Input Multiple-Output (mMIMO) Base Stations, and Cell-Free (CF) technologies. We introduce an optimal Mixed Integer Linear Program (MILP) for the efficient allocation of both fiber and wireless resources. Recognizing the high complexity and lengthy execution times in real-size scenarios, we also present a multi-agent Rollout mechanism that effectively balances execution time with performance, providing a practical solution for real-world deployments. Our approach enhances spectral efficiency, optimizes the use of limited communication and processing resources, and paves the way for more efficient, scalable, and robust converged fiber-wireless infrastructures, crucial for the ever-growing demands of modern communication systems.
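As a rough illustration of the kind of joint fiber-wireless MILP involved (not the paper’s actual model), the sketch below uses the PuLP library (assumed available) to assign UEs to APs and APs to TWDM-PON wavelengths under capacity limits, minimizing the number of activated wavelengths; Cell-Free cooperative clusters and mMIMO details are omitted, and all numbers are toy values.

```python
import pulp

ue_demand = {"ue1": 60, "ue2": 80, "ue3": 40}     # Mbps per UE
ap_capacity = {"ap1": 100, "ap2": 100}            # wireless capacity per AP (Mbps)
wavelengths = ["w1", "w2"]
wl_capacity = 150                                 # fronthaul capacity per PON wavelength (Mbps)

prob = pulp.LpProblem("joint_fiwi_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(u, a) for u in ue_demand for a in ap_capacity], cat="Binary")
y = pulp.LpVariable.dicts("y", [(a, w) for a in ap_capacity for w in wavelengths], cat="Binary")
f = pulp.LpVariable.dicts("f", [(a, w) for a in ap_capacity for w in wavelengths], lowBound=0)
z = pulp.LpVariable.dicts("z", wavelengths, cat="Binary")

prob += pulp.lpSum(z[w] for w in wavelengths)     # objective: activate as few wavelengths as possible

for u in ue_demand:                               # every UE is served by exactly one AP
    prob += pulp.lpSum(x[u, a] for a in ap_capacity) == 1
for a in ap_capacity:
    load = pulp.lpSum(ue_demand[u] * x[u, a] for u in ue_demand)
    prob += load <= ap_capacity[a]                                # wireless capacity per AP
    prob += pulp.lpSum(y[a, w] for w in wavelengths) == 1         # each AP rides one wavelength
    prob += pulp.lpSum(f[a, w] for w in wavelengths) == load      # AP traffic carried on its wavelength
    for w in wavelengths:
        prob += f[a, w] <= ap_capacity[a] * y[a, w]               # traffic only on the assigned wavelength
for w in wavelengths:                                             # fronthaul capacity per wavelength
    prob += pulp.lpSum(f[a, w] for a in ap_capacity) <= wl_capacity * z[w]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "wavelengths used:", sum(int(z[w].value()) for w in wavelengths))
```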
Paper #3 relates to MAESTRO WP2 and WP3 work. The paper addresses the complex challenge of allocating resources to enable baseband processing in critical infrastructures that integrate IoT, AI, and Edge Computing. These infrastructures, essential to smart environments, face unique challenges in managing secure communication across different domains with diverse security and communication protocols. Our solution employs a Multi-Agent Deep Reinforcement Learning mechanism, combining multi-agent Rollout with Deep Q-Learning, to efficiently manage these complexities. The approach optimizes resource allocation by balancing the specific needs of the various applications against the broader objectives of the infrastructure. Our simulations demonstrate the effectiveness of this method in enhancing resource allocation efficiency, ensuring effective utilization of infrastructure resources while navigating security and communication challenges.
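A compact, hypothetical sketch of how Deep Q-Learning can plug into a rollout lookahead, in the spirit of the mechanism described (PyTorch assumed): a Q-network estimates the cost-to-go of the base policy, and each agent ranks its candidate actions by immediate cost plus that estimate. State and action encodings, the transition model and the cost function are placeholders, not the paper’s design.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small Q-network approximating the cost-to-go of the base policy."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

def rollout_action(qnet, state, candidate_actions, step_cost, next_state):
    """Pick the action minimizing immediate cost + estimated cost-to-go of the successor state."""
    best_a, best_val = None, float("inf")
    for a in candidate_actions:
        s_next = next_state(state, a)                     # deterministic transition model (placeholder)
        with torch.no_grad():
            cost_to_go = qnet(torch.as_tensor(s_next, dtype=torch.float32)).min().item()
        val = step_cost(state, a) + cost_to_go
        if val < best_val:
            best_a, best_val = a, val
    return best_a

# Toy usage: 4-dimensional state, 3 candidate nodes for the current request.
qnet = QNet(state_dim=4, n_actions=3)
pick = rollout_action(qnet, [0.2, 0.5, 0.1, 0.0], [0, 1, 2],
                      step_cost=lambda s, a: float(a),    # hypothetical immediate cost
                      next_state=lambda s, a: s)          # hypothetical transition
print("selected node:", pick)
```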
Paper #4 is closely related to MAESTRO WP2 and WP3 work and investigates a research issue similar to that of paper #1 above. It revolves around optimizing the deployment of cloud-native applications across the edge-cloud continuum. These applications, which are collections of interdependent services, are tailored to use edge resources for time-sensitive tasks and cloud resources for more extensive, less time-critical computations, in order to enable the baseband processing of the wireless part of the MAESTRO infrastructure. In both cases, infrastructure management is conducted through a hierarchical orchestration system, with each level focusing on a specific resource pool. In both papers #1 and #4, we confront the critical challenge of resource allocation in a distributed computing environment, balancing computational and network constraints alongside application-specific needs. The key distinction lies in paper #4’s emphasis on security: we integrate application-specific security requirements into the allocation strategy, adding a layer of complexity to the decision-making process. This calls for a nuanced approach that manages varying degrees of workload isolation, achieved through lightweight virtualization, and establishes different levels of security and trust, each demanding specific computational and storage resources. Extensive simulation experiments demonstrate the viability and effectiveness of the proposed mechanisms. They highlight the MAESTRO solutions’ capability to navigate the complex demands of resource allocation in the edge-cloud continuum, effectively managing conflicting objectives such as speed, efficiency, security, and scalability.
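To make the security dimension more concrete, the brief sketch below shows one plausible way (not necessarily the paper’s) that isolation levels and trust requirements can enter the allocation step: each isolation level carries a hypothetical resource overhead, and only nodes that meet the requested trust level and the inflated demand remain eligible for placement.

```python
# Hypothetical fractional CPU overheads per isolation level (container, microVM, full VM).
ISOLATION_OVERHEAD = {"container": 0.05, "microvm": 0.15, "vm": 0.30}

def eligible_nodes(service, nodes):
    """Return nodes that satisfy the service's trust level and its isolation-inflated demand."""
    demand = service["cpu"] * (1 + ISOLATION_OVERHEAD[service["isolation"]])
    return [n for n in nodes
            if n["trust_level"] >= service["min_trust"] and n["free_cpu"] >= demand]

service = {"cpu": 2.0, "isolation": "microvm", "min_trust": 2}
nodes = [{"name": "edge1", "trust_level": 1, "free_cpu": 4.0},
         {"name": "edge2", "trust_level": 2, "free_cpu": 2.5},
         {"name": "cloud", "trust_level": 3, "free_cpu": 16.0}]
print([n["name"] for n in eligible_nodes(service, nodes)])   # -> ['edge2', 'cloud']
```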
Paper #5 directly addresses MAESTRO WP2 (the network dimensioning and planning problem). Recognizing that a poorly informed edge-network creation process can lead to sub-optimal performance, underutilization of resources and unnecessary expenditures, this work targets the efficient deployment of edge networking and computing equipment in order to design a high-performance edge network from the ground up. It takes into account a pool of candidate locations for the deployment of equipment, including Base Stations, transportation hubs, dense urban spots such as malls and stadiums, regional DCs, Telecom facilities etc., each with an associated activation cost (rental/setup costs). The work also considers heterogeneous edge computing equipment, such as general-purpose low-power processors, multi-core processors, GPUs and hardware accelerators, each with its own computational capabilities, acquisition cost and energy consumption. The workload comprises a set of chain-structured tasks, which can describe either the functional split chain (RU-DU-CU/CP-CU/UP) or any application pipeline. The problem is initially formulated as an ILP. To tackle its complexity, a novel variation of the Rollout mechanism is developed. In the experimental section, Rollout’s efficiency in balancing computation time and performance is highlighted by comparing it with both the optimal solution and two other baseline methods. The study’s findings can guide network designers and operators in developing high-performance edge infrastructures according to traffic requirements, optimization preferences and budget limitations.
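For intuition, the following toy ILP written with PuLP (assumed available) mirrors the structure of the dimensioning problem: binary variables activate candidate locations, integer variables install equipment units, and assignment variables place the tasks, minimizing activation plus acquisition cost. Chained task dependencies, energy terms and the paper’s Rollout variation are omitted, and all values are illustrative.

```python
import pulp

locations = {"mall": 50, "stadium": 80, "regional_dc": 120}      # candidate site -> activation cost
equipment = {"cpu": (8, 20), "gpu": (32, 90)}                    # type -> (capacity per unit, unit cost)
tasks = {"du": 6, "cu_cp": 4, "cu_up": 10, "app": 20}            # task -> processing demand

prob = pulp.LpProblem("edge_dimensioning", pulp.LpMinimize)
open_l = pulp.LpVariable.dicts("open", locations, cat="Binary")
units = pulp.LpVariable.dicts("units", [(l, e) for l in locations for e in equipment],
                              lowBound=0, cat="Integer")
assign = pulp.LpVariable.dicts("assign",
                               [(t, l, e) for t in tasks for l in locations for e in equipment],
                               cat="Binary")

# Objective: site activation costs plus equipment acquisition costs.
prob += (pulp.lpSum(locations[l] * open_l[l] for l in locations)
         + pulp.lpSum(equipment[e][1] * units[l, e] for l in locations for e in equipment))

for t in tasks:                                                  # every task is hosted somewhere
    prob += pulp.lpSum(assign[t, l, e] for l in locations for e in equipment) == 1
for l in locations:
    for e in equipment:
        cap, _ = equipment[e]
        prob += (pulp.lpSum(tasks[t] * assign[t, l, e] for t in tasks)
                 <= cap * units[l, e])                           # enough installed capacity
        prob += units[l, e] <= 10 * open_l[l]                    # equipment only at activated sites

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "total cost:", pulp.value(prob.objective))
```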
Paper #6 relates directly to MAESTRO WP3 (intent translation and resource allocation). It considers a high-level description of the microservice intents, formulated as a hierarchical order (ranking) of preferences among different optimization metrics, along with a tolerance factor for each metric. The tolerance factor expresses the percentage deviation from the optimal solution for that metric that the service is willing to sacrifice in order to optimize the subsequent metrics in its hierarchy. A general, formula-agnostic mathematical model is presented, wherein an ILP problem is solved sequentially for each metric and the result of each previous optimization is incorporated as a constraint in the following one, relaxed according to the tolerance factor. To tackle the problem’s complexity, a novel heuristic based on the Deterministic Elimination by Aspects (DEBA) algorithm is presented. The experimental section considers three optimization criteria, namely service cost, experienced latency and energy consumption. The workload is organized according to the typical 5G use cases (eMBB, mMTC, uRLLC), following MAESTRO’s WP2 considerations. Experimental results showcase the efficacy of the proposed mechanism in enhancing both application performance and infrastructure utilization, underlining the effectiveness of a prioritized optimization approach in preserving the integrity of application-specific demands within the resource allocation process.
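The prioritized optimization loop can be sketched as follows (PuLP assumed; the tiny two-node placement model and all values are toy assumptions): metrics are optimized in their ranked order, and after each solve the achieved optimum, relaxed by the metric’s tolerance factor, is added as a constraint before the next metric is optimized.

```python
import pulp

# Toy per-request placement costs on two nodes for three metrics.
nodes = {"edge": {"cost": 5, "latency": 2.0, "energy": 4},
         "cloud": {"cost": 2, "latency": 2.4, "energy": 3}}
requests = ["r1", "r2", "r3"]

prob = pulp.LpProblem("prioritized_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(r, n) for r in requests for n in nodes], cat="Binary")
for r in requests:
    prob += pulp.lpSum(x[r, n] for n in nodes) == 1              # each request placed exactly once

def metric(name):
    """Total value of one metric over the whole placement."""
    return pulp.lpSum(nodes[n][name] * x[r, n] for r in requests for n in nodes)

# Ranked intents: latency first (10% tolerance), then cost (20%), then energy.
priorities = [("latency", 0.10), ("cost", 0.20), ("energy", 0.0)]

for name, tol in priorities:
    prob.setObjective(metric(name))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    best = pulp.value(prob.objective)
    prob += metric(name) <= best * (1 + tol)                     # freeze result, relaxed by tolerance
    print(f"{name}: optimum {best:.2f}, bound for next stages {best * (1 + tol):.2f}")
```

In this toy run the 10% latency tolerance is what allows one request to move to the cheaper node in the subsequent cost stage, illustrating how the tolerance factor trades a bounded loss on a higher-ranked metric for gains on lower-ranked ones.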
Paper #7 sums up the progress of MAESTRO, presenting the key technologies, architecture, telemetry and optimization techniques adopted throughout the project’s lifetime (all WPs). First, the motivation for MAESTRO is highlighted, along with the core concepts of the project (converged Fi-Wi CF networks, network slicing, SDN, functional splits etc.) and the respective advancements. The proposed architecture, along with an overview of the control and telemetry planes, is detailed in the next section. Finally, the work brings all of MAESTRO’s achievements together in a unified framework that showcases the end-to-end operation of MAESTRO, from UE intent submission and translation, to resource allocation (in both the networking and the computing domains), to the continuous monitoring and dynamic reconfiguration enabled by MAESTRO’s advanced telemetry. This work aims to bring MAESTRO to the spotlight in the field of beyond-5G networks, by detailing its innovative approach to converged Fi-Wi architectures and emphasizing the importance of efficient resource allocation in modern 5G and future 6G network ecosystems.
Paper #8 relates to MAESTRO’s WP3 (i.e., the dynamic resource allocation mechanisms under uncertainty of Task 3.2). It examines the placement and proactive migration of microservices across an edge-cloud distributed infrastructure, following MAESTRO’s proposed architecture. A stochastic workload forecasting and risk quantification model is developed, based on the well-known Black-Scholes equation used in finance. The model detects likely future CPU-demand spikes based on historical traffic. Leveraging its predictions, a speculative resource allocation mechanism performs dynamic load balancing, aiming to minimize migrations and relocations at runtime and thus enhance the application’s stability and performance. Experimental results showcase the efficacy of the model by comparing it against two baseline methods in terms of node utilization and capacity-violation events.
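The sketch below illustrates one way (under stated assumptions, not necessarily the paper’s exact model) to apply the Black-Scholes machinery to CPU-demand forecasting: demand is modeled as geometric Brownian motion whose drift and volatility are estimated from historical log-returns, and the spike risk is the closed-form probability that demand exceeds a capacity threshold within the forecast horizon. Parameter names, the sample series and the thresholding rule are illustrative.

```python
import math

def estimate_gbm_params(samples, dt=1.0):
    """Estimate drift and volatility from log-returns of a historical CPU-demand series."""
    rets = [math.log(samples[i + 1] / samples[i]) for i in range(len(samples) - 1)]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / max(len(rets) - 1, 1)
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma ** 2
    return mu, sigma

def spike_probability(current, threshold, mu, sigma, horizon):
    """P[demand(t + horizon) > threshold] under geometric Brownian motion."""
    if sigma == 0:
        return 1.0 if current * math.exp(mu * horizon) > threshold else 0.0
    d2 = (math.log(current / threshold) + (mu - 0.5 * sigma ** 2) * horizon) / (sigma * math.sqrt(horizon))
    return 0.5 * (1.0 + math.erf(d2 / math.sqrt(2.0)))   # standard normal CDF of d2

history = [2.0, 2.1, 2.0, 2.3, 2.2, 2.5, 2.4, 2.8]       # CPU cores used per monitoring interval
mu, sigma = estimate_gbm_params(history)
risk = spike_probability(current=history[-1], threshold=4.0, mu=mu, sigma=sigma, horizon=6)
print(f"probability of exceeding 4 cores within 6 intervals: {risk:.2f}")
# A placement policy could proactively migrate the microservice when this risk crosses a limit.
```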