The work with title "An Autonomous, Dynamic and Decentralised Minimum Latency Service Placement Method for Dynamic Workloads in Hybrid Fog-Edge Infrastructures" by Chamarousios Dimitrios is licensed under Creative Commons Attribution 4.0 International.
Bibliographic Citation
Dimitrios Chamarousios, "An Autonomous, Dynamic and Decentralised Minimum Latency Service Placement Method for Dynamic Workloads in Hybrid Fog-Edge Infrastructures", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2023.
https://doi.org/10.26233/heallink.tuc.97753
Modern applications built on state-of-the-art technologies, such as the Internet of Things and Smart Cities, depend heavily on Fog and Edge cloud infrastructures. For users to benefit from these applications properly, the underlying infrastructure must be able to meet their demands, delivering increased performance without the large communication delays inherent in conventional cloud architecture configurations. To serve user requests successfully, these cloud infrastructures need CPU cycles and memory for computation, storage resources, and network bandwidth. Furthermore, because resources at the edge and fog are scarce compared to cloud infrastructures and large data centers, application components must be placed under this restriction and in accordance with the time-varying nature of incoming requests. An additional constraint is ensuring stability, that is, how the infrastructure reacts to a possible machine failure during peak-hour load. We address the component/service placement problem, taking into consideration the CPU restrictions and the low-latency requirement, by proposing an autonomous, dynamic, decentralized service placement algorithm and a dynamic cloud infrastructure architecture capable of handling user requests while offering a low-latency experience to end users. Through extensive experiments on real cloud infrastructure and applications, we demonstrate that our proposal decreases latency compared to a static placement solution, while ensuring infrastructure stability.
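For illustration only, the sketch below shows one generic way such a placement decision could be formulated: a greedy heuristic that assigns each service component to the fog/edge node with the lowest estimated latency among the nodes that still have enough CPU capacity. All names, data structures, and the heuristic itself are assumptions made for this example and do not reproduce the decentralized algorithm proposed in the thesis.

```python
# Illustrative sketch only: a greedy latency-aware placement heuristic under
# CPU capacity constraints. Node names, service demands, and latencies are
# hypothetical and do NOT reproduce the algorithm described in the thesis.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_capacity: float   # remaining CPU on this node (e.g., millicores)
    latency_ms: float     # estimated latency from the users to this node

@dataclass
class Service:
    name: str
    cpu_demand: float     # CPU required by this service component

def greedy_place(services: list[Service], nodes: list[Node]) -> dict[str, str]:
    """Assign each service to the feasible node with the lowest latency."""
    placement: dict[str, str] = {}
    for svc in services:
        # Keep only nodes that still have enough CPU for this service.
        feasible = [n for n in nodes if n.cpu_capacity >= svc.cpu_demand]
        if not feasible:
            raise RuntimeError(f"No node can host {svc.name}")
        best = min(feasible, key=lambda n: n.latency_ms)
        best.cpu_capacity -= svc.cpu_demand   # reserve CPU on the chosen node
        placement[svc.name] = best.name
    return placement

if __name__ == "__main__":
    nodes = [Node("edge-1", 2000, 5.0), Node("fog-1", 4000, 15.0), Node("cloud", 16000, 80.0)]
    services = [Service("frontend", 1500), Service("analytics", 3000)]
    print(greedy_place(services, nodes))   # e.g. {'frontend': 'edge-1', 'analytics': 'fog-1'}
```

In a dynamic, decentralized setting such as the one the thesis targets, a heuristic of this kind would have to be re-evaluated as request rates and node availability change; the static, centralized sketch above is only meant to make the constraints (CPU capacity versus latency) concrete.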