Institutional Repository
Technical University of Crete

Dynamic microservice placement strategies in Kubernetes

Kastrinakis Nikolaos

URI: http://purl.tuc.gr/dl/dias/91385F73-A707-4A6D-8670-A089C0F15A50
Year: 2024
Type of Item: Diploma Work
Bibliographic Citation: Nikolaos Kastrinakis, "Dynamic microservice placement strategies in Kubernetes", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2024. https://doi.org/10.26233/heallink.tuc.100593

Summary

As applications move to the cloud, microservices-based architectures are becoming increasingly popular. For all their advantages, microservices add complexity to application design and bring with them a new set of problems. To address these issues, Kubernetes was created to orchestrate containerized microservice applications by automating their deployment, scaling and management. Applications in Kubernetes are deployed in Clusters, sets of Nodes (VMs) managed by a centralized control plane. Microservices are placed in Pods, and those Pods are in turn placed on Nodes. The Kubernetes Scheduler is responsible for placing Pods on Nodes, using a set of filtering and scoring rules in an attempt to place them optimally in the Cluster. The placement of Pods on specific Nodes is a key problem in microservices architectures, as increased physical distance between the parts of an application can lead to latency issues. Furthermore, Node-to-Node traffic, also called egress traffic, is charged for by cloud providers. Therefore, placing Pods that communicate heavily with each other on the same Node is considered optimal: it minimizes egress traffic, decreases infrastructure costs and improves application performance by reducing response times. In previous works, this was done using graph clustering techniques to produce an optimal placement for the microservices of the application. While those results were strong, they do not translate well to a practical setting, where workloads shift and Pods are scaled. In this work, we address the problem with a more heuristic approach. We alter the functionality of the Default Scheduler by adding our own filtering and, more importantly, a communication-aware scoring method. Our aim is to improve all of the aforementioned areas with a method that aligns more closely with microservices architecture and Kubernetes design. An application was deployed on the Google Cloud Platform (GCP) using Kubernetes to perform our experiments. The results show that our communication-aware placement is far more effective at improving application performance, infrastructure cost and egress traffic than the Default Scheduler. Furthermore, while it performs slightly worse than MODSOFT-HP, a fuzzy graph clustering method, further experiments with more users and with scaling mechanisms connected to our method could prove it to be a better and more practical approach overall.
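To make the idea of communication-aware scoring concrete, the following is a minimal, self-contained sketch in Go. It is not the thesis implementation and does not use the real Kubernetes scheduler plugin API; the service names, node names and traffic figures are hypothetical, and the scoring rule shown (sum the traffic a pod exchanges with pods already running on each candidate node, prefer the highest) is one plausible way such a score could work.

// communication_aware_score.go
//
// Illustrative sketch only: scores candidate nodes for a pod by how much
// traffic that pod exchanges with pods already placed on each node, so that
// heavily communicating pods are colocated and egress traffic is reduced.
// All data below is hypothetical.
package main

import "fmt"

// traffic[a][b] holds the assumed bytes/sec exchanged between microservices a and b.
var traffic = map[string]map[string]float64{
	"frontend": {"cart": 120.0, "catalog": 40.0},
	"cart":     {"frontend": 120.0, "payments": 80.0},
	"catalog":  {"frontend": 40.0},
	"payments": {"cart": 80.0},
}

// placement records which node each already-scheduled pod runs on.
var placement = map[string]string{
	"frontend": "node-a",
	"catalog":  "node-a",
	"payments": "node-b",
}

// score returns, for a candidate node, the total traffic the pod being placed
// exchanges with pods already running on that node; higher means more of its
// communication would stay inside that node.
func score(pod, node string) float64 {
	total := 0.0
	for peer, n := range placement {
		if n == node {
			total += traffic[pod][peer] // missing pairs contribute 0
		}
	}
	return total
}

func main() {
	nodes := []string{"node-a", "node-b"}
	pod := "cart" // the pod about to be placed

	best, bestScore := "", -1.0
	for _, n := range nodes {
		s := score(pod, n)
		fmt.Printf("node %s: communication score %.1f\n", n, s)
		if s > bestScore {
			best, bestScore = n, s
		}
	}
	// frontend (120) is on node-a and payments (80) on node-b, so node-a wins.
	fmt.Printf("preferred node for %s: %s\n", pod, best)
}

In a real scheduler extension this score would be computed per candidate node during the scoring phase, after filtering has removed infeasible nodes, and combined with the other default scoring criteria.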
