

May 13, 2024 | 5 Minute Read

How To Monitor And Scale Your Kubernetes Workload

Hanush Kumar, Marketing Associate


Introduction

Adopting a cloud-native approach can be expensive when you scale your resources up and down manually. Users may also face frequent service failures when there aren't enough resources to handle the load.

Monitoring Kubernetes workloads and utilizing an autoscaling option can help solve these challenges.

Why Monitoring And Autoscaling Are Important For Efficient Scaling

Efficient scaling is the ability to handle both increases and decreases in the workload or demand on your applications. For instance, ecommerce websites see more traffic, sales, and order processing during festival seasons than on regular days.

During these periods, a dedicated person must manually allocate and adjust the necessary computing resources to keep order placement seamless. Without that intervention, apps begin to break down when more resources are needed to handle the workload, or the company pays excess for resources that were never scaled back down when no longer required.

On the flip side, businesses can monitor their computing resources, understand their workload capacity, and automate the scaling process accordingly.

What Is Kubernetes Monitoring?

Kubernetes monitoring involves reporting mechanisms that enable proactive cluster management. Through monitoring, developers can oversee the utilization of resources such as memory, storage, and CPU, thereby streamlining the management of containerized infrastructure.

As a result, businesses can track performance metrics, get real-time data, and take the necessary corrective actions to ensure maximum application uptime. Timely monitoring of Kubernetes resources helps optimize nodes, detect faulty pods, and inform scaling decisions.
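
As a quick sketch of what this looks like in practice: assuming the Metrics Server is installed in your cluster, kubectl can report current CPU and memory usage directly.

# Show CPU and memory usage per node (requires the Metrics Server)
kubectl top nodes

# Show CPU and memory usage per pod across all namespaces
kubectl top pods --all-namespaces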

Why Should You Monitor Kubernetes Workloads?

Simply put, monitoring Kubernetes workloads ensures the performance, security, and availability of apps running on Kubernetes clusters. Issues affecting the cluster and its applications are difficult to spot without monitoring.

Along with cost management, monitoring Kubernetes workloads has other benefits, including scalability, fault detection and troubleshooting, and performance and resource optimization.

What Are The Metrics To Be Monitored?

Two essential resources must be monitored: nodes and pods. But several metrics within and around them must also be considered.

  • Cluster nodes
    • There are two types of nodes: control plane and worker nodes. Worker nodes host your application, while the control plane controls the operations of the worker nodes. These nodes execute their tasks by running multiple resources, such as pods, services, Deployments, ReplicaSets, and more.
  • Resource utilization
    • This covers CPU usage, memory usage, disk utilization, network bandwidth, and more (see the sketch after this list).
  • Average Load
    • Load average shows the number of processes running on a CPU or queued to run. Troubleshooting becomes easier when you monitor load average alongside I/O wait and system CPU usage.
  • Scaling and availability
    • The number of pods determines the number of nodes required to run an application. Understanding Kubernetes' autoscaling capabilities therefore helps in configuring autoscalers for clusters.
  • Application metrics
    • These let businesses track metrics specific to applications and their business logic. For example, you can collect data on the app's conversions, user experience, latency and throughput, status and response times, and traffic.
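
For the resource utilization metrics above, a simple starting point is kubectl's node description, which lists a node's capacity, allocatable resources, and the requests and limits of the pods scheduled on it. The node name below is a placeholder.

# Inspect a node's capacity, allocatable resources, and per-pod requests/limits
kubectl describe node <node-name>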

What Is Kubernetes Autoscaling?

Autoscaling is one of Kubernetes' core value propositions: it helps businesses modify the number of pods in a deployment based on the metrics discussed above. Businesses can then optimize resources, improve app performance and availability, and maintain cost-efficiency by automatically adjusting compute resources based on usage patterns and workload demand.

Why Should You Use Kubernetes Autoscaling?

Kubernetes autoscaling allows apps to adjust their resources as demand rises or falls, helping businesses avoid the problem of overprovisioning and underprovisioning computing resources. As a result, businesses keep apps running optimally and control resource costs while handling varying demand.

Types Of Kubernetes Autoscaling

The process of autoscaling begins with the Metrics Server. It gathers pod and node metrics and exposes those data points through the Kubernetes Metrics API. The autoscaling mechanisms then fetch the necessary parameters and decide whether to scale the computing resources up or down.
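
To see the raw data points the Metrics Server exposes, you can query the Metrics API directly through the API server. This is a quick sketch, assuming the Metrics Server is running:

# Raw node metrics from the Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Raw pod metrics for the default namespace
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"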

Types of Autoscalers in Kubernetes:

  • Horizontal Pod Autoscaler (HPA)
  • Vertical Pod Autoscaler (VPA)

Horizontal Pod Autoscaler (HPA)

Horizontal pod autoscaling is the process of 'scaling out' your resources. It increases the number of compute instances based on CPU and memory usage rather than changing the resources allocated to a single container. In practice, it adjusts the number of pods in the ReplicaSet.

By default, the HPA checks the Metrics Server every fifteen seconds. HPA can thus make logical decisions based on the Metrics Server's inputs and increase or decrease the number of pods accordingly.
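
As a minimal sketch, an HPA equivalent to the manifest shown later in this post can be created imperatively. The deployment name here matches that example:

# Create an HPA targeting 70% average CPU, scaling between 1 and 5 replicas
kubectl autoscale deployment example-deployment --cpu-percent=70 --min=1 --max=5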


What does HPA do?

  1. HPA continuously communicates with the Metrics Server to understand the resource usage pattern.
  2. Once HPA receives the necessary data, it calculates the desired number of replicas.
  3. HPA then gears up to scale the application to that desired number.
  4. Finally, HPA adjusts the workload to the desired number of replicas.
  5. Because HPA engages continuously with the Metrics Server, the process repeats from step 1 whenever a metric change is observed (you can watch this loop with the commands below).
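
To observe this loop, assuming an HPA named example-hpa (as in the example later in this post), you can inspect its live status and recent scaling events:

# Current and desired replica counts, plus the observed metric values
kubectl get hpa example-hpa

# Detailed conditions and recent scaling events
kubectl describe hpa example-hpa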

Vertical Pod Autoscaler (VPA)

Vertical pod autoscaling is the process of 'scaling up' your resources. Unlike HPA, it increases or decreases the CPU and memory resources of an existing pod. You can install the VPA in your cluster, and it adjusts each pod's CPU and memory configuration to better align with actual usage.

There are two ways to configure your resources (see the sketch after this list):

  • Requests - The minimum amount of resources a container needs.
  • Limits - The maximum amount of resources a container can consume.
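
As a minimal sketch, both settings live under each container's resources stanza. The values here match the Deployment example later in this post:

resources:
  requests:   # minimum the container needs; used for scheduling decisions
    cpu: "250m"
    memory: "64Mi"
  limits:     # hard ceiling the container cannot exceed
    cpu: "500m"
    memory: "128Mi"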

Apart from this, VPA has three major components:

  • VPA Recommender
  • VPA Updater
  • VPA Admission Controller


VPA Recommender

The Recommender monitors resource utilization and calculates target values. Based on the metrics it observes, it suggests new resource requests, and limits are increased or decreased accordingly.


VPA Updater

The Updater evicts the pods that need a new resource allocation. When you configure the VPA with 'updateMode: Auto', the VPA Updater implements whatever the VPA Recommender suggests.
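
For reference, that setting lives in the VPA manifest's updatePolicy stanza (the full example appears later in this post). Besides Auto, the VPA project also supports the Off, Initial, and Recreate modes:

updatePolicy:
  updateMode: "Auto"   # other supported modes: "Off", "Initial", "Recreate"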


VPA Admission Controller

The Admission Controller uses a mutating webhook to set the new CPU and memory values whenever the VPA Updater evicts a pod and a replacement is started.

Since the only way to alter the resource requests of a running pod is to recreate it, the VPA Updater evicts pods whenever the VPA Recommender suggests new resource requests, and the Admission Controller writes those requests into the replacement pods.

Autoscaling Best Practices In Kubernetes

  • Make sure you've configured resource requests and limits on your pods. This way, the metrics collected from the pods are correct, and the HPA components can take the necessary actions.
  • Do not use HPA and VPA together on the same resource metrics; they are incompatible. You can combine them only if the HPA relies on custom or external metrics.
  • It is highly recommended that you go through the Kubernetes documentation to find autoscaling objects compatible with your control plane version. Kubernetes receives frequent updates, so it is essential to understand the changes made to autoscaling, identify the compatible objects, and then put them into action.
  • Prefer custom metrics over external metrics. The external metrics API poses a greater security risk than the custom metrics API, since it can potentially expose a much broader set of metrics and is harder to secure.


Example of Horizontal Pod Autoscaler

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1          # never scale below one replica
  maxReplicas: 5          # never scale above five replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # keep average CPU utilization around 70%


Example of Deployment


apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: nginx:latest
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"



Example of Vertical Pod Autoscaler


apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: example-deployment
  updatePolicy:
    updateMode: "Auto"
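
Note that the VPA components ship separately from core Kubernetes and must be installed in the cluster first. Assuming they are, and assuming the manifest above is saved as example-vpa.yaml (a hypothetical filename), you can apply it and inspect the Recommender's suggestions:

kubectl apply -f example-vpa.yaml

# Shows the Recommender's target and bounds for CPU and memory
kubectl describe vpa example-vpa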


Final Thoughts

Autoscaling Kubernetes resources is a complex task that requires a strategic approach to building high-performance apps. Axelerant, with its prowess in cloud-native solutions, understands the nuances of Kubernetes' autoscaling concepts.

Our experts have deep knowledge of both horizontal and vertical pod autoscalers and can tailor seamless integrations to your business and application needs.

Schedule a meeting with our experts to avoid manual resource management, which can hinder your scalability and negatively affect your business. Let us empower your infrastructure to adapt dynamically, ensuring your apps thrive in challenging scenarios.

About the Author

Bassam Ismail, Director of Digital Engineering

Away from work, he likes cooking with his wife, reading comic strips, or playing around with programming languages for fun.



Hanush Kumar, Marketing Associate

Hanush finds joy in YouTube content on automobiles and smartphones, prefers watching thrillers, and enjoys movie directors' interviews where they give out book recommendations. His essential life values? Positivity, continuous learning, self-respect, and integrity.

