Horizontal Pod Autoscaling (HPA)

Modern applications often face variable workloads. Traffic may spike during peak hours and drop during off-hours. Manually scaling Pods is inefficient and error-prone. Kubernetes provides Horizontal Pod Autoscaling (HPA) to automatically adjust the number of Pods in a Deployment, ReplicaSet, or StatefulSet based on observed metrics like CPU or memory usage.

What is HPA?

Horizontal Pod Autoscaler is a Kubernetes resource that monitors metrics and scales Pods horizontally (increasing or decreasing replicas). It ensures applications remain responsive while optimizing resource usage.

Key Features

  • Automatic Scaling: Adjusts Pod replicas based on metrics.
  • Metrics-Based: Uses CPU, memory, or custom metrics.
  • Integration: Requires the Metrics Server for CPU/memory metrics; custom metrics need an adapter such as the Prometheus Adapter.
  • Efficiency: Reduces costs by scaling down during low demand.
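The scaling decision behind these features boils down to one rule: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal Python sketch of that rule (simplified — the real controller also applies a tolerance band, min/max clamping, and stabilization windows):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Simplified HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# 3 Pods averaging 90% CPU against a 70% target -> scale up to 4
print(desired_replicas(3, 90, 70))  # 4
# 3 Pods averaging 35% CPU against a 70% target -> scale down to 2
print(desired_replicas(3, 35, 70))  # 2
```

Note how ceil rounds up: the controller prefers slightly over-provisioning to missing the utilization target.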

YAML Example: HPA

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Explanation: This HPA scales the webapp Deployment between 2 and 10 replicas, targeting 70% average CPU utilization.
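One prerequisite this manifest depends on: a Utilization-type target is a percentage of the Pod's CPU request, so the target Deployment must set resources.requests.cpu. A sketch of the relevant container fields (container name and image are illustrative):

```yaml
# Fragment of the webapp Deployment's Pod template. Without
# resources.requests.cpu, a Utilization-type HPA cannot compute a
# percentage and the metric shows as <unknown>.
containers:
- name: webapp            # container name assumed
  image: webapp:1.0       # image tag assumed
  resources:
    requests:
      cpu: 250m           # the 70% utilization target is measured against this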

Flowchart: HPA Workflow


   Metrics Server ---> Collects CPU/Memory usage
          |
          v
   HPA compares metrics ---> Desired replicas calculated
          |
          v
   Controller updates Deployment ---> Pods scaled up/down
  

Real-Time Example

In an e-commerce platform:

  • Peak Hours: HPA scales the frontend Pods from 3 to 10 to handle traffic spikes.
  • Off-Hours: HPA scales down to 2 Pods, saving resources.
  • Outcome: Customers experience consistent performance without over-provisioning.

Common Mistakes

  • Not deploying Metrics Server, causing HPA to fail.
  • Setting minReplicas too low, leaving too little headroom during sudden spikes while new Pods start.
  • Ignoring memory metrics, focusing only on CPU.
  • Not testing scaling behavior under load.

Interview Notes

Q1: How does HPA differ from Vertical Pod Autoscaler?

Answer: HPA scales Pods horizontally (replicas), while Vertical Pod Autoscaler adjusts resource requests/limits for individual Pods.

Q2: What metrics can HPA use?

Answer: CPU utilization, memory usage, and custom or external metrics exposed through a metrics adapter (for example, the Prometheus Adapter).
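To make the answer concrete, here is a hedged sketch of additional entries for the .spec.metrics list of an autoscaling/v2 HPA — the memory entry works with the Metrics Server alone, while the Pods-type entry assumes a custom metric (name illustrative) served by an installed adapter:

```yaml
# Extra entries for .spec.metrics of an autoscaling/v2 HPA.
- type: Resource
  resource:
    name: memory
    target:
      type: Utilization
      averageUtilization: 80
- type: Pods
  pods:
    metric:
      name: http_requests_per_second   # assumed custom metric name
    target:
      type: AverageValue
      averageValue: "100"
```

With multiple metrics, HPA computes a desired replica count for each and uses the highest.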

Q3: What happens if metrics exceed the target utilization?

Answer: HPA increases the number of replicas (up to maxReplicas) until average utilization falls back to the target.

Q4: Example Interview Task

# Create HPA for a Deployment
kubectl autoscale deployment webapp --cpu-percent=70 --min=2 --max=10

Explanation: This command creates an HPA for the webapp Deployment, scaling between 2 and 10 replicas based on CPU usage.

Advanced Notes

  • Custom Metrics: HPA can use metrics like request latency or queue length.
  • Scaling Policies: Define stabilization windows and scaling behaviors to avoid thrashing.
  • Cluster Autoscaler: Works with HPA to add/remove nodes when Pod scaling exceeds cluster capacity.
  • Best Practices: Always test scaling under load, set realistic min/max replicas, and monitor scaling events.
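The scaling-policy point above can be expressed in the manifest itself. A sketch of a behavior section for an autoscaling/v2 HPA, with illustrative values chosen to dampen flapping:

```yaml
# behavior section of an autoscaling/v2 HPA spec (values illustrative).
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
    policies:
    - type: Pods
      value: 1
      periodSeconds: 60               # remove at most 1 Pod per minute
  scaleUp:
    stabilizationWindowSeconds: 0     # react to spikes immediately
    policies:
    - type: Percent
      value: 100
      periodSeconds: 60               # at most double the replicas per minute
```

Conservative scale-down plus aggressive scale-up is a common pattern: spikes are absorbed quickly, while brief lulls do not trigger churn.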

Summary

Horizontal Pod Autoscaling ensures Kubernetes workloads scale dynamically based on demand. By monitoring metrics and adjusting replicas, HPA balances performance and efficiency. Combined with Cluster Autoscaler and custom metrics, it enables resilient, cost-effective applications. Mastering HPA is vital for production-ready deployments and a common topic in Kubernetes interviews.