
Event-Driven Scaling of Python Applications in Azure Kubernetes Service (AKS) using KEDA


In this article, we will walk you through setting up event-driven scaling of a Python application deployed in Azure Kubernetes Service (AKS) using Kubernetes-based Event-Driven Autoscaling (KEDA). KEDA is an open-source project that provides event-driven autoscaling for Kubernetes workloads.

Prerequisites

Before we start, ensure you have the following installed on your local machine:

  • Python: We’ll use Python to write our application.
  • Docker: We’ll use Docker to build and package our Python application.
  • Kubernetes CLI (kubectl): We’ll use kubectl to interact with our Kubernetes cluster.
  • Helm: We’ll use Helm to deploy KEDA on our Kubernetes cluster.

You’ll also need an AKS cluster for deploying our application and KEDA. If you don’t have one, you can create an AKS cluster by following this Azure documentation.
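If you prefer the CLI over the portal, an AKS cluster can be created in a few commands. The resource group, cluster name, and region below are placeholders; adjust them to your environment:

```shell
# Create a resource group (names and region are illustrative)
az group create --name myResourceGroup --location eastus

# Create a small two-node AKS cluster
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 2 --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```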




Deploying KEDA on AKS

We’ll use Helm, a Kubernetes package manager, to deploy KEDA on AKS. Execute the following commands to add the KEDA Helm chart repository and install KEDA:

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
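Once the chart installs, it’s worth confirming that the KEDA operator pods are healthy before moving on:

```shell
# The keda-operator and metrics-apiserver pods should report Running
kubectl get pods --namespace keda
```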

Building a Python Application for Event-Driven Scaling

We will create a simple Python application that reads messages from an Azure Storage Queue. This application will serve as our example for event-driven scaling: the number of pods will scale up and down based on the number of messages in the queue. Here’s a simple script that reads and deletes queue messages using the azure-storage-queue package (v12 SDK):

import os
import time

from azure.storage.queue import QueueClient  # azure-storage-queue v12 SDK

# Build the client from the same connection string KEDA will use for scaling.
queue_client = QueueClient.from_connection_string(
    conn_str=os.environ["AzureWebJobsStorage"],
    queue_name=os.environ["QUEUE_NAME"],
)

def read_messages():
    while True:
        for message in queue_client.receive_messages():
            print(f"Processing message: {message.content}")
            queue_client.delete_message(message)
        time.sleep(1)  # back off briefly instead of hammering an empty queue

if __name__ == "__main__":
    read_messages()
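The Dockerfile in the next section copies a requirements.txt into the image; for this script it only needs the queue SDK (pin a version if you want reproducible builds):

```text
azure-storage-queue
```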

Packaging and Deploying Python Application in AKS

Now, we will package our Python application using Docker. We need to create a Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD [ "python", "./app.py" ]

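With the Dockerfile in place, building and pushing the image looks roughly like this; the registry and image names below are placeholders for your own:

```shell
# Build and tag the image (replace myregistry.azurecr.io/python-app with yours)
docker build -t myregistry.azurecr.io/python-app:v1 .

# Log in to the registry and push
az acr login --name myregistry
docker push myregistry.azurecr.io/python-app:v1
```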
After building the Docker image and pushing it to a Docker registry, we create a Kubernetes deployment for our application. To enable KEDA autoscaling for that deployment, we also create a ScaledObject, which defines the scaling parameters:
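The ScaledObject targets a Deployment named python-app, so a minimal manifest for it might look like the sketch below. The image name and secret are placeholders; note that the AzureWebJobsStorage variable feeds both the application and the KEDA trigger:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
      - name: python-app
        image: myregistry.azurecr.io/python-app:v1
        env:
        - name: QUEUE_NAME
          value: myqueue
        - name: AzureWebJobsStorage
          valueFrom:
            secretKeyRef:
              name: storage-connection   # placeholder secret holding the connection string
              key: connectionString
```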

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: python-app-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: python-app
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
  - type: azure-queue
    metadata:
      queueName: myqueue
      queueLength: '5'
      connectionFromEnv: AzureWebJobsStorage

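Under the hood, KEDA feeds the queue length to a Horizontal Pod Autoscaler, which roughly targets ceil(total messages / queueLength) replicas, clamped between minReplicaCount and maxReplicaCount (and scales to zero when the queue is empty). A quick sketch of that arithmetic, using the values from the ScaledObject above:

```python
import math

def desired_replicas(messages: int, queue_length: int = 5,
                     min_replicas: int = 0, max_replicas: int = 100) -> int:
    """Approximate the replica count KEDA's HPA targets for a queue scaler."""
    if messages == 0:
        return min_replicas  # KEDA deactivates the workload when the queue is empty
    return max(min_replicas, min(max_replicas, math.ceil(messages / queue_length)))

print(desired_replicas(0))     # 0   -- empty queue, scale to zero
print(desired_replicas(12))    # 3   -- ceil(12 / 5)
print(desired_replicas(1000))  # 100 -- capped at maxReplicaCount
```

With queueLength set to 5, a burst of 12 messages yields 3 pods, and a very deep queue saturates at the 100-replica ceiling.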
Testing Event-Driven Scaling

To test the event-driven scaling, add messages to the Azure Storage Queue and observe the number of pods of your Python application. You should see more pods when there are more messages in the queue and fewer pods when there are fewer messages.
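One way to generate load is the Azure CLI; the queue name and connection-string variable below are placeholders. Watch the pods in a second terminal while the messages land:

```shell
# Push a batch of test messages onto the queue
for i in $(seq 1 50); do
  az storage message put --queue-name myqueue \
    --content "message-$i" --connection-string "$AZURE_STORAGE_CONNECTION_STRING"
done

# In another terminal, watch the deployment scale out and back in
kubectl get pods --watch
```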

Conclusion

This article walked you through setting up event-driven scaling of a Python application in Azure Kubernetes Service using Kubernetes-based Event-Driven Autoscaling (KEDA). This capability is crucial in building highly scalable and responsive applications that automatically adjust their resources based on demand.


Faizan Bashir

Principal Engineer | Architecting and building distributed applications in the Cloud | Adventurer
