
Django Meets Kubernetes: Unleash The Power Of Python - Part 3

Part 1 of the blog "Django Meets Kubernetes" explores the power of combining Django, a popular Python web framework, with Kubernetes for deploying scalable applications. It covers an introduction to both technologies, highlighting Django's MVT architecture and Kubernetes' container orchestration capabilities. The blog provides a step-by-step guide to setting up a Django project for fetching stock market data, including environment setup, key Django files like models.py, views.py, and urls.py, and using Django's built-in development server.

In Part 2 of the blog "Django Meets Kubernetes", we created a Django API application from scratch to store and retrieve stock prices. This included the Django application setup, database creation, data model creation, and the setup of Django views.

In this blog, we will walk through step-by-step instructions for deploying the Django application created in Part 2 to a Kubernetes cluster. We will use Minikube to run the Kubernetes environment on our server.

Section 1 : The Prerequisites

1. What Is Minikube and How to Set It Up on Ubuntu 22.04

Minikube is an essential tool that enables developers to run Kubernetes, a robust platform for managing containerized applications, locally. It is particularly useful for testing and development scenarios prior to a real production rollout.

To set up Minikube on Ubuntu 22.04, begin by updating the system:

```shell
sudo apt update
```

It's a best practice to keep the system up to date before installing any new software.

To install Minikube, download the binary using the curl command, then make the downloaded file executable:

```shell
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
```

Finally, to make Minikube available globally on the system, move the 'minikube' binary to the '/usr/local/bin/' directory:

```shell
sudo mv minikube /usr/local/bin/
```

Moving the binary to this directory puts it on the user's PATH, making it accessible from anywhere on the system. To confirm this, browse to any other folder, such as /tmp, and check that the command still works:

```shell
cd /tmp
minikube version
```

Note: Be cautious with sudo permissions. While executing these commands, ensure that you're aware of what each step entails for system security.

2. Install the Docker runtime

Minikube is a tool that allows developers to run a Kubernetes cluster locally, and it requires a container runtime to function. By default, Minikube uses Docker as its runtime because Docker is widely supported, easy to configure, and integrates seamlessly with Kubernetes for managing containers. Docker provides the necessary environment to run containers, which Minikube uses to simulate a Kubernetes cluster.

First, remove all existing Docker packages from your system (if any) using the following command:

```shell
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
```

We will install the Docker runtime using Docker's official apt package repository.

- Install the packages needed for the installation:

```shell
sudo apt install ca-certificates curl
```

- Create the directory for keyrings. This directory is typically used for securely storing the GPG keys that APT uses to verify the authenticity of software repositories:

```shell
sudo install -m 0755 -d /etc/apt/keyrings
```

- Download Docker's official GPG key and save it in the /etc/apt/keyrings directory under the file name docker.asc. This GPG key is later used by APT to verify the authenticity of Docker's software packages when adding and updating its repositories:

```shell
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
```

- Set the permissions on the docker.asc file so that it is readable by all users. This is important because the APT package manager needs to access this key to verify the authenticity of Docker's repository during package installations or updates:

```shell
sudo chmod a+r /etc/apt/keyrings/docker.asc
```

- Now add the repository to the apt sources and update apt:

```shell
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
```

- Install the latest Docker runtime:

```shell
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

- Verify a successful installation:

```shell
docker -v
```

3. Install kubectl

kubectl is the command-line tool used to interact with Kubernetes clusters. It allows users to manage and deploy applications, inspect and manage cluster resources, and view logs or troubleshoot issues within the Kubernetes environment. With kubectl, administrators and developers can execute commands such as creating or deleting pods, services, and deployments, scaling applications, or applying configuration changes. It communicates with the Kubernetes API server to send instructions and retrieve the cluster's state. kubectl is highly versatile and supports imperative commands as well as declarative management using YAML configuration files, making it an essential tool for Kubernetes cluster management.
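As a quick illustration of the two styles, the same pod can be created imperatively with a single command or declaratively from a manifest (a sketch; the image and names here are arbitrary):

```yaml
# Declarative style: save as nginx-pod.yaml, then run `kubectl apply -f nginx-pod.yaml`.
# Imperative equivalent: `kubectl run nginx --image=nginx:1.25`
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
```

The declarative form is preferred for anything you want to version-control, which is the approach we take for the Django deployment later in this post.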

On Ubuntu 22.04, you can install kubectl using the snap installer:

```shell
snap install kubectl --classic
```

We can verify the installation by running:

```shell
kubectl
```

4. Start Minikube

We can use the following command to start the Minikube Kubernetes cluster:

```shell
minikube start --force
```

5. Verify Kubernetes is running

Execute the following command to verify that Kubernetes is running:

```shell
kubectl get all
```

Section 2 : Containerize Your Django Application

The procedure to containerize a Django application using Docker and Kubernetes is a key aspect of deploying scalable and consistent environments. The primary step in this process involves bundling the Django application and its dependencies into a Docker container. This process turns a Django application into a standalone executable application, eliminating dependency issues and ensuring it runs uniformly across different environments.

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.8-slim-buster

# If you're behind a proxy
# ENV http_proxy http://<user>:<password>@<proxy_host>:<proxy_port>
# ENV https_proxy http://<user>:<password>@<proxy_host>:<proxy_port>

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set the working directory in the Docker image
WORKDIR /app

# Copy the existing files to a new directory /app in the container
COPY . /app

# Upgrade pip
RUN pip install --upgrade pip

# Install requirements.txt dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Expose port 8000 to allow communication to/from the server
EXPOSE 8000

# Run the command to start uWSGI
CMD ["uwsgi", "--http", ":8000", "--module", "myproject.wsgi"]
```

The Dockerfile above includes a few noteworthy features. The `ENV` lines define environment variables, which are vital when working behind a proxy. `COPY` is used instead of `ADD`, per Docker's best practices, because `COPY` is more transparent. We also upgrade pip before installing the requirements to ensure we are using the latest version. Another important addition is the `EXPOSE` directive, which informs Docker that the application will listen on port 8000.
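The build expects a requirements.txt file next to the Dockerfile. A minimal sketch of what it might contain for this project (the exact packages and version pins depend on your Part 2 code and are assumptions here):

```text
Django>=4.2,<5.0
djangorestframework
psycopg2-binary
uwsgi
```

psycopg2-binary is listed because the Kubernetes deployment in Section 3 points the app at a Postgres pod; if your Part 2 setup still uses SQLite, adjust accordingly.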

The `CMD` line includes the settings needed to serve a Django app with uWSGI: it binds HTTP on port 8000 and points uWSGI at the project's WSGI module (`myproject.wsgi`), ensuring the Django app is accessible.

Now let's create a Docker image, which we will then upload to Docker Hub. Build the image with the following command:

```shell
docker build -t <dockerhub_username>/stock_market_api:latest .
```

Upload the image to Docker Hub. You will need your Docker Hub username and password to do that:

```shell
docker login -u <dockerhub_username>
```

We can now push the Docker image to Docker Hub using the following command:

```shell
docker push <dockerhub_username>/stock_market_api:latest
```

Section 3 : Prepare Kubernetes Deployment and Service Manifests

A Kubernetes Deployment is a configurable, replicated set of pods (the smallest deployable units in a Kubernetes cluster) that run on any node in the cluster. It is, in effect, a blueprint for your application: you describe the desired state, such as the Docker image to use, the required number of replicas, and more.

Because we are using a private image, we need to create a secret containing our Docker Hub credentials. Since we have already logged in to Docker Hub, we can reuse the session information stored on our server. Verify that you have a file located at <user_directory>/.docker/config.json (on Ubuntu 22.04). Because we logged in using sudo, it will be stored at /root/.docker/config.json:

```shell
ls -la /root/.docker/config.json
```

We can create the Kubernetes secret using the following command:

```shell
kubectl create secret generic my-docker-secret --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson
```

We will be running Postgres in a separate pod, so we need to store its credentials securely. Let's create another secret, defined in a file called 'postgres-secret.yaml', to hold the Postgres connection settings:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  DB_NAME: bXlkYXRhYmFzZQ==          # Base64 encoded 'mydatabase'
  DB_USER: c3RvY2tfdXNlcg==          # Base64 encoded 'stock_user'
  DB_PASSWORD: MTIzNF9hYmNk          # Base64 encoded '1234_abcd'
  DB_HOST: cG9zdGdyZXMtc2VydmljZQ==  # Base64 encoded 'postgres-service'
  DB_PORT: NTQzMg==                  # Base64 encoded '5432'
```

Now issue the following command to create the secret:

```shell
kubectl apply -f postgres-secret.yaml
```
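The base64 strings in a Secret manifest can be generated on the command line. For example (using printf rather than echo so that no trailing newline gets encoded into the value):

```shell
# Encode each secret value; Kubernetes decodes these before injecting them into pods.
printf %s 'mydatabase' | base64
printf %s 'postgres-service' | base64
```

Running `base64 -d` on any of the encoded strings reverses the operation, which is a handy way to double-check a manifest.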

Let's create the main deployment file. In this file, we deploy the Django application, the Postgres pod, and a service that lets the Django pods reach the Postgres pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13  # You can use the version you prefer
          env:
            - name: POSTGRES_DB
              value: mydatabase
            - name: POSTGRES_USER
              value: stock_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: DB_PASSWORD
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: munirfarhan/stock_market_api:latest
          ports:
            - containerPort: 8000
          env:
            - name: DB_NAME
              value: mydatabase
            - name: DB_USER
              value: stock_user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: DB_PASSWORD
            - name: DB_HOST
              value: postgres-service  # The service name for PostgreSQL
            - name: DB_PORT
              value: "5432"
      imagePullSecrets:
        - name: my-docker-secret
```
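For the DB_* environment variables to take effect, the Django settings must read them. A minimal sketch of the database section of settings.py, assuming the Part 2 project talks to Postgres via psycopg2 (the defaults shown here are illustrative, not taken from the original project):

```python
# settings.py (excerpt) - a sketch; the variable names match the env vars
# injected by the Kubernetes deployment, and the defaults are illustrative.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "mydatabase"),
        "USER": os.environ.get("DB_USER", "stock_user"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```

Because the deployment sets DB_HOST to postgres-service, the Django pods resolve the database through the Kubernetes service rather than a hard-coded address.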

In the example above, the deployment instructs Kubernetes to bring up three instances of 'django-app'. The `kind: Deployment` line signifies that you're creating a deployment. Assuming the YAML above is stored in a file called deployment.yaml, we start the deployment by running:

```shell
kubectl apply -f deployment.yaml
```

We can verify the deployment succeeded by checking the number of running pods:

```shell
kubectl get all
```

That's it! Your Django API application has been successfully deployed on the Kubernetes cluster.

Section 4 : Test the Application

Once the deployment is complete, you can access your application. We can use port forwarding to reach it locally. First, execute the following command to get the list of running pods:

```shell
kubectl get pods
```

Let's use the pod django-app-6dfc75944c-7m86h for port forwarding. The following command forwards local port 8080 to the pod's port 8000:

```shell
kubectl port-forward pod/django-app-6dfc75944c-7m86h 8080:8000
```

Now test it by running the following command on the server's command line:

```shell
curl http://localhost:8080/api/quotes/
```

Congratulations! We have successfully deployed the Django app on Kubernetes.
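Port forwarding is convenient for a quick test, but it targets a single pod. To expose all three replicas behind one stable endpoint inside Minikube, you could also add a Service for the Django deployment (a sketch; the service name is our assumption, while the selector matches the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: NodePort
  selector:
    app: django
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
```

After applying this manifest with `kubectl apply -f`, running `minikube service django-service --url` prints a URL you can curl directly, with requests load-balanced across the replicas.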

Looking for a reliable tech partner? FAMRO-LLC can help you!

Our development rockstars excel in creating robust and scalable solutions using Django, a powerful Python framework known for its rapid development capabilities and clean, pragmatic design. FAMRO’s team ensures that complex web applications are built quickly and with precision. Their expertise allows businesses to focus on enhancing their digital presence while leaving the intricacies of backend development in skilled hands.

On the deployment side, FAMRO's Infrastructure team takes charge with Kubernetes, a leading platform for container orchestration. Their deep knowledge of Kubernetes ensures that applications are seamlessly deployed, scaled, and managed in cloud environments. By automating key processes like service discovery, load balancing, and resource scaling, FAMRO’s Infrastructure team guarantees that applications not only perform well under high traffic but also remain resilient and easy to maintain. This combination of development and DevOps expertise enables FAMRO to deliver end-to-end, highly scalable solutions.

Please don't hesitate to Contact us for a free initial consultation.

