
What Are the Basics of Async Programming in Python?

Modern software rarely spends all of its time doing heavy computation. Much of the time, applications are waiting. They wait for APIs to respond, databases to return results, files to load, messages to arrive, or users to submit input. In many business systems, that waiting time adds up quickly. Async programming is a way to handle that reality more efficiently.

For SMEs building internal tools, customer-facing platforms, integrations, or automation services, understanding the basics of async programming in Python can help teams make better architecture decisions. It is not something every project needs, but in the right situation it can improve responsiveness, scalability, and resource usage.

What Is Async Programming?

Async programming, or asynchronous programming, is a style of writing software that allows a program to continue working on other tasks while it is waiting for something else to finish.

In a traditional synchronous program, execution moves step by step. If one step needs to wait for an external operation, such as a web request or a database query, the program stops at that point until the result comes back. That pause can slow down the overall system, especially when waiting operations are frequent.

Async programming changes that model. Instead of blocking the whole program during a wait, the application can pause only the specific task that is waiting and let other tasks continue running. This is especially useful when the workload involves many I/O operations. I/O stands for input and output, such as reading files, making network calls, or talking to databases and services.

A simple way to think about it is this: synchronous code stands in line and waits its turn, while async code starts a task, steps aside while that task is waiting, and uses the time to do something else useful.

That does not mean everything happens at once in the same way as parallel computing. Async programming is mainly about managing waiting time efficiently, not automatically using multiple CPU cores. It is best suited to workloads where delays come from external systems rather than from heavy calculations.

Key Advantages of Async Programming

The main value of async programming comes from making better use of time that would otherwise be wasted.

One major advantage is improved responsiveness. If an application needs to make many external requests, async code can keep serving other work instead of freezing while each request completes. This matters in web applications, dashboards, chat systems, automation tools, and service integrations where users expect fast responses.

Another benefit is better handling of many simultaneous I/O tasks. Suppose a program needs to fetch data from twenty APIs. In a synchronous approach, it may do them one by one and spend most of its time waiting. With async, it can initiate several requests and allow them to progress together, dramatically reducing total waiting time.

Async programming can also improve resource efficiency for suitable workloads. Instead of creating many threads or processes to deal with many connections, async systems often handle large numbers of waiting tasks with less overhead. That can reduce memory use and improve throughput when designed well.

For SMEs, this can translate into business value in practical ways. Customer portals may feel faster. Integration services may process more jobs with the same infrastructure. Monitoring or messaging tools may support more connections without immediate scaling costs.

Still, the key phrase is “for suitable workloads.” Async is not a universal performance fix. It is valuable when the bottleneck is waiting for I/O, not when the main challenge is CPU-intensive computation.

Common Use Cases for Async Programming

Async programming is most useful where systems spend a lot of time waiting on external operations.

A common example is web services. Many modern back-end applications receive requests, call external APIs, query databases, check authentication providers, or interact with storage services. Async can help these applications manage many client connections efficiently, especially when each request depends on several I/O steps.

Another strong use case is API-heavy integration work. Many SMEs rely on external SaaS platforms for CRM, ERP, billing, marketing automation, shipping, and support. If your Python service needs to call multiple APIs, poll endpoints, or synchronize records, async can reduce idle time and improve throughput.

Network tools are also a natural fit. Scripts or services that monitor endpoints, scan devices, handle sockets, or manage network communication often involve many waiting operations. Async helps keep these tools responsive without blocking on every connection.

Messaging systems benefit as well. Applications that consume queues, process events, or communicate through message brokers often wait for incoming messages or external acknowledgments. Async code can manage those workflows more efficiently.
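As a minimal sketch of that messaging pattern, the standard library's asyncio.Queue can stand in for a real message broker. The producer and consumer names here are illustrative, not part of any specific broker API:

```python
import asyncio

async def producer(queue: asyncio.Queue):
    # Simulate messages arriving from a broker at irregular intervals.
    for i in range(3):
        await asyncio.sleep(0.1)
        await queue.put(f"event-{i}")
    await queue.put(None)  # sentinel: no more messages

async def consumer(queue: asyncio.Queue, results: list):
    while True:
        message = await queue.get()  # yields to the event loop while waiting
        if message is None:
            break
        results.append(message.upper())

async def main():
    queue = asyncio.Queue()
    results = []
    # Producer and consumer run concurrently on the same event loop.
    await asyncio.gather(producer(queue), consumer(queue, results))
    return results

print(asyncio.run(main()))  # ['EVENT-0', 'EVENT-1', 'EVENT-2']
```

While the consumer awaits the queue, the event loop is free to run the producer or any other task, which is exactly the behavior a broker-backed worker relies on.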

Data pipelines can also use async effectively when the pipeline spends significant time waiting for remote systems. For example, collecting data from many APIs, downloading files, or enriching records through external services can be a good match. However, if the pipeline mainly performs heavy data transformation or analytics, async alone may not help much.

In short, async is often a strong choice when the workload has many waiting points and needs to handle multiple operations smoothly.

Things to Watch Out For

Although async can be powerful, it introduces trade-offs that teams should understand before adopting it broadly.

The first is complexity. Async code adds a different execution model, and developers need to think carefully about where execution can pause and resume. That can make the code harder to follow compared with straightforward synchronous logic, especially for teams new to the pattern.

Debugging can also be more challenging. Problems related to task timing, cancellations, timeouts, or incomplete awaits may be less obvious than in standard sequential code. Error handling also needs more care when several tasks are running within the same event loop.
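One common defensive pattern for those timing problems is an explicit timeout. A minimal sketch using asyncio.wait_for, with slow_operation standing in for a hung external call:

```python
import asyncio

async def slow_operation():
    await asyncio.sleep(10)  # pretend this is a hung external service
    return "done"

async def main():
    try:
        # Cancel the coroutine if it does not finish within 0.5 seconds.
        return await asyncio.wait_for(slow_operation(), timeout=0.5)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # timed out
```

Without the timeout, one stalled dependency could hold a task open indefinitely, which is exactly the kind of failure that is harder to spot in async code than in sequential code.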

Another important point is that async is not the right solution for CPU-heavy work. If a task is busy doing calculations, compressing files, processing images, or running machine learning inference, async does not magically make that faster. In fact, running CPU-heavy tasks directly in the async event loop can block other tasks and harm performance.

This is also where Python’s GIL, or Global Interpreter Lock, matters. In standard CPython, the GIL allows only one thread at a time to execute Python bytecode within a process. That means neither async nor regular threading is a silver bullet for CPU-bound workloads. Async helps programs deal efficiently with waiting, but it does not bypass the GIL to make heavy computation run in parallel on multiple cores. For CPU-intensive tasks, teams often need multiprocessing, external worker systems, or native extensions that can release the GIL.
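A common way to keep the event loop responsive around CPU-heavy work is to hand the computation to an executor. The sketch below uses the default thread pool for simplicity; because of the GIL, a ProcessPoolExecutor from concurrent.futures would be the choice for true multi-core parallelism. The function names are illustrative:

```python
import asyncio

def heavy_calculation(n: int) -> int:
    # CPU-bound work that would block the event loop if run directly in it.
    return sum(i * i for i in range(n))

async def heartbeat():
    # Keeps running while the calculation executes off the loop.
    for _ in range(3):
        await asyncio.sleep(0.05)
        print("event loop still responsive")

async def main():
    loop = asyncio.get_running_loop()
    # None selects the default thread pool; swap in a ProcessPoolExecutor
    # for genuine parallel execution of pure-Python computation.
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, heavy_calculation, 200_000),
        heartbeat(),
    )
    return result

print(asyncio.run(main()))
```

The point of the sketch is the structure, not the speed: the loop keeps scheduling heartbeat while the calculation runs outside it.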

Teams should also avoid mixing async into a codebase without a clear reason. If the application only performs a few occasional I/O operations, synchronous code may be simpler and easier to maintain. Complexity should only be added when it brings a clear operational benefit.

Finally, async libraries and frameworks must be chosen carefully. Some libraries are designed for async workflows, while others are not. Using blocking libraries inside async code can remove much of the benefit and lead to confusion.
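When a blocking library is unavoidable, the standard escape hatch is asyncio.to_thread (Python 3.9+), which runs the call in a worker thread instead of stalling the loop. Here blocking_call is a hypothetical stand-in for, say, a classic synchronous HTTP client:

```python
import asyncio
import time

def blocking_call():
    # Stand-in for a blocking library call, e.g. a synchronous HTTP request.
    time.sleep(0.2)
    return "response"

async def main():
    # Calling blocking_call() directly inside a coroutine would freeze the
    # event loop for 0.2 s. Handing it to worker threads lets both calls
    # overlap and keeps other tasks running.
    results = await asyncio.gather(
        asyncio.to_thread(blocking_call),
        asyncio.to_thread(blocking_call),
    )
    return results

print(asyncio.run(main()))  # ['response', 'response']
```

This is a bridge, not a cure: where a genuinely async library exists for the job, using it is usually the cleaner option.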

Python and Async Programming

Python supports async programming natively through the async and await keywords.

These features make asynchronous code more readable than older callback-based patterns.

An async def function defines a coroutine. A coroutine is a special kind of function that can pause its execution and later continue from where it stopped. Inside that function, await is used to pause until another asynchronous operation completes.

Behind the scenes, Python typically uses an event loop to manage this work. The event loop is responsible for tracking tasks, deciding when they are ready to continue, and switching between them when one is waiting. Rather than blocking the whole application on a single wait, the event loop keeps other ready tasks moving forward.

The most widely used built-in module for this is asyncio. It provides the tools to run coroutines, create tasks, manage timing, and coordinate multiple asynchronous operations.

The basic mental model looks like this:

   You define asynchronous functions with async def

   You pause for asynchronous operations with await

   The event loop runs and coordinates those coroutines

   While one task waits, another task can continue

This model allows Python applications to handle multiple waiting operations efficiently without relying entirely on threads.

Simple Usage Examples

Here is a simple example of an asynchronous function:

import asyncio

async def say_hello():
    print("Starting task...")
    await asyncio.sleep(2)
    print("Hello after 2 seconds")

asyncio.run(say_hello())

Simple Python example of an async function

In this example, asyncio.sleep(2) simulates a waiting operation. The function starts, pauses for two seconds, and then continues. The important detail is that await marks the point where the coroutine gives control back to the event loop.

Now let us look at a more useful example with multiple tasks:

import asyncio

async def fetch_data(name, delay):
    print(f"{name}: request started")
    await asyncio.sleep(delay)
    print(f"{name}: response received")
    return f"{name} result"

async def main():
    task1 = asyncio.create_task(fetch_data("API 1", 2))
    task2 = asyncio.create_task(fetch_data("API 2", 1))
    task3 = asyncio.create_task(fetch_data("API 3", 3))

    results = await asyncio.gather(task1, task2, task3)
    print("All results:", results)

asyncio.run(main())

Concrete Python example with multiple tasks

This example creates three asynchronous tasks. Each one simulates a request with a different delay. Instead of waiting for each task one by one, the program allows them to progress during the waiting periods. The total runtime is closer to the longest single delay, not the sum of all delays.

For comparison, here is what a synchronous version might feel like conceptually:

import time

def fetch_data(name, delay):
    print(f"{name}: request started")
    time.sleep(delay)
    print(f"{name}: response received")
    return f"{name} result"

results = [
    fetch_data("API 1", 2),
    fetch_data("API 2", 1),
    fetch_data("API 3", 3)
]

print("All results:", results)

Synchronous Python example with multiple tasks

This version waits for each call to finish before starting the next one. The delays add up directly, which is often inefficient for I/O-heavy workloads.
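The difference is easy to measure. This sketch reuses the same simulated delays and times the async version with time.perf_counter:

```python
import asyncio
import time

async def fetch_data(name, delay):
    await asyncio.sleep(delay)
    return f"{name} result"

async def main():
    # All three "requests" progress during each other's waiting periods.
    return await asyncio.gather(
        fetch_data("API 1", 2),
        fetch_data("API 2", 1),
        fetch_data("API 3", 3),
    )

start = time.perf_counter()
asyncio.run(main())
elapsed = time.perf_counter() - start
# Roughly 3 seconds (the longest delay), not 6 (the sum of all delays).
print(f"elapsed: {elapsed:.1f}s")
```

Running the synchronous version with the same delays takes about six seconds, so the gap grows directly with the number of concurrent waits.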

Code Walkthrough

Let us break down the async example step by step.

The fetch_data function is defined with async def, which means it is a coroutine function. Calling it does not immediately run it in the usual way. Instead, it creates a coroutine object that the event loop can manage.

Inside fetch_data, the line await asyncio.sleep(delay) tells Python that this task needs to pause for a period of time. During that pause, the event loop can run other tasks that are ready. This is the heart of async programming: work is not blocked just because one task is waiting.

In the main function, asyncio.create_task() schedules each coroutine to run as an independent task. At that point, the event loop can begin managing all three tasks.

Next, await asyncio.gather(task1, task2, task3) waits for all the tasks to complete and collects their results. While main is waiting for the group, the event loop continues switching among the running tasks based on what is ready.

Here is how the flow works in practice:

   Task 1 starts and reaches its await, so it pauses.

   Task 2 starts and also pauses at its await.

   Task 3 starts and pauses.

   The event loop keeps checking when each delay has finished.

   When Task 2 is ready first, it resumes and completes.

   Task 1 resumes next.

   Task 3 resumes last.

   Once all are finished, gather returns the results.

The important lesson is that the tasks are not completing because Python is executing them all at the same exact moment on multiple CPU cores. They are completing efficiently because the event loop avoids wasting time during waits.

This is why async works well for network calls, database requests, message waiting, and similar operations. Whenever one task has nothing useful to do except wait, another task gets a chance to run.

Using Async for Streaming ChatGPT Responses with WebSockets

One practical and highly relevant use of async programming is real-time AI applications. A common example is streaming a ChatGPT response from a Python back end and showing it live on a front end through WebSockets.

This is a strong fit for async because there are multiple waiting points happening at the same time. The server is waiting for chunks of text from the AI model, and the browser is waiting for updates from the server. With async programming, Python can handle both flows smoothly without blocking the application between each chunk.

In a typical setup, the front end sends a user message to the server over a WebSocket connection. The Python back end then forwards that prompt to the LLM API using a streaming request. Instead of waiting for the entire response to finish, the server receives the answer piece by piece. As each piece arrives, it immediately pushes that chunk through the WebSocket to the browser. The browser then appends the new text to the chat window in real time.

This creates the typing-style experience users now expect in modern AI tools. More importantly, it improves perceived performance. Even if the full answer takes several seconds to complete, the user sees progress almost immediately.

Here is a simplified example of what that pattern can look like in Python:

import asyncio
import json
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def fake_llm_stream(prompt: str):
    response_chunks = [
        "Async programming ",
        "helps applications ",
        "handle waiting tasks ",
        "without blocking."
    ]
    for chunk in response_chunks:
        await asyncio.sleep(0.5)  # simulate streamed model output
        yield chunk

@app.websocket("/chat")
async def chat_socket(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        message = json.loads(data)
        prompt = message["prompt"]

        async for chunk in fake_llm_stream(prompt):
            await websocket.send_text(json.dumps({
                "type": "chunk",
                "content": chunk
            }))

        await websocket.send_text(json.dumps({
            "type": "done"
        }))

Save the code above as main.py. To run it, install the fastapi, uvicorn, and websockets Python libraries, then start the FastAPI application with the following command: uvicorn main:app --reload

On the front end, the browser connects to the WebSocket endpoint and updates the interface whenever a new message chunk arrives:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>FastAPI WebSocket Test</title>
</head>
<body>
  <button onclick="connect()">Connect</button>
  <button onclick="sendPrompt()">Send Prompt</button>
  <pre id="output"></pre>

  <script>
    let socket;

    function connect() {
      socket = new WebSocket("ws://localhost:8000/chat");

      socket.onopen = () => {
        document.getElementById("output").textContent += "Connected\n";
      };

      socket.onmessage = (event) => {
        document.getElementById("output").textContent += event.data + "\n";
      };

      socket.onclose = () => {
        document.getElementById("output").textContent += "Closed\n";
      };
    }

    function sendPrompt() {
      socket.send(JSON.stringify({
        prompt: "Explain async programming"
      }));
    }
  </script>
</body>
</html>

Save the code above as test.html. You can serve it with Python's built-in web server: python3 -m http.server 5500. Then open http://127.0.0.1:5500/test.html in your browser. The following GIF shows the interaction: a streaming response from the LLM call.

The async value here is easy to see. The server does not need to wait until the full AI response is ready before sending anything to the client. It can keep listening, receiving, and forwarding data as events happen. This makes the application feel much more interactive and is one of the clearest real-world examples of why async programming matters.

From a business perspective, this pattern is useful for AI chat assistants, customer support tools, internal knowledge bots, and workflow copilots. Users get faster visible feedback, and the application can manage real-time communication more effectively. For SMEs building AI-powered platforms, async plus WebSockets is often a practical foundation for delivering a modern chat experience.

Conclusion

Async programming in Python is fundamentally about handling waiting time better. It allows applications to pause specific tasks during I/O operations and continue progressing elsewhere instead of blocking the whole flow.

For SMEs, that can bring real value in web services, integrations, messaging platforms, monitoring tools, and data workflows that depend on many external systems. It can improve responsiveness, support more simultaneous operations, and make better use of infrastructure for the right workload type.

At the same time, async is not a universal answer. It adds complexity, requires a solid understanding of execution flow, and should not be used as the default solution for CPU-heavy work. The right question is not whether async is modern or powerful. The right question is whether your application spends enough time waiting to justify the model.

Python makes async approachable through async, await, asyncio, and the event loop model. Once teams understand these basics, they are in a much better position to evaluate where asynchronous design makes sense and where simpler synchronous code remains the better business decision.

When chosen carefully, async programming can help teams build Python applications that are more responsive, more scalable for I/O-heavy workloads, and better aligned with the demands of modern digital systems.

That is where FAMRO-LLC can help. We provide practical software development services for startups, SMEs, and growing businesses that need modern backend systems, API development, cloud-ready applications, system integrations, and production-focused engineering support. Whether the need is a new Python-based platform, a scalable web application, an internal automation system, or modernization of an existing product, we help translate business requirements into robust and maintainable software solutions.

If your business is planning a new software product, improving an existing platform, or looking for a reliable technical partner for backend and cloud-based development, FAMRO-LLC can help you move forward with a solution designed for real operational needs.

To help organizations get started, we offer a free initial consultation focused on your software goals, technical challenges, and delivery priorities.
🌐 Learn more: Visit Our Homepage
💬 WhatsApp: +971-505-208-240
