
The Evolution of HTTP: A Developer's Guide from HTTP/1.1 to HTTP/3 and QUIC

Last Updated: 2025-12-05
Category: Web Development
Reading Time: 15 minutes
Article-type: Standard
[Image: an abstract graphic with the text 'Why upgrading to HTTP/3 Matters!', with 'HTTP/3' stylized in a fiery, flaming font]

The Different Versions of the HyperText Transfer Protocol (HTTP)

As web developers, we often focus on optimizing our JavaScript, compressing images, and fine-tuning CSS. But there's a foundational layer that significantly impacts performance that many of us take for granted—the HTTP protocol itself.

The web is built over layers of different protocols stacked over each other.

The HTTP (HyperText Transfer Protocol) is a set of rules for communication between a client and a server. But the communication channel itself is built using other protocols, specifically TCP (Transmission Control Protocol) or UDP (User Datagram Protocol).

To understand the progression of HTTP through its versions, we have to understand the challenges each previous version faced, both due to its own limitations and due to those imposed by the underlying protocols, TCP and UDP.

Let's start with a brief overview of TCP, because every version of HTTP except the latest, HTTP/3, was built on top of it, and many websites (~66%) still use those versions.

TCP (Transmission Control Protocol)

The purpose of the TCP is to establish a reliable communication channel between the client and the server.

Its job starts with a handshake between the client and the server (an initial back-and-forth exchange of small data packets), which itself is a 3-step (SYN, SYN-ACK, ACK) process.

Once the connection is established, its next job involves sending client requests, getting server responses back, and putting all the response data together in the right order before showing it to the client.

It also acknowledges every response data packet it receives (like a delivery receipt), so the server can resend any packets whose receipts are missing. This ensures that every response from the server reliably reaches the client.

With that clear, we can now look at how the different versions of HTTP evolved.

Earlier Versions of HTTP

HTTP 0.9 (1991)

Imagine you live on a remote island and there is only one eatery with a home-delivery option, serving only one type of dish (HTML). You want to try their food one day, so you (the client) call them (the server).

  1. You—Hello, Can you hear me? (SYN)
  2. The receptionist—Hi! Yes, I can hear you. Can you hear me as well? (SYN-ACK)
  3. You—Yes, I can hear you as well. (ACK)

This completes the TCP handshake between the client and the server. The sole purpose of this elaborate 3-step process is to ensure that both the client and the server acknowledge they can hear each other, so that data can be exchanged between them.

The eatery is pretty basic, and you can only place one order, or request, like 'GET meal.html'. That is literally how simple an HTTP request looked in the early days of the web: just one line of text, with no headers or anything else.

The delivery boy (the TCP) then would come to your doorstep and deliver the meal.
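On the wire, an HTTP/0.9 exchange really was this minimal (the path and content here are illustrative): a one-line request, a raw HTML response with no status line or headers, and then the connection closes:

```
GET /meal.html

<html>Today's meal: pizza</html>
(connection closed)
```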

HTTP 1.0 (1996)

The eatery has now evolved into a full home-delivery restaurant. It can serve several types of dishes, also with the option for you to customize. This time—

  1. You'd still proceed with the same TCP handshake.
  2. You'd place your order, with custom instructions (headers), like only veg or no pineapple on my pizza.
  3. The restaurant also delivers the food with headers written on every parcel (content-type, content-length, response status etc.)

The communication became verbose, but it was still simple and all in plain text, so you could read it. The problem? If you had to place multiple orders, like food for your entire family, with ice creams, soups and starters (CSS, images etc.), you'd have to repeat the entire process again, for each order!
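A hypothetical HTTP/1.0 exchange shows the new verbosity (the header names are real; the paths and values are made up). Request headers go out, response headers come back, and the connection still closes after each exchange:

```
GET /pizza.html HTTP/1.0
Accept: text/html
User-Agent: Mozilla/2.0

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 1042

<html> ... </html>
(connection closed; the next request needs a fresh handshake)
```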

HTTP 1.1 (1997)

The restaurant realised the inefficiency of the process, so it trained its receptionist to keep in touch with the customer for a set time period (called KeepAliveTimeout).

Within the "KeepAliveTimeout" period, you can make another request without the unnecessary handshake and without introducing your name and address every time. The timeout also resets after every request. This way, both the client and the server could "keep alive" the connection 🙂.

HTTP 1.1 was revolutionary this way, and perhaps that is why it dominated the web for decades. To this day, many websites still use HTTP 1.1 to serve at least part of their content.

For simple websites serving few resources per webpage, this design was good. But as the web evolved and webpages started serving more, the limitations of the design became apparent.

The major problem was that, while the client could place more requests on the same connection, the server processed them only sequentially, one by one.

One slow request (like a huge image) could block the rest of the queue. This is called Head-of-Line (HOL) blocking.

To circumvent this, browsers used a clever trick: opening multiple client-server connections (a maximum of 6-8 at a time), so that multiple requests could be processed and delivered in parallel. But this way of achieving parallelism came at the expense of more memory and computational resources, on both the server and the client.

HTTP 2.0 (2015)

HTTP 2.0 brought major improvements, with websites reporting 30-40% faster loading times. Many tricks developers had employed to work around the limited parallelism of multiple connections (like concatenating CSS files and using image sprites) were also no longer needed.

  1. Multiplexing: the client can place multiple requests on a single connection, and the server can process them in parallel and return responses in any order, even in interleaved chunks, as they complete. HTTP/2 labels these chunks (frames) with stream identifiers so that each response can be reassembled in the correct order before it is delivered to the application.
  2. Server Push: the server can push certain files automatically without the client requesting them. For example, when an HTML file is requested, the server knows the browser will eventually request the linked CSS file as well, so it can send that proactively.

Another significant change was the switch to binary framing (instead of plain text). This made the raw data less human-readable but more efficient for machines to parse.

The new design elegantly solved HOL blocking at the server (application) level. But the TCP delivery guy still insists on delivering responses strictly in order. If any response packet goes missing, TCP holds back all subsequent packets until the server resends the missing one. So the same HOL blocking problem persisted, just pushed down to the TCP level.

TCP was intentionally designed this way, to ensure that every response arrives complete and in the order requested. And that actually made sense back then: a CSS or font file without the HTML would be useless.

Another problem with the TCP design was the need for extra handshakes to implement the TLS (Transport Layer Security) protocol, i.e. HTTPS, on top of it.

The TLS layer, now non-negotiable for protecting user data and privacy, requires creating and exchanging cryptographic keys between the client and the server so that data can be encrypted, transmitted and decrypted at both ends. The whole process adds at least 2 extra round trips before a single byte of request (or response) can be exchanged.
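As a rough timeline (TLS 1.2 shown; TLS 1.3 trims the TLS portion to one round trip), the client spends several round trips before the first real request leaves:

```
Client                                    Server
  |-- SYN -------------------------------->|  \
  |<-------------------------- SYN-ACK ----|   } 1 RTT: TCP handshake
  |-- ACK -------------------------------->|  /
  |-- ClientHello ------------------------>|  \
  |<--- ServerHello, Certificate, ... -----|   } 2 RTTs: TLS 1.2 handshake
  |-- KeyExchange, Finished -------------->|  /
  |<----------------------- Finished ------|
  |-- GET /index.html -------------------->|  first byte of the actual request
```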

Thus, while HTTP/2.0 improved the server application architecture, challenges persisted within the communication channel, TCP itself. The bottleneck inherent in TCP's design led to its replacement in HTTP/3.0, which instead introduced an entirely new communication channel called QUIC.

Nonetheless, around 65% of active websites today still employ HTTP 1.1 and 2.0 which are built over TCP.

UDP (User Datagram Protocol)

In contrast to the TCP "delivery agent," the UDP delivery guy prioritizes speed over reliability. He simply drops off the packages without verifying their contents (whether they are complete or anything is missing), and no handshake is required either. You could say UDP essentially just dumps the response data on the client, in no particular order.

All this "removal of responsibility" makes UDP very fast. So where would you want such a communication channel? Anywhere reliability matters less than speed, like streaming a video or playing a game. For a video streaming at, say, 30 fps, missing 1 or 2 frames is not even noticeable to the viewer.

How, then, are your videos streamed in the proper sequence? Since UDP doesn't take responsibility for in-order delivery, the client's application must. When you watch a movie on Netflix, the video player requests not the entire video but only enough to fill its jitter buffer, and it implements its own sequencing by adding sequence markers to the datagrams so it can reorder them. It keeps many dozens of such buffers filled in advance.
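A tiny sketch of that application-level sequencing (the function names here are made up for illustration, not a real player API): tag chunks with sequence numbers before sending, then sort whatever arrives to rebuild the buffer, tolerating a lost datagram:

```javascript
// Tag each chunk with a sequence number before it is sent as a datagram.
function tagChunks(chunks) {
  return chunks.map((data, seq) => ({ seq, data }));
}

// Datagrams can arrive out of order (or not at all); sort by sequence
// number to rebuild a playable buffer on the client side.
function reorder(datagrams) {
  return [...datagrams].sort((a, b) => a.seq - b.seq).map(d => d.data);
}

const sent = tagChunks(['frame0', 'frame1', 'frame2', 'frame3']);
// Simulate out-of-order arrival, with one datagram ('frame1') lost in transit.
const received = [sent[2], sent[0], sent[3]];
const buffer = reorder(received); // ['frame0', 'frame2', 'frame3']
```

A 30 fps stream simply plays on past the missing frame instead of stalling the way TCP would.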

The reason I mention UDP here is that the QUIC protocol, which the latest HTTP 3.0 employs, is built on top of it.

HTTP 3.0 (2022)

HTTP 3.0 runs over the QUIC communication channel (originally "Quick UDP Internet Connections") instead of TCP, and it eliminates transport-level HOL blocking. Unlike HTTP/2, where a lost TCP packet blocks all subsequent streams, QUIC processes streams independently: if one image's packets are lost, only that stream stalls, while your CSS and JavaScript continue unaffected.

The other features of HTTP/3 are—

  1. Every connection is encrypted and secure by default. The TLS (version 1.3) security layer is built right into QUIC, not layered on top of it, combining the transport handshake and the TLS handshake into a single step. No more 3 or more round trips to establish an encrypted connection.
  2. 0-RTT, or zero round-trip time, for returning clients: even the initial full handshake is needed only the first time. The server gives the client a session ticket, valid for a set period, which the client can use on its next visit to send requests right away. This is a significant performance gain for applications people open time and again: social media apps on mobiles, e-commerce marketplaces, games etc.
  3. Connection migration: HTTP/3 keeps the connection open even when the client switches networks. When you go outside, your mobile switches from Wi-Fi to cellular, which also changes your device's IP address. TCP endpoints are identified by IP address and port, so the connection must be re-established. QUIC instead uses unique connection IDs that persist for as long as the application session is active.

How to Upgrade the HTTP Version of Your Website

Most modern websites are hosted on servers where the hosting service already runs at least HTTP/2. However, last year's data from HTTP Archive suggests a good ~20% of websites are still running HTTP/1.1, so encountering one is not uncommon.

Upgrading to HTTP/3 directly from the endpoint or origin is complex. The simplest approach is to utilize a Content Delivery Network (CDN). Most widely used CDNs, including those offered by hosting providers, typically enable HTTP/3 by default.

When you read claims that 30-35% of websites use HTTP/3, note that, excluding major entities like Google, Amazon, and the CDNs themselves, a significant portion of these websites implement HTTP/3 through CDNs or edge servers rather than directly from their own origin server. The communication with the origin still runs on HTTP/2 or 1.1.

The reason is that, as of this writing, there is little native HTTP/3 support among the common server stacks (Apache, Node/Express, Django etc.); Nginx is the exception. Django does have third-party library support; check django-http3.

Still, if you wish to implement settings explicitly (like on unmanaged platforms), there are configuration text files you have to tweak.

Nginx Configuration

Enabling HTTP/2:

nginx.conf

server {
    listen 443 ssl http2;
    server_name example.com;
      
    ssl_certificate /path/to/certificate.pem;
    ssl_certificate_key /path/to/private-key.pem;
      
    # Optimize (note: these two directives are obsolete since nginx 1.19.7,
    # which tunes header limits via large_client_header_buffers instead)
    http2_max_field_size 16k;
    http2_max_header_size 32k;
      
    # Enable compression
    gzip on;
    gzip_vary on;
    gzip_types text/css application/javascript;
      
    location / {
        root /var/www/html;
        index index.html;
    }
}

Enabling HTTP/3 (QUIC):

nginx.conf

server {
    listen 443 ssl http2;
    listen 443 quic reuseport;
      
    server_name example.com;
      
    ssl_certificate /path/to/certificate.pem;
    ssl_certificate_key /path/to/private-key.pem;
    ssl_protocols TLSv1.3;
      
    # announce
    add_header Alt-Svc 'h3=":443"; ma=86400';
      
    # QUIC-specific settings
    ssl_early_data on;
    quic_retry on;
}

Apache Configuration

Enabling HTTP/2:

httpd.conf

LoadModule http2_module modules/mod_http2.so

<VirtualHost *:443>
    ServerName example.com
      
    SSLEngine on
    SSLCertificateFile /path/to/certificate.pem
    SSLCertificateKeyFile /path/to/private-key.pem
      
    # Enable HTTP/2
    Protocols h2 http/1.1
      
    # Optimize settings
    H2Push on
    H2PushPriority * after
    H2PushPriority text/css before
    H2PushPriority application/javascript interleaved
</VirtualHost>

Apache has no support for HTTP/3 as of this writing.

Node.js/Express Setup

server.js

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
    key: fs.readFileSync('private-key.pem'),
    cert: fs.readFileSync('certificate.pem')
});

server.on('stream', (stream, headers) => {
    if (headers[':method'] === 'GET') {
        stream.respond({
            'content-type': 'text/html',
            ':status': 200
        });

        //... other response data
        stream.end();  // close the stream once the response body is written
    }
});

server.listen(443);

Node.js has no native support for HTTP/3 as of this writing.

How to Test & Evaluate Performance Yourself

It is important to see in numbers how much performance improves across the different HTTP versions. Fortunately, testing this is pretty simple: you can run a detailed test on any website, including your own, using just the browser developer tools.

Here are the detailed steps (I'll be using Google Chrome)—

  1. Copy the URL of the website you want to test, preferably one that runs fully on HTTP/3 (from origin). I'm using Cloudflare's homepage for this — https://www.cloudflare.com/en-in/
  2. Go to developer tools > Network and check "Disable cache" at the top. If the website you're testing employs service workers, you need to bypass those caches as well: go to the Application tab and check "Bypass for network".
  3. Close all instances of your browser. Open command terminal and run the command:
    start chrome --disable-quic --disable-http2 --incognito https://www.cloudflare.com/en-in/

    The above command opens the URL in Chrome's incognito mode, and since both HTTP/3 (QUIC) and HTTP/2 are disabled, the webpage loads over HTTP/1.1.

  4. Open developer tools again and click the Network tab. Right-click on any column header (Name, Status etc.) and check Protocol. Reload the page. You'll see resources loading one by one over HTTP/1.1.
  5. Note the performance of these important metrics—
    • DOMContentLoaded time at the bottom
    • Click on the homepage's document (most likely the first one in the list), go to the Timing tab and note the time for Initial connection (the time to complete the client-server handshake). Note the Waiting time as well; the TTFB (time to first byte) is the sum of the waiting and handshake times.
    • Click on the performance tab at the top and note the LCP (Largest Contentful Paint).
  6. Repeat steps 3, 4 and 5 for HTTP/2 and HTTP/3 as well.
    • Disable only quic to run HTTP/2:
      start chrome --disable-quic --incognito https://www.cloudflare.com/en-in/
    • For HTTP/3:
      start chrome --incognito https://www.cloudflare.com/en-in/

Here are my test results:

| Metric | HTTP/1.1 (Base) | HTTP/2 | HTTP/3 |
|---|---|---|---|
| Initial Connection (Handshake) | 136 ms | 97 ms (28.7% faster) | 64 ms (52.9% faster) |
| Time to First Byte (TTFB) | 676 ms | 620 ms (8.3% faster) | 433 ms (35.9% faster) |
| DOMContentLoaded | 1.37 s | 1.26 s (8.0% faster) | 1.16 s (15.3% faster) |
| Largest Contentful Paint (LCP) | 2.34 s | 2.05 s (12.4% faster) | 1.80 s (23.1% faster) |

I tested many websites (including my own) while writing this article, and in many cases HTTP/1.1 outperformed HTTP/2! This is not to say you should go back to HTTP/1.1; ideally it should underperform. What it suggests is that HTTP/2 on those servers is not optimised.

It does tell us something important: a poorly optimised configuration will not deliver the results we expect just from upgrading HTTP.

Conclusion

The easiest way to upgrade to HTTP/3 is currently through a CDN like Cloudflare or BunnyCDN. While this might not give you full HTTP/3 from your server's origin, using a CDN is adequate and adds security benefits. Another option is a managed solution from your hosting provider; in that case, it's worth testing websites that already use it (for example, your hosting provider's own homepage).