HTTP/3 – how does it perform compared to HTTP/2?
Tomasz Gajewski
HTTP is the primary application-level protocol used for the web. Its latest iteration has been in development for several years and, in June 2022, it became a Proposed Standard.
Therefore, it is a perfect opportunity to test how it performs, as HTTP/3 support has been built into NGINX binaries since version 1.25.0. We use NGINX at Kiwee as both a web server and a reverse proxy for many projects, including Shopware 6-based online stores.
How HTTP/3 is different from its predecessors
Previous versions of HTTP used TCP as the transport protocol. TCP has certain inefficiencies, though, particularly in handling packet loss, which can block an entire stream of data (known as head-of-line blocking) until the lost packet is retransmitted.
HTTP/3 adopts QUIC, a new transport protocol initially designed and developed at Google. QUIC is based on UDP and eliminates head-of-line blocking at the transport level by handling streams independently, so the loss of a packet in one stream does not stall the entire connection.
More improvements in HTTP/3
Another improvement is security: QUIC includes built-in TLS support, whereas in previous HTTP versions the security layer was separate. With QUIC, connection setup and TLS negotiation happen together, reducing latency. QUIC adopts the latest version of TLS: 1.3.
Multiplexing is supported by both HTTP/2 and HTTP/3: multiple requests can be made in parallel over a single connection. However, in HTTP/2 a lost packet still stalls all the streams due to TCP's head-of-line blocking. QUIC multiplexes streams independently on top of UDP, so a lost packet blocks only the stream it belongs to, which leads to better performance when packet loss occurs.
HTTP/2 - TCP head-of-line blocking example: when a packet is lost, all other packets need to wait until it is retransmitted.
HTTP/3's solution to head-of-line blocking: streams are managed independently in the transport layer by the QUIC protocol, so when a packet is lost, only one stream is blocked until the packet is retransmitted.
Faster initial connection - since QUIC has TLS built in, transport and encryption are negotiated in a single handshake when the connection is established, unlike HTTP/2, where TCP and TLS each need their own handshake.
Connection migration - QUIC supports connection migration, which means that if a user changes their network interface (e.g., from Wi-Fi to mobile data), QUIC can continue using the same connection without interruption. In contrast, TCP ties a connection to a specific IP address, so changing networks usually means starting a new connection.
Zero Round Trip Time Resumption (0-RTT) - when resuming a previous session, QUIC allows the client to send application data before the handshake formally completes, which can speed up the first request to the web server.
HTTP/3 vs HTTP/2 Benchmark
The primary test website was a static CMS page. A static page eliminates additional latency caused by executing application code, connecting to a database, or calling other external services.
Testing tool
The HTTP testing script incorporates Puppeteer - a Node.js library that allows running Chrome in headless mode and accessing DevTools metrics, particularly those from the Network section.
Testing methodology
The test page's complete download time was measured, including all linked assets like fonts, images, CSS, and JS files, but excluding requests initiated by JavaScript code.
The given page was downloaded 50 times in enforced HTTP/3 mode (to ensure that even the first request used HTTP/3) and 50 times with QUIC support disabled.
To make all tests consistent, a new container was spun up for each iteration.
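To illustrate the approach, below is a simplified, hypothetical sketch of such a measurement in Node.js with Puppeteer. It is not the exact script used for the tests; the URL and hostname are placeholders. HTTP/3 is enforced with Chrome's QUIC flags, and the total load time is read from the Navigation Timing API.

// measure.js - a single page-load measurement (hypothetical sketch)
const puppeteer = require('puppeteer');

const URL = 'https://example.com/';   // page under test (placeholder)
const HOST = 'example.com';           // its hostname (placeholder)

async function measure(forceHttp3) {
  // Either force QUIC/HTTP3 from the very first request, or disable it to fall back to HTTP/2
  const args = forceHttp3
    ? ['--enable-quic', `--origin-to-force-quic-on=${HOST}:443`]
    : ['--disable-quic'];

  const browser = await puppeteer.launch({ headless: true, args });
  const page = await browser.newPage();

  // Wait until the page and its linked assets have finished downloading
  await page.goto(URL, { waitUntil: 'networkidle0' });

  // Read the Navigation Timing entry exposed through DevTools
  const result = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType('navigation');
    return { loadTime: nav.loadEventEnd, protocol: nav.nextHopProtocol }; // ms, "h3" or "h2"
  });

  await browser.close();
  return result;
}

measure(true).then(({ loadTime, protocol }) =>
  console.log(`${protocol}: ${(loadTime / 1000).toFixed(3)} s`));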
Test locations
The website under test was hosted in a data center located in Northern Germany. The tests were executed from:
- Three different data centers of Hetzner Cloud (Nürnberg, Germany; Helsinki, Finland; Ashburn, Virginia, US);
- A laptop connected to the internet via optical fiber 1 Gbps over Wi-Fi (Wrocław, Poland);
- A laptop connected via a mobile 4G/LTE hotspot over Wi-Fi (Wrocław, Poland);
- A laptop connected via a mobile 4G/LTE hotspot over Wi-Fi (Wrocław, Poland), with poor network conditions simulated by adding an extra 100 ms of latency and ~15% packet loss (using the Linux traffic control utility, as sketched below).
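For reference, such an impairment can be applied with tc's netem queueing discipline roughly as in the sketch below; the interface name wlan0 is a placeholder for the client's actual network interface.

# add ~100 ms of extra latency and ~15% packet loss on the client side
sudo tc qdisc add dev wlan0 root netem delay 100ms loss 15%
# remove the impairment after the test run
sudo tc qdisc del dev wlan0 root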
Test results
All values are given in seconds and are visualized using violin plots, where the shape represents the actual distribution of download times.
The 90th percentile means that 10% of download times were slower than the given value.
The 99th percentile means that 1% of download times were slower than the given value.
Page load time from Nürnberg, Germany - Hetzner Cloud
Nürnberg, DE | HTTP/3 | HTTP/2 |
---|---|---|
median | 0.6350 | 0.5490 |
mean | 0.6472 | 0.8623 |
90th percentile | 0.7550 | 1.7660 |
99th percentile | 0.8750 | 2.0610 |
min | 0.4820 | 0.4200 |
max | 1.2630 | 2.1990 |
Page load time from Helsinki, Finland — Hetzner Cloud
Helsinki, FI | HTTP/3 | HTTP/2 |
---|---|---|
median | 0.6465 | 0.6950 |
mean | 0.6702 | 0.9317 |
90th percentile | 0.8040 | 1.7960 |
99th percentile | 0.9090 | 2.5000 |
min | 0.5360 | 0.5550 |
max | 0.9210 | 2.8420 |
Page load time from Virginia, US — Hetzner Cloud
Virginia, US | HTTP/3 | HTTP/2 |
---|---|---|
median | 1.2800 | 1.6740 |
mean | 1.3491 | 1.7942 |
90th percentile | 1.5220 | 2.1910 |
99th percentile | 1.9210 | 2.9370 |
min | 1.1430 | 1.2840 |
max | 2.1730 | 4.8710 |
Page load time from Wrocław, Poland — fiber 1Gbps
PL (Wrocław, Fiber) | HTTP/3 | HTTP/2 |
---|---|---|
median | 0.8390 | 0.8155 |
mean | 0.8856 | 1.0117 |
90th percentile | 1.0130 | 1.3150 |
99th percentile | 1.0140 | 1.3230 |
min | 0.7530 | 0.7270 |
max | 1.4280 | 2.9040 |
Page load time from Wrocław, Poland — mobile LTE
PL (Wrocław, LTE) | HTTP/3 | HTTP/2 |
---|---|---|
median | 1.6020 | 1.2035 |
mean | 1.7172 | 1.3895 |
90th percentile | 1.8740 | 1.5470 |
99th percentile | 2.5690 | 1.8190 |
min | 1.2350 | 0.8960 |
max | 2.7230 | 3.4310 |
Page load time from Wrocław, Poland — mobile low quality
PL (Wrocław, mobile ~15% packet loss) | HTTP/3 | HTTP/2 |
---|---|---|
median | 6.0280 | 8.9400 |
mean | 6.6856 | 12.8554 |
90th percentile | 9.3170 | 27.6380 |
99th percentile | 9.3490 | 28.4410 |
min | 4.0230 | 5.9210 |
max | 9.7740 | 36.7130 |
HTTP/3 wins unquestionably on intercontinental connections (US East Coast to Germany), with a mean download time about 25% lower. It also clearly outperforms HTTP/2 when clients use unstable mobile networks with high latency and packet loss, roughly halving the mean download time.
When the client and the server are close and the network is stable, HTTP/3 still shows a slight advantage in mean download time, although the medians are slightly lower for HTTP/2; on the plain LTE connection, HTTP/2 was faster overall.
All tests also reveal that HTTP/3 is more resilient and reliable: at the 90th and 99th percentiles, HTTP/3's slowest download times are still far better than HTTP/2's.
A Jupyter Notebook that aggregates the results can be found on GitHub.
How to enable HTTP/3 for your web application
There are several options available, depending on your infrastructure's constraints.
If your infrastructure does not allow replacing or updating the front HTTP server, the simplest solution is to put a third-party CDN service that supports HTTP/3 in front of it. Most major providers already do, including Cloudflare, Fastly, AWS CloudFront, and Google Cloud CDN.
Another option is to use a server that supports HTTP/3 as your application's HTTP server or as a reverse proxy. Note that QUIC cannot work without TLS; thus, it requires a valid TLS certificate. NGINX, covered in detail below, already supports HTTP/3, while some other popular servers lag behind.
HAProxy load balancer currently only supports HTTP/3 out-of-the-box in the enterprise version. However, HAProxy can be compiled from source with QUIC enabled.
At the time of writing this article, Apache HTTP Server sadly still has no QUIC support on its roadmap; there is a pending feature request.
Set up HTTP/3 in NGINX in a Docker container
The official NGINX images, like the downloadable binaries, have had built-in QUIC support since version 1.25.0. To enable HTTP/3, follow the steps below:
- Enable TLS 1.3. QUIC works with TLS 1.3 only. Append TLSv1.3 to the ssl_protocols directive, like the following:
ssl_protocols TLSv1.2 TLSv1.3;
- Reuse port 443 for the QUIC protocol:
listen 443 quic reuseport;
Note that reuseport cannot be specified more than once for the same address and port. So, when more than one virtual host listens on the same IP address and port, only one of them can carry the reuseport parameter.
- Add an Alt-Svc header to tell the browser that HTTP/3 is available. The very first request always uses HTTP/2; once the browser receives a response containing the Alt-Svc header, all subsequent requests use HTTP/3.
add_header Alt-Svc 'h3=":443"; ma=86400';
- The HTTP/3 implementation in NGINX does not forward the Host header nor proxy-specific headers, such as x-forwarded-for, x-forwarded-host, x-forwarded-proto, x-forwarded-port, and x-forwarded-prefix. They need to be set explicitly if your application requires them (see the consolidated configuration sketch after this list).
- Expose both TCP and UDP port 443 from the NGINX container. With docker run, as below:
docker run -p "80:80" -p "443:443/tcp" -p "443:443/udp" nginx
Alternatively, in the Dockerfile:
...
EXPOSE 80
# without an explicit protocol name, the default is TCP
EXPOSE 443/tcp
EXPOSE 443/udp
In docker-compose.yml, it is:
...
ports:
  - "80:80"
  - "443:443/tcp"
  - "443:443/udp"
- Check your firewall rules. Inbound UDP traffic on port 443 must be allowed.
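Putting the directives from the steps above together, a minimal reverse-proxy server block could look like the sketch below. It is only an illustration: the server name, certificate paths, and the upstream address (app:8080) are placeholders, not values from a real deployment.

server {
    # HTTP/3 over QUIC (UDP) and HTTP/1.1/HTTP/2 over TCP on the same port
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;                                   # NGINX >= 1.25.1; on 1.25.0 use "listen 443 ssl http2;" instead

    server_name example.com;                    # placeholder hostname

    ssl_certificate     /etc/nginx/certs/example.com.crt;   # placeholder certificate paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;              # QUIC requires TLS 1.3

    # Advertise HTTP/3 availability to the browser
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location / {
        proxy_pass http://app:8080;             # placeholder upstream application
        # HTTP/3 requests do not get these headers forwarded automatically
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host  $host;
        proxy_set_header X-Forwarded-Port  $server_port;
    }
}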
How to better inform the browser about HTTP/3 availability?
Typically, a browser makes the first connection with HTTP/2. It then finds out that HTTP/3 is available when it receives the Alt-Svc response header with a value matching the pattern:
Alt-Svc: h3=":<port>"; ma=<timeout_seconds>
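A quick way to confirm that the server actually sends this header is to inspect the response headers, for example with curl (the URL below is a placeholder):

# the header is returned over HTTP/1.1 or HTTP/2, so a plain curl build is enough
curl -sI https://example.com/ | grep -i alt-svc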
Besides that, there is a way to make the browser aware even beforehand, through a DNS SVCB/HTTPS record:
example.com 3600 IN HTTPS 1 . alpn="h3,h2"
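Whether such a record is already published for a domain can be checked with a reasonably recent dig, which understands the HTTPS record type (older versions need the generic TYPE65 form):

dig +short example.com HTTPS
# equivalent query on older dig versions
dig +short example.com TYPE65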
Unfortunately, this record is not widely supported by browsers yet. Nevertheless, according to Chromestatus, support is already in development for Chrome and Safari.
Conclusion
HTTP/3 is not as big a step forward as HTTP/2 was over HTTP/1.1 in terms of performance boost. HTTP/2 introduced stream multiplexing, allowing browsers to download multiple resources simultaneously on a single TCP connection.
HTTP/3 introduces evolutionary improvements, particularly for conditions such as high network latency and packet loss, where HTTP/2 runs a higher risk of head-of-line blocking. The tests confirmed that HTTP/3 performs much better under these conditions.
Is HTTP/3 ready for adoption? In my opinion, yes. Big players like Google, Cloudflare, and Shopify already use it in production, and the major browsers support it too. It could be especially beneficial for mobile users, since mobile networks tend to be unstable, particularly in rural areas. For online stores, that can translate into tangible advantages such as improved conversion rates and overall sales.
Postscript
The animal pictured at the top is a pronghorn. I want to use the opportunity to spread awareness: this antelope, the fastest-running animal in North America with a top speed of 95 km/h (55 mph), is endangered, and some of its subspecies are even close to extinction. Check out more information about this fascinating animal.