HTTP2 Tutorial - HTTP/2 Feature Upgrades
Before we formally introduce the features of HTTP/2, let's take a brief detour to understand its past and present.
Multiplexing Streams
A bidirectional sequence of frames exchanged between a server and a client over an HTTP/2 connection is called a "stream". Earlier versions of the HTTP protocol could effectively transmit only one request-response exchange at a time per connection, with a delay between each one.
Receiving tons of media content through a single stream sent piece by piece is both inefficient and resource consuming. Changes in HTTP/2 help to establish a new binary framing layer to solve these problems.
This layer allows clients and servers to break the HTTP payload into a sequence of small, independent and manageable interleaved frames, and then reassemble this information at the other end.
The binary frame format allows multiple independent, bidirectional streams to be open and exchanged simultaneously, without delay between consecutive streams. This approach delivers a number of the benefits of HTTP/2:
- Requests and responses that are multiplexed in parallel will not block each other.
- Although multiple data streams are transmitted, a single TCP connection is used to ensure efficient utilization of network resources.
- There is no need to apply unnecessary optimization tricks – such as image sprites, concatenation, and domain sharding – which can hurt other areas of network performance.
- Lower latency, faster network performance, and better search engine rankings.
- Reduced operational and capital expenses for running network and IT resources.
Using this feature, frames from multiple streams are essentially mixed and transmitted over a single TCP connection. These frames are then separated on the receiving end and presented as independent data streams. Transmitting multiple parallel requests simultaneously with HTTP/1.1 or earlier requires multiple TCP connections, which inherently limits overall network performance.
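The mixing and splitting described above can be illustrated with a toy sketch. This is not real HTTP/2 frame encoding; it simply shows the idea of interleaving chunks from several streams onto one "wire" and regrouping them by stream identifier at the other end:

```python
from collections import defaultdict
from itertools import zip_longest

def interleave(streams):
    """Mix frames from several streams, as HTTP/2 does on one TCP connection.

    `streams` maps a stream id to a list of payload chunks (frames).
    Returns a single flat sequence of (stream_id, chunk) pairs.
    """
    wire = []
    per_stream = ([(sid, chunk) for chunk in chunks]
                  for sid, chunks in streams.items())
    for round_of_frames in zip_longest(*per_stream):
        wire.extend(f for f in round_of_frames if f is not None)
    return wire

def demultiplex(wire):
    """Regroup interleaved frames into per-stream payloads by stream id."""
    out = defaultdict(list)
    for sid, chunk in wire:
        out[sid].append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in out.items()}
```

Because every frame carries its stream identifier, the receiver can always reassemble each stream correctly, no matter how the frames were interleaved in transit.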
Check out the article "What are the limitations of HTTP1.1" to learn about the limitations of HTTP/1.1 and why HTTP/2 was developed.
Binary Framing Layer
At the heart of all the performance enhancements in HTTP/2 lies the new binary framing layer, which defines how HTTP messages are encapsulated and transmitted between clients and servers.
The so-called "layer" here refers to a new optimized encoding mechanism located between the socket interface and the high-level HTTP API visible to the application: the semantics of HTTP (including various verbs, methods, and headers) are not affected, but the encoding method during transmission has changed. The HTTP/1.x protocol uses newline characters as delimiters for plain text, while HTTP/2 divides all transmitted information into smaller messages and frames and encodes them in binary format.
In this way, the client and the server must both use the new binary encoding mechanism in order to understand each other: an HTTP/1.x client cannot understand a server that only supports HTTP/2, and vice versa. Fortunately, existing applications need not worry about these changes, because the client and server perform the necessary framing on our behalf.
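The frame encoding that this layer defines is quite small: every HTTP/2 frame begins with a fixed 9-octet header (RFC 7540, section 4.1) holding a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier. A minimal sketch of packing and unpacking that header:

```python
import struct

def pack_frame_header(length, frame_type, flags, stream_id):
    """Serialize the 9-octet HTTP/2 frame header: 24-bit payload length,
    8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream identifier."""
    return (struct.pack(">I", length)[1:]              # low 3 bytes = 24-bit length
            + struct.pack(">BBI", frame_type, flags,
                          stream_id & 0x7FFFFFFF))     # clear the reserved bit

def unpack_frame_header(header):
    """Parse a 9-octet frame header back into its four fields."""
    length = int.from_bytes(header[:3], "big")
    frame_type, flags, sid = struct.unpack(">BBI", header[3:9])
    return length, frame_type, flags, sid & 0x7FFFFFFF
```

The stream identifier in every header is what makes the interleaving of the previous section possible: the receiver reads nine octets, knows exactly how long the payload is and which stream it belongs to, and needs no text delimiters at all.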
Data flow priority
By breaking HTTP messages into many independent frames, we can interleave frames from multiple data streams, and the order in which the client and server transmit these frames becomes a key performance determinant. To support this, the HTTP/2 standard allows each data stream to have an associated weight and dependency:
- Each data stream can be assigned an integer weight between 1 and 256.
- Each data flow can have explicit dependencies on other data flows.
The combination of data stream dependencies and weights allows the client to build and communicate a "priority tree" indicating how it prefers to receive responses. In turn, the server can use this information to prioritize data stream processing by controlling the allocation of CPU, memory, and other resources. Once the response data is available, bandwidth allocation ensures that high-priority responses are delivered to the client optimally.
However, in the real world, servers rarely have fine-grained control over resources such as CPU and database connections, and implementation complexity alone keeps many servers from acting on stream priority requests. Research and development in this area is particularly important to the long-term success of HTTP/2, because the protocol handles multiple data streams over a single TCP connection. Requests that arrive at the server simultaneously may carry very different priorities from the end user's perspective; stalling data stream processing at random would undermine the efficiency and end-user experience that HTTP/2 promises. An intelligent and widely adopted stream prioritization mechanism, by contrast, demonstrates the benefits of HTTP/2:
- Efficient use of network resources.
- Reduced time to deliver primary content requests.
- Improved page loading speed and end-user experience.
- Optimize data communication between client and server.
- Reduce the negative impact of network latency issues.
Server Push
This feature allows a server to send additional cacheable information to a client that was not requested but is expected to be present in future requests. For example, if a client requests resource X and knows that resource Y is referenced by the requested file, the server may choose to push Y along with X rather than waiting for an appropriate client request.
The client places the pushed resource Y into its cache for future use. This mechanism saves request-response round trips and reduces network latency. Server push was originally introduced in Google's SPDY protocol. The server initiates a push by sending a PUSH_PROMISE frame on the original request's stream; the frame carries the request headers (including pseudo-headers such as :path) of the resource to be pushed, and pushed resources must be cacheable. The client must explicitly allow server push in its connection settings, and it can decline an individual push by terminating the pushed stream.
Server push can also be used to proactively update or invalidate the client's cache, sometimes called "cache push". A long-term concern is the server's ability to avoid pushing resources that the client does not actually want.
Server push in HTTP/2 implementations provides significant performance benefits:
- The client saves the pushed resources in the cache.
- Clients can reuse these cached resources across different pages.
- The server can reuse the pushed resources along with the originally requested information within the same TCP connection.
- Servers can prioritize pushed resources – a key performance differentiator between HTTP/2 and HTTP/1.
- Clients can decline pushed resources in order to keep their cache of stored resources efficient, or disable server push altogether.
- The client can also limit the number of concurrently multiplexed push streams.
Similar push functionality has long been approximated through suboptimal techniques such as inlining resources into server responses; server push provides a protocol-level solution that keeps these optimization tricks out of the application itself. HTTP/2 multiplexes and prioritizes pushed data streams just like other request-response data streams, ensuring better transmission performance. As a built-in safety mechanism, the server must be authorized in advance to push resources.
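The round trips saved by push can be seen in a toy simulation. The resource paths and push map below are invented for illustration; a real server announces pushes with PUSH_PROMISE frames, not Python dictionaries:

```python
# Hypothetical resources: /index.html references /style.css, so the
# server chooses to push the stylesheet alongside the page.
PUSH_MAP = {"/index.html": ["/style.css"]}
RESOURCES = {"/index.html": b"<html>", "/style.css": b"body{}"}

def serve(path):
    """Return the requested resource plus any resources pushed with it."""
    pushed = {p: RESOURCES[p] for p in PUSH_MAP.get(path, [])}
    return RESOURCES[path], pushed

class Client:
    def __init__(self):
        self.cache = {}
        self.round_trips = 0

    def fetch(self, path):
        if path in self.cache:        # cache hit: no network round trip
            return self.cache[path]
        self.round_trips += 1
        body, pushed = serve(path)
        self.cache.update(pushed)     # store pushed resources for later use
        self.cache[path] = body
        return body
```

Fetching the page and then the stylesheet costs only one round trip, because the stylesheet arrived in the client's cache before it was ever requested.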
Header Compression
Providing a high-end web user experience requires content- and graphics-rich websites. The HTTP application protocol is stateless, meaning that each client request must contain as much information as the server needs to perform the desired action. This mechanism results in the data stream carrying multiple repeated frames of information so that the server itself does not have to store information from previous client requests.
In the case of websites providing rich media content, clients send multiple, nearly identical header frames, causing latency and unnecessary consumption of limited network resources. Without optimizing this mechanism, the prioritized multiplexing of data streams cannot achieve the desired parallel performance.
HTTP/2 solves these problems by providing the ability to compress frames with large amounts of redundant headers. It uses the HPACK specification as a simple and safe method for header compression. Both the client and server maintain a list of headers that were used in previous client-server requests.
HPACK compresses the individual values of each header before transmission, and the receiver looks up the encoded entries against its list of previously transmitted header values to reconstruct the complete header information. HPACK header compression gives HTTP/2 implementations huge performance benefits, including some of the advantages of HTTP/2 explained below:
- Effective stream prioritization.
- Effective use of the multiplexing mechanism.
- Reduced resource overhead – one of the earliest areas of focus in the HTTP/2 vs. HTTP/1 and HTTP/2 vs. SPDY discussions.
- Large and commonly used headers are encoded without sending the entire header frame each time, rapidly shrinking the transfer size of each data stream.
- Resistance to security attacks, such as CRIME, that exploit compressed header data.
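The core idea of HPACK's shared header list can be sketched in a few lines. This is a deliberately simplified toy: real HPACK (RFC 7541) adds a fixed static table, a size-bounded dynamic table, and Huffman coding of literals, all of which are omitted here:

```python
class HeaderTable:
    """Toy sketch of HPACK-style indexing: both peers keep an indexed
    table of header fields, so a repeated field is sent as a small
    integer index instead of its full name and value."""

    def __init__(self):
        self.table = []  # list of (name, value); index = position

    def encode(self, name, value):
        entry = (name, value)
        if entry in self.table:                 # seen before: send the index
            return ("indexed", self.table.index(entry))
        self.table.append(entry)                # first time: send the literal
        return ("literal", name, value)

    def decode(self, token):
        if token[0] == "indexed":
            return self.table[token[1]]
        _, name, value = token
        self.table.append((name, value))        # mirror the sender's table
        return (name, value)
```

Because the encoder and decoder update their tables in lockstep, a bulky header like a user-agent string crosses the wire in full only once; every later request refers to it by index.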
Reference article: https://hpbn.co/http2