HTTP 1.0 vs HTTP 1.1 – Bandwidth Optimization

Hi Friends, as part of our series on the differences between HTTP 1.0 and HTTP 1.1, this article covers another important improvement in HTTP 1.1: bandwidth optimization.

Let's discuss the bandwidth optimization changes made in HTTP 1.1.

Bandwidth Optimization

As we know, bandwidth is precious for serving clients well and efficiently. If we do not use bandwidth effectively, data transfers suffer: inefficient use of bandwidth increases network latency, which degrades site performance.

So we should choose carefully the mechanisms through which we preserve bandwidth and use it effectively.

Bandwidth improvement is one of the key changes made in HTTP 1.1.

What was in HTTP 1.0

HTTP 1.0 had a few issues that made bandwidth wastage likely.

Sending large requests to server:

Every server is restricted to accepting requests only up to a specific size. If a request exceeds that limit, the server returns an error code, but only after the large request has already consumed bandwidth. All of that consumed bandwidth is wasted. HTTP 1.0 provides no way for the client and server to negotiate before a large request is sent.

Sending large responses to client:

There are cases where a client needs only part of a large document or image, but in HTTP 1.0 there is no way for the server to send partial data; it always sends the entire resource to the client. This is another example of bandwidth wastage that could be avoided.

Optimizations in HTTP 1.1

As discussed above, a client sometimes needs only part of a large document or image. In HTTP 1.0 there was no way to achieve this, but in HTTP 1.1 it is possible. Let's discuss how.

Range Requests:

With a range request, the client mentions in the request the range of bytes it wants to get. When the server processes this request, if it supports range requests, it transfers only the range of bytes requested by the client.

Following is an example of a range request header:

Range: bytes=0-1023

In the above example, the range from byte 0 to byte 1023 is requested from the server.

If a response contains a range, rather than the entire resource, it carries the 206 (Partial Content) status code. This code prevents HTTP/1.0 proxy caches from accidentally treating the response as a full one, and then using it as a cached response to a subsequent request. In a range response, the Content-Range header indicates the offset and length of the returned range, and the new multipart/byteranges MIME type allows the transmission of multiple ranges in one message.

Range requests can be used in a variety of ways, such as:

  1. To read the initial part of an image, to determine its geometry and therefore do page layout without loading the entire image.
  2. To complete a response transfer that was interrupted (either by the user or by a network failure); in other words, to convert a partial cache entry into a complete response.
  3. To read the tail of a growing object.
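
To make the mechanics concrete, here is a minimal sketch in Python of the server side of a single-range request. It is not tied to any real server framework and, as an assumption for brevity, it handles only the plain "bytes=start-end" form of the Range header:

```python
def handle_range(resource, range_header):
    """Serve a simple single-range GET.

    Returns (status, headers, body). Only the "bytes=start-end" form is
    handled; a real server must also support open-ended ranges ("bytes=500-"),
    suffix ranges ("bytes=-500") and multiple ranges.
    """
    total = len(resource)
    if not range_header or not range_header.startswith("bytes="):
        # No usable Range header: serve the full resource with 200.
        return 200, {"Content-Length": str(total)}, resource

    start_s, _, end_s = range_header[len("bytes="):].partition("-")
    start, end = int(start_s), int(end_s)
    if start > end or start >= total:
        # Unsatisfiable range: 416 with the valid total length advertised.
        return 416, {"Content-Range": "bytes */%d" % total}, b""

    end = min(end, total - 1)
    body = resource[start:end + 1]
    headers = {
        # Content-Range carries the offset and total length of the range.
        "Content-Range": "bytes %d-%d/%d" % (start, end, total),
        "Content-Length": str(len(body)),
    }
    return 206, headers, body  # 206 Partial Content
```

For instance, handle_range(data, "bytes=0-1023") returns a 206 response carrying only the first 1024 bytes of data, with Content-Range reporting the offset and the total resource length.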

Expect and 100 (Continue):

Whenever a client wants to send a large request body, it can first send the request headers with an Expect: 100-continue header. This asks the server for approval before the client continues with the body. If the server is fine with the client's expectation, it returns a 100 (Continue) interim response and the client continues. If the server cannot meet the expectation, it sends a 417 (Expectation Failed) or another 4xx status code, and the client does not continue with the request body.

Let’s consider one example:

Assume we have a client that wants to transmit a video file to a server. The file may be very large, and the server may not be able to process it. To check whether the file length is acceptable to the server, the client sends a request with an Expect header.

A client sends a request with Expect header and waits for the server to respond before sending the message body.

PUT /somewhere/fun HTTP/1.1
Host: origin.example.com
Content-Type: video/h264
Content-Length: 1234567890987
Expect: 100-continue

The server now checks the request headers and may respond with a 100 (Continue) response to instruct the client to go ahead and send the message body, or it will send a 417 (Expectation Failed) status if any of the expectations cannot be met.

This way the server saves its bandwidth. In HTTP 1.0, the request would have been transferred to the server with its entire body; the bandwidth would have been consumed, and only then could the server reject the request, a complete waste of bandwidth.
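
The server's side of this handshake can be sketched as a check on the declared Content-Length before any body bytes are read. This is only an illustration; the 1 GB limit below is an arbitrary assumption, not anything mandated by the protocol:

```python
MAX_BODY_BYTES = 1_000_000_000  # hypothetical server limit (about 1 GB)

def respond_to_expect(headers):
    """Choose the interim response for a request that may carry Expect.

    Returns 100 (Continue) if the server is willing to read the body,
    417 (Expectation Failed) if the declared length is too large, or
    None when no expectation was stated.
    """
    if headers.get("Expect", "").lower() != "100-continue":
        return None  # no expectation to answer
    declared = int(headers.get("Content-Length", "0"))
    return 100 if declared <= MAX_BODY_BYTES else 417
```

With the request shown above (Content-Length: 1234567890987), this sketch would answer 417 and the client would never transmit the huge body.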


In the next articles we'll discuss a few more differences between HTTP 1.0 and 1.1.


HTTP 1.0 vs HTTP 1.1 – Caching

Hi Friends, as we are discussing the differences between HTTP 1.0 and HTTP 1.1, I am trying to cover one difference per article so that each is easy to grasp. HTTP 1.1 caching is an important aspect to learn, and it impacts web behavior.

Previously we discussed the compatibility changes that were made as part of HTTP 1.1.

In this article we are going to discuss the HTTP 1.1 caching changes.


Caching is a technique through which you can preserve the most frequently requested resources on the server or the client. Once these resources are preserved, whenever a new client requests the same resource, the server returns the preserved copy rather than processing the request again.

This technique gives a few benefits.

  1. The server does not need to process the same resource again, so server performance improves.
  2. Cached responses reduce the traffic that must cross the network, so network latency improves for other requests as well.
  3. With less network congestion, the server can handle more requests, which is a cost-effective outcome.

Caching existed in HTTP 1.0 as well, but it was not mature, which caused a couple of issues.

Caching in HTTP 1.0

The HTTP 1.0 caching mechanism worked, but it had many shortcomings. It did not allow either servers or clients to give full and explicit instructions to caches, so caching was not well specified. Because of this, we had the following issues.

  1. Incorrect caching of some responses that should not have been cached, which led to unexpected responses.
  2. Failure to cache some responses that could have been cached, which caused performance problems.

Expires Header:

HTTP/1.0 provided a simple caching mechanism. An origin server may mark a response, using the Expires header, with a time until which a cache could return the response.

After the date and time mentioned above, the cache entry expires and should not be served to the client without first being revalidated with the origin server.
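
A cache's freshness check against Expires is just a date comparison. Here is a minimal sketch using only the Python standard library (the date in the example below is arbitrary):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def is_fresh(expires_header, now=None):
    """Return True while a cached response with this Expires value may
    still be served without revalidating against the origin server."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now < parsedate_to_datetime(expires_header)
```

For example, is_fresh("Wed, 21 Oct 2015 07:28:00 GMT") is False today, so a conforming cache would revalidate the entry with the origin server before using it.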

If-Modified-Since, Last-Modified headers:

A cache may check the current validity of a response using what is known as a conditional request: it may include an If-Modified-Since header in a request for the resource, specifying the value given in the cached response’s Last-Modified header.

Below is the syntax for this header:

If-Modified-Since: <day-name>, <day> <month> <year> <hour>:<minute>:<second> GMT

The If-Modified-Since request header makes the request conditional: the server will send back the requested resource, with a 200 status, only if it has been modified after the given date. If the resource has not been modified since that date, the response will be a 304 (Not Modified) without any body; the Last-Modified header will contain the date of the last modification.
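
The server's decision for such a conditional request can be sketched as a comparison of the two HTTP dates:

```python
from email.utils import parsedate_to_datetime

def conditional_get(last_modified, if_modified_since):
    """Decide a conditional GET: 200 with the full body, or 304 without one.

    Both arguments are HTTP-date strings; last_modified is the resource's
    current Last-Modified value, if_modified_since comes from the request.
    """
    if if_modified_since is None:
        return 200  # unconditional request: always send the full response
    changed = (parsedate_to_datetime(last_modified)
               > parsedate_to_datetime(if_modified_since))
    return 200 if changed else 304
```

If the resource changed after the client's date the full 200 response is sent; otherwise the server answers 304 and saves the whole response body.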

Pragma: no-cache header:

The Pragma: no-cache header lets the client indicate that a request should not be satisfied from a cache.

Every such response is then processed fresh by the origin server.

Caching in HTTP/1.1

HTTP 1.1 caching clarifies the concepts behind caching and provides more mature caching mechanisms. It retains the basic HTTP 1.0 caching, adds new features, and specifies the existing features more carefully.

In HTTP 1.1, a cache entry is fresh until it reaches its expiration time. Once it expires, it should not be served directly, but it need not be deleted from the cache either: the cache must normally revalidate it with the origin server before returning it in response to a subsequent request. However, the protocol allows both origin servers and end-user clients to override this basic rule.

ETag, If-None-Match Headers:


The ETag, or entity tag, is part of HTTP caching and allows a client to make conditional requests. This makes caches more efficient and saves bandwidth, as a web server does not need to send a full response if the content has not changed.

When a URL is retrieved, the web server will return the resource’s current representation along with its corresponding ETag value, which is placed in an HTTP response header “ETag” field:

ETag: “686897696a7c8745rt5”

The client may then decide to cache the representation along with its ETag. Later, if the client wants to retrieve the same URL again, it first determines whether its locally cached version has expired (through the Cache-Control and Expires headers). If the cache has not expired, the client uses the local cached resource. If the cache has expired (is stale), the client contacts the server and sends its previously saved copy of the ETag along with the request in an “If-None-Match” field.

If-None-Match: “686897696a7c8745rt5”

On this subsequent request, the server may now compare the client’s ETag with the ETag for the current version of the resource. If the ETag values match, meaning that the resource has not changed, then the server may send back a very short response with an HTTP 304 Not Modified status. The 304 status tells the client that its cached version is still good and that it should use that.

However, if the ETag values do not match, meaning the resource has likely changed, then a full response including the resource’s content is returned, just as if ETags were not being used. In this case the client may decide to replace its previously cached version with the newly returned representation of the resource and the new ETag.
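
This exchange can be sketched on the server side as follows. Hashing the body is one common way to derive an ETag, though the protocol does not mandate any particular scheme:

```python
import hashlib

def make_etag(body):
    """Derive a strong ETag from the response body (a content hash)."""
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def conditional_response(body, if_none_match):
    """Answer a GET that may carry If-None-Match.

    Returns (status, etag, payload): 304 with an empty payload when the
    client's ETag still matches, otherwise 200 with the full body.
    """
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, etag, b""   # client's cached copy is still good
    return 200, etag, body      # changed, or first request: full response
```

The first request returns 200 with the body and an ETag; replaying that ETag in If-None-Match yields a 304 with no body as long as the content is unchanged.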

The Cache-Control Header:

The client can use standard Cache-Control directives in an HTTP request, and there are many variations. Below is the list of request directives.

  • Cache-Control: max-age=<seconds>
  • Cache-Control: max-stale [=<seconds>]
  • Cache-Control: min-fresh=<seconds>
  • Cache-Control: no-cache
  • Cache-Control: no-store
  • Cache-Control: no-transform
  • Cache-Control: only-if-cached

Below is the list of response directives.

  • Cache-Control: must-revalidate
  • Cache-Control: no-cache
  • Cache-Control: no-store
  • Cache-Control: no-transform
  • Cache-Control: public
  • Cache-Control: private
  • Cache-Control: proxy-revalidate
  • Cache-Control: max-age=<seconds>
  • Cache-Control: s-maxage=<seconds>
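
Cache-Control is a comma-separated list of such directives, some carrying an =value part. A small parsing sketch (it handles quoted values only by stripping the quotes):

```python
def parse_cache_control(header):
    """Parse a Cache-Control header value into a directive -> value dict.

    Valueless directives such as no-store or public map to None.
    """
    directives = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.lower()] = value.strip('"') if value else None
    return directives
```

For example, parse_cache_control("max-age=3600, private") yields {"max-age": "3600", "private": None}.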

In upcoming articles, we'll discuss all these directives in detail.

The Vary header:

A cache finds a cache entry by using a key value in a lookup algorithm. HTTP/1.0 uses just the requested URL as the cache key. But this is not a perfect model, as a response may vary not only based on the URL, but also based on one or more request headers (such as Accept-Language and Accept-Charset).

To support this type of caching, HTTP/1.1 includes the Vary response header. This header field carries the list of request-header fields that participated in selecting the response variant. A cached resource is returned only if the new request matches the cached request on those headers; otherwise the server does its normal processing to return the resource.
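
A cache key that honors Vary can be sketched as the URL plus the values of the request headers the Vary field names. The sketch below assumes request headers are stored in a dict keyed by lowercase header name:

```python
def cache_key(url, request_headers, vary_header):
    """Build a cache lookup key from the URL and the headers named by Vary.

    request_headers is assumed to be a dict keyed by lowercase header name.
    Without Vary, the key degenerates to the HTTP 1.0 model: the URL alone.
    """
    if not vary_header:
        return (url,)
    selecting = sorted(h.strip().lower() for h in vary_header.split(","))
    return (url,) + tuple(
        (name, request_headers.get(name, "")) for name in selecting
    )
```

Two requests for the same URL with different Accept-Language values thus produce different keys, so the cache stores and serves separate variants.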

In addition to the above headers, there are a few more important headers, e.g. If-Unmodified-Since and If-Match, but those are generally used in more complex scenarios. We'll cover them separately.

That’s all about caching. Please note that this is a very simple, basic overview of the caching changes in HTTP 1.1. There is a long list of more complex changes; we'll cover those in other articles.