HTTP 1.0 vs HTTP 1.1 – Compatibility

Hi friends, today we are going to discuss the differences between HTTP 1.0 and HTTP 1.1. Version 1.0 was successful and is still used by many websites, but it had a few shortcomings that were fixed in version 1.1. The new version brought major improvements in performance, bandwidth consumption, and more.

As we know, HTTP is an application-level protocol that works on top of TCP. HTTP uses TCP connections to transfer data between client and server. If you are not very familiar with HTTP, you can check here.

Now let’s come to the differences between these two protocols. The list of HTTP 1.0 vs HTTP 1.1 differences is long, so I am breaking this study into multiple articles to make each difference easy to grasp.

Let’s discuss the first difference, i.e. compatibility with older versions.

HTTP 1.0 vs HTTP 1.1 – Compatibility

Once version 1.0 was released, it took another four years to release version 1.1. During those four years many 1.1 drafts were released, and people started using them. While working with these drafts, issues and improvement areas were constantly reported and fixed. By the time the final version was released, many websites were already running on some draft version. Although those drafts had issues, the final version could not simply ignore the sites already built on them.

It was necessary to make the final version compatible with HTTP 1.0 and all the draft versions so that nothing would break.

In addition, HTTP 1.1 was designed so that it would also be compatible with future versions.

A couple of changes are listed below.

Version Numbers

As we know, each HTTP message has an HTTP version associated with it. These version numbers are hop-to-hop, not end-to-end. For example, suppose a client on HTTP 1.0 sends a request to a server on HTTP 1.1, and the request passes through multiple hops in between. The client sent the request with HTTP 1.0 in the message, but the hop just before the server was using HTTP 1.1. So when the server receives the request, it will see HTTP 1.1 in the request line.


There is no way for server to know the actual client http version.

To resolve this issue, a new header was introduced: “Via”. This header records the HTTP versions used along the transmission path, so the server can learn the HTTP version of the end client.


Below is an example of this header.

Via: 1.0 lazy, 1.1 p.example.net


HTTP/1.1 introduces the OPTIONS method, a way for a client to learn about the capabilities of a server without actually requesting a resource.

For example, the client sends an OPTIONS request for a resource (or “*” to ask about the server as a whole), and the server replies with headers describing its capabilities, such as an Allow header listing the supported methods.

From such a response the client can see that the server supports, for instance, the OPTIONS, GET, HEAD, and POST methods.
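The exchange above can be sketched as a small, runnable demo using only Python’s standard library. The local test server, its port, and the advertised method list here are assumptions for illustration, not a real production server:

```python
# Sketch: a minimal local demo of the OPTIONS method, assuming a
# server that advertises OPTIONS, GET, HEAD, POST via the Allow header.
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Advertise the methods this server supports via the Allow header.
        self.send_response(200)
        self.send_header("Allow", "OPTIONS, GET, HEAD, POST")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output clean

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "*")        # "*" asks about the server as a whole
resp = conn.getresponse()
print(resp.status, resp.getheader("Allow"))  # 200 OPTIONS, GET, HEAD, POST
conn.close()
server.shutdown()
```

Note that no resource body is transferred; the client learns the server’s capabilities purely from the response headers.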

Upgrading to other protocols

In order to ease the deployment of incompatible future protocols, HTTP/1.1 includes the new Upgrade request-header. By sending the Upgrade header, a client can inform a server of the set of protocols it supports as an alternate means of communication. The server may choose to switch protocols, but this is not mandatory.

The Upgrade header field is an HTTP header field introduced in HTTP/1.1. In the exchange, the client begins by making a cleartext request, which is later upgraded to a newer HTTP protocol version or switched to a different protocol.

A connection upgrade must be requested by the client; if the server wants to enforce an upgrade, it may send a 426 Upgrade Required response. The client can then send a new request with the appropriate upgrade headers while keeping the connection open. This is how protocol switching happens.
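As an illustration, a minimal upgrade exchange might look like the sketch below. The path and the target protocol (a WebSocket-style upgrade) are assumptions for illustration; the 101 Switching Protocols status is what a server sends when it agrees to switch:

```http
GET /chat HTTP/1.1
Host: example.com
Connection: Upgrade
Upgrade: websocket

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: websocket
```

After the 101 response, the same TCP connection continues under the new protocol.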

In the next article we’ll cover caching improvements in HTTP 1.1.


Keep-Alive header, usage and benefits

Hi friends, today we are going to discuss keep-alive connections and their significance. First of all, let’s talk a little about how a website works and how connections are maintained.

How does a website work?

In general, the flow of any web request looks like this:


  1. The user types an address in a browser, e.g. www.google.com.
  2. The browser sends a request to the web server.
  3. The web server creates a new process or assigns a thread to handle this request.
  4. The web server processes the request and generates the response.
  5. The response is sent back to the client.
  6. The assigned process or thread is now free to receive other requests.
  7. The browser displays the response to the user.

This is a very basic, high-level view of the process.

Overview of TCP connection

Whenever two machines communicate with each other (in our example, between the client and the web server), a connection is created. The machines use this connection to talk to each other; it is called a TCP connection (or TCP pipe).

A TCP connection maintains the source and destination IP addresses and port information so that the connection can be created and resources can be transferred.

You can get more information about TCP connection and layers here.

As we know, a webpage can have multiple resources like CSS files, JS files, images, etc. When we open a webpage, a new connection is created for each resource. So if a web page contains 10 images, 4 CSS files, and 2 JS files, a new connection will be created for each of these 16 files and then the resource will be loaded. Once a resource finishes downloading, its connection gets closed.

What is Keep-Alive?

As we just saw, for each web resource a new connection is created, the resource is transferred, and then the connection gets closed. This process is simple, but a little inefficient.

If the requests go to the same server, why can’t we keep the connection open so that all the required resources can be transferred over the same connection? Once all the resources are delivered, the connection can be closed.

Yes, you got it right. Keep-alive does exactly that. When we instruct the server to keep the TCP connection alive, the connection is not terminated after a resource transfer; other files can be transferred over the same connection.
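A runnable sketch of this reuse, using only Python’s standard library: a tiny local HTTP/1.1 server (the server, port, and paths are assumptions for illustration) serves two requests, and we check that both went over the same TCP socket:

```python
# Sketch: demonstrate keep-alive connection reuse with the stdlib.
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 => keep-alive by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output clean

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/style.css")
conn.getresponse().read()        # drain the body before reusing the socket
first_socket = conn.sock         # remember the underlying TCP socket

conn.request("GET", "/logo.png") # second request on the SAME connection
conn.getresponse().read()
reused = conn.sock is first_socket
print(reused)                    # True: the TCP connection was reused

conn.close()
server.shutdown()
```

With an HTTP/1.0 server, or a response carrying “Connection: close”, the second request would have needed a fresh TCP handshake instead.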


Benefits of using Keep-Alive

Now we know that with keep-alive we can transfer multiple resources over one connection; we don’t need a new connection for every resource. This approach brings the following benefits.

CPU Usage:

Creating a TCP connection is work the CPU has to do for every resource. By using keep-alive we reduce this CPU load, so the CPU can be utilized more efficiently.

HTTPS Connection utilization:

When a site is served over HTTPS, reusing one connection is especially beneficial, because creating a new connection involves an expensive handshake.

Webpage load speed:

Since more files can be transferred over a single connection, transfer overhead drops and the page loads faster. Repeatedly creating connections can slow down website load speed.

How to enable the Keep-Alive header?

Let’s see how we can enable this setting for any website.

Open IIS on your web server and select the site whose setting you want to change.


Double-click “HTTP Response Headers”.

Click “Set Common Headers”. A dialog box will open. Check the setting “Enable HTTP Keep-Alive”.

If we uncheck this box, a “Connection: close” header will be sent to the client, indicating that the connection will be closed once the response is sent.
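The same switch can also be set in configuration rather than through the IIS UI. A minimal sketch, assuming IIS 7 or later with the system.webServer section available in the site’s web.config:

```xml
<!-- Sketch: keep-alive toggle in IIS via web.config.
     Setting allowKeepAlive="false" makes IIS send "Connection: close". -->
<configuration>
  <system.webServer>
    <httpProtocol allowKeepAlive="true" />
  </system.webServer>
</configuration>
```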

Any disadvantages of using keep-alive?

Now we know that having a single connection for all resource files from a particular source to a destination is a good idea. But can we really enable this setting blindly?

No, there are a few points to consider before enabling it. From the client side we are fine if the connection remains open all the time, but this can be painful for the server.

In a normal configuration, the web server closes the connection once the resource is delivered. Because of this, the server can handle lots of requests, as its resources are freed up after each file is done. But if connections remain open, some server resources stay occupied maintaining them.

So if thousands of clients keep their connections open for a long time, the web server may face serious issues: lots of server resources will be occupied and performance will be reduced.

So think twice before enabling this setting. If you believe it will improve your overall experience, go ahead and enable it. But there are a few factors that affect this setting.

Let’s discuss those factors so that you can use this setting efficiently.

Factors that impact Keep-Alive

Maximum requests per connection

This setting caps the number of requests that can be served over a single connection.

A value of 100 is normally good enough for almost any scenario. It can, however, be increased depending on the number of files within a web page that the server is supposed to deliver. If a webpage contains 100 files, this value can be raised accordingly.

Keep-Alive timeout

This setting controls how long an idle connection should remain open on the server. Once a connection has been idle for longer than this timeout, the server should close it.

A value between 7 and 10 seconds is usually ideal. With higher traffic this value can be raised considerably to make sure TCP connections are not re-initiated too frequently. If the value goes down too much, Keep-Alive loses its purpose.
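As an illustration, in Apache HTTP Server (a different web server from the IIS example above) these two factors map directly to configuration directives; the values below simply mirror the numbers discussed above:

```apache
# Sketch: persistent-connection tuning in Apache httpd.
KeepAlive On
MaxKeepAliveRequests 100   # max requests served per connection
KeepAliveTimeout 10        # seconds an idle connection stays open
```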

Browser support

All modern browsers use persistent connections as long as the server has no objection.

In HTTP/1.0, this behavior was disabled by default, though it could be enabled. In HTTP/1.1, Keep-Alive is implemented differently, and connections are kept open by default: HTTP/1.1 connections are persistent unless stated otherwise.

To disable this in HTTP/1.1, we need to send the response header “Connection: close”. If we do not send this header, the connection will remain open, though not forever.
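The effect of “Connection: close” can be seen from the client side with a short stdlib sketch; the local server and its response are assumptions for illustration:

```python
# Sketch: a server that sends "Connection: close"; the client observes
# that the connection cannot be reused for further requests.
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.send_header("Connection", "close")  # tell client: no reuse
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output clean

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
print(resp.will_close)   # True: this connection will be torn down
server.shutdown()
```

Without the “Connection: close” header, the same HTTP/1.1 exchange would leave the connection open for further requests, subject to the server’s timeout and request limits.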