  • How are WebSockets implemented?
  • What is the algorithm behind this new tech (in comparison to Long-Polling)?
  • How can they be better than Long-Polling in terms of performance?

I am asking these questions because we have some sample code here of a Jetty WebSocket implementation (server-side).

If we wait long enough, a timeout will occur, resulting in the following message on the client.

And that is definitely the problem I'm facing when using Long-Polling. It stops the process to prevent server overload, doesn't it?


1 Answer

How are WebSockets implemented?

webSockets are implemented as follows:

  1. Client makes HTTP request to server with "upgrade" header on the request
  2. If the server agrees to the upgrade, then client and server exchange some security credentials and the protocol on the existing TCP socket is switched from HTTP to webSocket (a sketch of this exchange follows the list).
  3. There is now a lasting open TCP socket connecting client and server.
  4. Either side can send data on this open socket at any time.
  5. All data must be sent in a very specific webSocket packet format.
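
For concreteness, the upgrade exchange in steps 1 and 2 looks roughly like this on the wire (the path and the key/accept values below are the sample ones from RFC 6455, not anything from the question's Jetty code):

    GET /chat HTTP/1.1
    Host: server.example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, no more HTTP is spoken on that TCP socket; both sides switch to exchanging webSocket frames.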

Because the socket is kept open as long as both sides agree, this gives the server a channel to "push" information to the client whenever there is something new to send. This is generally much more efficient than using client-driven Ajax calls where the client has to regularly poll for new information. And, if the client needs to send lots of messages to the server (perhaps something like a multi-player game), then using an already-open socket to send a quick message to the server is also more efficient than an Ajax call.
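
As a rough illustration of that last point, here is a minimal client sketch using the WebSocket support built into java.net.http (Java 11+). The ws://localhost:8080/events endpoint is just a placeholder, not anything from the question's Jetty sample:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.util.concurrent.CompletionStage;

    public class PushClient {
        public static void main(String[] args) throws Exception {
            // The listener is invoked whenever the server pushes a frame on the open socket.
            WebSocket.Listener listener = new WebSocket.Listener() {
                @Override
                public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                    System.out.println("server pushed: " + data);
                    webSocket.request(1);   // ask the implementation to deliver the next message
                    return null;
                }
            };

            WebSocket ws = HttpClient.newHttpClient()
                    .newWebSocketBuilder()
                    .buildAsync(URI.create("ws://localhost:8080/events"), listener) // hypothetical endpoint
                    .join();

            // The same already-open socket carries quick client -> server messages (e.g. game input).
            ws.sendText("hello from the client", true).join();

            Thread.sleep(60_000);           // keep the process alive so pushed messages can arrive
        }
    }

There is no per-message connection setup here: one handshake up front, then both directions reuse the same socket.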

Because of the way webSockets are initiated (starting with an HTTP request and then repurposing that socket), they are 100% compatible with existing web infrastructure and can even run on the same port as your existing web requests (e.g. port 80 or 443). This makes cross-origin security simpler and means neither the client side nor the server side has to modify its infrastructure to support webSocket connections.

What is the algorithm behind this new tech (in comparison to Long-Polling)?

There's a very good summary of how the webSocket connection algorithm and the webSocket data format work in this article: Writing WebSocket Servers.
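
The connection part of that algorithm is small enough to show directly: the server proves it understood the upgrade by taking the client's Sec-WebSocket-Key, appending a GUID fixed by RFC 6455, SHA-1 hashing the result, and returning it Base64-encoded as Sec-WebSocket-Accept. A minimal sketch in Java:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class HandshakeAccept {
        // GUID fixed by RFC 6455 for computing Sec-WebSocket-Accept.
        private static final String WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

        static String acceptFor(String secWebSocketKey) throws Exception {
            byte[] sha1 = MessageDigest.getInstance("SHA-1")
                    .digest((secWebSocketKey + WS_GUID).getBytes(StandardCharsets.US_ASCII));
            return Base64.getEncoder().encodeToString(sha1);
        }

        public static void main(String[] args) throws Exception {
            // The sample key from RFC 6455 should produce "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".
            System.out.println(acceptFor("dGhlIHNhbXBsZSBub25jZQ=="));
        }
    }

The framing of the data that flows afterwards (opcodes, masking, payload lengths) is what the linked article describes in detail.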

How can they be better than Long-Polling in terms of performance?

By its very nature, long-polling is a bit of a hack. It was invented because there was no better alternative for server-initiated data sent to the client. Here are the steps:

  1. The client makes an http request for new data from the server.
  2. If the server has some new data, it returns that data immediately and then the client makes another http request asking for more data. If the server doesn't have new data, then it just hangs onto the connection for a while without providing a response, leaving the request pending (the socket is open, the client is waiting for a response).
  3. If, at any time while the request is still pending, the server gets some data, then it forms that data into a response and returns a response for the pending request.
  4. If no data comes in for a while, then eventually the request will time out. At that point, the client will realize that no new data was returned and it will start a new request.
  5. Rinse, lather, repeat. Each piece of data returned or each timeout of a pending request is then followed by another Ajax request from the client (a minimal client-side sketch of this loop follows).
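
To make that loop concrete, here is a minimal client-side sketch of the steps above using java.net.http.HttpClient. The /poll URL and the 30-second timeout are assumptions for illustration, not part of the original question:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.net.http.HttpTimeoutException;
    import java.time.Duration;

    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            URI poll = URI.create("http://localhost:8080/poll");    // hypothetical endpoint

            while (true) {
                HttpRequest request = HttpRequest.newBuilder(poll)
                        .timeout(Duration.ofSeconds(30))            // step 4: give up if nothing arrives
                        .GET()
                        .build();
                try {
                    // Steps 2-3: the server holds this request open until it has data (or we time out).
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("new data: " + response.body());
                } catch (HttpTimeoutException e) {
                    // No data this time; fall through and ask again (step 5).
                }
                // Step 5: immediately issue the next request.
            }
        }
    }

Every iteration is a full HTTP request/response cycle, which is exactly the overhead a webSocket avoids.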

So, while a webSocket uses one long-lived socket over which either client or server can send data to the other, long-polling consists of the client asking the server "do you have any more data for me?" over and over and over, each time with a new http request.

Long polling works when done right; it's just not as efficient in terms of server infrastructure, bandwidth usage, mobile battery life, etc.

What I want is an explanation of this: isn't the fact that WebSockets keep an open connection between client and server pretty much the same as Long-Polling's wait process? In other words, why don't WebSockets overload the server?

Maintaining an open webSocket connection between client and server is a very inexpensive thing for the server to do (it's just a TCP socket). An inactive, but open TCP socket takes no server CPU and only a very small amount of memory to keep track of the socket. Properly configured servers can hold hundreds of thousands of open sockets at a time.

On the other hand, a client doing long-polling, even one for which there is no new information to be sent to it, will have to regularly re-establish its connection. Each time it re-establishes a new connection, there's a TCP socket teardown, a new connection setup, and then an incoming HTTP request to handle.

Here are some useful references on the topic of scaling:

