Thursday, December 07, 2006

Adaptive polling as an alternative to HTTP streaming

I've been fairly interested in how the HTTP protocol works lately. For a long time, I didn't think much of it. It sat on top of the TCP/IP layers, and there wasn't much I needed to do with it. It did what it was supposed to do: let clients fetch pages from servers upon request.

But then I started reading about REST (about a year after the hubbub), and in general about why stateless connections are desirable (they're scalable). This led me down the equally saturated road of AJAX and eventually to the joke that is Comet. What was coined as "Comet" was really a play on words: another cleaning product applied to another old web technology--namely, persistent HTTP connections.

Traditionally, HTTP doesn't allow servers to push data to clients. With the way the web is architected, most clients are behind firewalls and routers, so the server has no way of knowing which machine to push to unless that machine talked to it first. In other words, only clients can initiate data requests. Sometimes this isn't enough, as servers might need to push data to clients--think live stock ticker feeds in your web browser without page reloading.

The trick to persistent HTTP connections is to have the client initiate the XHR connection to the server first, and for the server to not reply immediately. The server holds off on replying (leaving an open connection from the client) until there's actually a message to send back (i.e. when there's a new stock update). That way, it looks like a server push. After receiving the reply, the client initiates another connection all over again after a certain wait.
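To make that concrete, here's a minimal toy sketch of a long-polling server in Ruby (my own illustration, not how LivePage or JotSpot do it). The updates queue stands in for whatever produces new stock data, and each update goes to a single waiting client; a real server would fan updates out to everyone.

require 'socket'

# Toy long-polling server: hold each request open until there is new
# data, send it, then let the client reconnect for the next update.
updates = Queue.new
Thread.new { loop { sleep 5; updates << "stock tick #{Time.now}" } }  # fake feed

server = TCPServer.new(8080)
loop do
  Thread.start(server.accept) do |client|
    client.gets("\r\n\r\n")            # consume the request line and headers
    message = updates.pop              # block until there's something to push
    client.write("HTTP/1.1 200 OK\r\n" \
                 "Content-Length: #{message.bytesize}\r\n\r\n#{message}")
    client.close                       # client re-polls after this reply
  end
end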

This is the way that LivePage and JotSpot Live implement their responsive apps. However, the concern for most people is that it doesn't scale--at least it didn't when people tried it circa 1998. A server holding thousands of open connections to clients will probably buckle, although Twister might have already solved this problem; I haven't looked into it much yet.

Another concern of mine is that the Ajaxian pattern of HTTP streaming can also require the client and the server to hold state. The server does not know which set of updates the client last received, so the client sends the server the version it currently has (state), and the server only replies if it has a newer one. This seems to violate the REST architecture. While I only have the original 2000 thesis to say this is not scalable, it does seem to make servers a bit more complex.
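For illustration, here's roughly what that handshake looks like from the client side. The endpoint, the since query parameter, and the X-Version header are all made up for this sketch:

require 'net/http'
require 'uri'

last_version = 0  # state the client has to carry between polls
uri = URI("http://example.com/updates?since=#{last_version}")  # hypothetical endpoint
res = Net::HTTP.get_response(uri)
if res.code == '200'                     # server had something newer
  puts res.body                          # render the new data somehow
  last_version = res['X-Version'].to_i   # hypothetical version header
end                                      # a 304 would mean "nothing new"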

So why not use polling? Usually, it's because too much polling wastes bandwidth, and with not enough polling, you have stale data. So depending on the nature of the data you're trying to stream, polling may or may not be a solution. However, it is stateless, and it should scale better, as long as the polling isn't overdone.

That led me to wonder if there was such a thing as adaptive polling. Why not have clients predict their polling frequency from observations of their past polls, to optimize their polling success? Polling success is defined as: every time a client polls, it gets (1) new data and (2) the freshest data.
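Here's a small sketch of one way a client could do this (a heuristic guess of my own, not a published algorithm): poll faster after a poll that returns new data, and back off after an empty one.

# A simple adaptive poller: shrink the interval when a poll returns
# new data, grow it when a poll comes back empty.
class AdaptivePoller
  def initialize(min = 1.0, max = 60.0)
    @min, @max = min, max
    @interval = min
  end

  # Returns how long to sleep before the next poll.
  def next_interval(got_new_data)
    @interval = if got_new_data
      [@interval / 2.0, @min].max   # we were stale; poll faster
    else
      [@interval * 1.5, @max].min   # nothing new; back off
    end
  end
end

# hypothetical stand-in for an HTTP GET against the feed
def fetch_updates
  []
end

poller = AdaptivePoller.new
loop do
  data = fetch_updates
  sleep poller.next_interval(!data.empty?)
end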

It turns out that this is a very similar problem to ones found in two other fields (and I'm sure many others): web caching and sensor networks. In web caching, you want to cache web pages so that you can show clients results faster if the page hasn't changed. How do you know the page has changed, and when do you throw away the cached copy and fetch a fresh one? In sensor networks, each connection is expensive in terms of energy consumption. How do you know when a node has fresh data, and how often should you poll to achieve polling success? In this case, a master node is analogous to the client and a slave node is analogous to the server.
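Web caches already have a simple answer worth borrowing. One classic heuristic (adaptive TTL) trusts a cached copy for a fraction of how long the page has already gone unchanged; the factor and cap below are values I picked for illustration:

# Adaptive-TTL heuristic from web caching: the longer a page has gone
# unchanged, the longer we trust the cached copy before re-polling.
def ttl(last_modified, factor = 0.1, max = 24 * 3600)
  age = Time.now - last_modified  # seconds since the page last changed
  [age * factor, max].min
end

ttl(Time.now - 3600)   # changed an hour ago => trust it for ~6 minutes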

There's an additional issue to consider. One wouldn't want all the clients hammering the server at once for a poll. That would look like a flash mob arriving at the server at periodic intervals. It would be best if the clients spread out their requests, so that the traffic to the server is more constant and the server isn't overloaded. But how do you coordinate the polling times of thousands of clients? Wouldn't that create more traffic on the network, with clients asking each other? I'm guessing no, because the delay in the server's response time indicates how busy it is at that moment. Using that as a type of "pheromone" left behind by the other clients, a client should be able to adjust the offset of its next polling request.
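As a rough sketch of that idea (entirely my own guess at how it might work), a client could scale its wait by the latency it observed and add random jitter so the polls fall out of lockstep:

# "Pheromone" scheduling sketch: treat response latency as a signal of
# server load, and add random jitter so clients spread themselves out.
def next_delay(base_interval, observed_latency, typical_latency = 0.2)
  congestion = [observed_latency / typical_latency, 1.0].max
  jitter = rand * base_interval * 0.5   # randomize the offset
  base_interval * congestion + jitter
end

next_delay(10.0, 0.8)   # a slow reply => wait noticeably longer than 10s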

Sunday, December 03, 2006

Splatting in case statements

RedHanded » Wonder of the When-Be-Splat

I always feel like I'm playing catch up to Rubyists.

BOARD_MEMBERS = ['Jan', 'Julie', 'Archie', 'Stewick']
HISTORIANS = ['Braith', 'Dewey', 'Eduardo']

case name
when *BOARD_MEMBERS
  "You're on the board! A congratulations is in order."
when *HISTORIANS
  "You are busy chronicling every deft play."
end


That's pretty damn cool. The thing about new languages is that when you're learning to write in one, you'll write it in the style of the old language you're used to. C programmers will write C++ as if it were C. Java programmers will write Python as if it were Java. Therefore, you might think there isn't much to be gained from the new language other than some syntactic sugar sprinkled here and there.

At least for me, being open to other constructs--blocks, closures, code written more like functional programming--has led to more succinct and readable code.

a = [1, 2, 3]
Hash[*a.collect { |v|
  [v, v*2]
}.flatten]
# => {1=>2, 2=>4, 3=>6}


I would have done this with a for loop before, and that's probably less readable. But I have to admit, succinct code only has meaning if you know the vocab.