Why don't all programs on your machine run web servers that they can use to respond to HTTP requests?

Why can't the OS wake them up whenever there's an incoming call?

This link is unreliable, so I'm going to quote the whole thing. Sorry SeanMcGrath (CategoryCopyrightRisk)

In his classic book entitled Gödel, Escher, Bach[1], Douglas Hofstadter created a place called Tumbolia[2]. Tumbolia is described as the land of dead hiccups and extinguished light bulbs. A place of "non-things". There is lots of software in Tumbolia. Software that is just sitting around, waiting for the system it is installed on to invoke it, to start it up, to bring it to life. Nothing happens in Tumbolia. Nothing at all. Nothing is alive there.

Based on this description, you might want to take a second to strike Tumbolia off your list of places to visit. It is a lot more pleasant to think about Tumbolia from a safe distance rather than go there. Pull up a chair and we will contemplate its mysteries together for a couple of minutes.

Most computer systems have a built-in Tumbolia. The laptop currently on my knees, for example. There are lots of applications installed on it. At any one time, only some of these are alive - in memory. That is, at any given moment a certain number of applications are running and the rest are, well, in Tumbolia.

The only applications I can really interact with are the live ones. Once alive, I can do wonderful things to business data with the help of applications. When they disappear off to Tumbolia, most of their power disappears too. All my documents, all my spreadsheets, all my databases, all my ledgers - they are lifeless tubs of lard when the application that created them is not alive on my machine.

This is achingly, bone-crunchingly obvious and I'm sure at this stage you are wondering where I am going with this. Let us ask a "what if" question. What if applications never went to Tumbolia? What if applications stayed running all the time? Why do we stop and start applications anyway?

For most of the history of computing, there has been a very simple answer to this - memory. Memory has always been in short supply relative to the size of applications. So much so that tremendous effort has been expended building specialized software that swaps applications into and out of Tumbolia very quickly. We call them "operating systems".

Today, memory is a lot cheaper than it used to be. I have a Gigabyte of RAM in this machine. Moreover, I do 90 percent of my work with just six applications. I can run all six at the same time comfortably. Indeed, I boot up all six when my machine boots and leave them running all day long. They only go to Tumbolia when I shut my machine down.

Unfortunately, I do not get the value out of these six simultaneously running applications that I should. My operating system is simply not geared up that way. For example, if I have a spreadsheet running all the time, why is it that I cannot refer to the values of cells in spreadsheets from within my ledger system? Why is it that I cannot refer to one of my database records from within my Web Browser? Why is it that I cannot load a balance sheet into my word processor by just saying "go get it, it's over there!"? Why is it that I cannot refer to this precise paragraph of text in this text file in my e-mail client?

Yes, it is true that some of these scenarios are possible on some operating systems with enough ingenuity and grunt work. However, it should be much easier than it is, in my opinion. Why isn't it easy? I suspect it isn't easy to do this sort of temporal integration across applications because applications on a single machine are not generally designed to co-exist temporally. Developers have not historically said to themselves "how can I make my application play nice with all the other applications running right now on this machine?"

It is also not easy because we have no universally standard way of naming information down to the level of cells in spreadsheets or paragraphs of text[3] or records in databases...or have we?

Ah, yes, the Web! What if, as a matter of course, all desktop applications shipped with a built-in web server? What if all applications managed their own catalog of URLs that allowed other desktop applications to retrieve pieces of data on the fly? In such a world, sharing data between applications would be a simple matter of HTTP GETs.
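A minimal sketch of that world, in Python: a toy "spreadsheet" application embeds an HTTP server and publishes its cells at URLs, and any other application on the machine fetches a cell with a plain GET. The path, cell names, and values here are all invented for illustration - the point is only how little machinery the idea requires.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# The application's live data: a tiny in-memory "spreadsheet".
CELLS = {"A1": "Revenue", "C12": "42,000"}

class CellHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Resolve URLs like /Spreadsheets/Ideas?cell=C12 to cell values.
        query = parse_qs(urlparse(self.path).query)
        cell = query.get("cell", [None])[0]
        if cell in CELLS:
            body = CELLS[cell].encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "no such cell")

    def log_message(self, *args):
        pass  # keep the demo quiet

# The "spreadsheet" stays alive, serving its data in the background.
server = HTTPServer(("127.0.0.1", 0), CellHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other running application can now refer to the cell by URL.
url = f"http://127.0.0.1:{server.server_port}/Spreadsheets/Ideas?cell=C12"
value = urllib.request.urlopen(url).read().decode()
print(value)  # the live value of cell C12
server.shutdown()
```

In real deployments the hard parts are naming, discovery, and access control rather than the transport - but as the sketch shows, the transport itself was solved long ago.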

What is wrong with this picture? I have a good answer I think. I scribbled it down here somewhere[4]. Ah, yes, I wrote it down here: SeanMcGrath.blogspot.com/PublicMachine/Spreadsheets/ArticleTopics/Ideas?cell=C12. Some day, a link like that might just work.

[1] http://seanmcgrath.blogspot.com/bookshelf.html#0140289208
[2] http://www.4reference.net/encyclopedias/wikipedia/Tumbolia.html
[3] http://www.eekim.com/software/purple/purple.html
[4] http://mathworld.wolfram.com/FermatsLastTheorem.html

This story, "Raid on Tumbolia: A rescue mission for business data" was originally published by ITworld.
