Saturday, April 25, 2009

Congrats to Timetric on their Seedcamp London success.
The notion of using the Web as a base platform for numerical quantities, leveraging all we have learned from the textual Web era, is growing apace and it is exciting to watch.
The next big thing on the Web, in my opinion, is numbers.
Sunday, April 19, 2009
AtomPub...
AtomPub is the subject of Hugh Winkler's REST hypothesis post. I think I agree with most of it. When history is written, I think it will show that without the visual nature of the Web, it never would have taken off as an IT substrate. I.e., the Web was something you looked at. It attracted end-user eyeballs first. The droves of substrate-diver types came second. No end-user eyeballs to fuel the fires, no Web. Or, looked at another way: no compelling end-user application out of the box, no reason to engineer a compelling IT substrate.
The slippery bit here is the stuff that eyeballs look at on the Web. We all know that the Web started as "pages", where an electronic "page" had a strong analogy to a paper "page". The content was forged from tags that marked out paragraphs and bold and headings and whatnot...
...but that was then and this is now. Moving from mere pages to full-on applications requires more than just HTML. More than just declarative syntax for end-user-facing text. For a while, the technical answer seemed obvious: allow browsers to work with structured content if they want to, and render said structured content using stylesheets.
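For what it is worth, the recipe would have looked something like the sketch below in the browser. It is a sketch only: books.xml and books.xsl are made-up names for the structured content and its stylesheet, and XSLTProcessor is the standard in-browser API for applying the latter to the former.

    // A sketch of "structured content + stylesheet = dynamically rendered page".
    // books.xml and books.xsl are hypothetical; XSLTProcessor is the browser's
    // built-in machinery for applying an XSLT stylesheet to an XML document.
    async function renderStructuredContent() {
      const parser = new DOMParser();

      // Fetch the declarative content and the stylesheet that describes its rendering.
      const [xmlText, xslText] = await Promise.all([
        fetch('/books.xml').then(r => r.text()),
        fetch('/books.xsl').then(r => r.text())
      ]);

      const xmlDoc = parser.parseFromString(xmlText, 'application/xml');
      const xslDoc = parser.parseFromString(xslText, 'application/xml');

      // Apply the stylesheet to the structured content and splice the result into the page.
      const processor = new XSLTProcessor();
      processor.importStylesheet(xslDoc);
      const fragment = processor.transformToFragment(xmlDoc, document);
      document.body.appendChild(fragment);
    }

    renderStructuredContent();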
For reasons I do not profess to understand, this never really happened. Somewhere along the line, the "structured content + stylesheet = dynamically rendered page" equation broke down. JavaScript began to flex its Turing Complete muscles and today we are staring down the barrel of a completely different concept of a Web "page"...
...In the new world, it seems to me that HTML is taking a back seat and becoming - goodness gracious me - an envelope notation into which you can pour JavaScript for delivery to the client side...
...where it gets turned into HTML (maybe) for rendering using the HTML rendering smarts of the browser.
It is as if declarative information is disappearing into silos that are not on the Web - but interfaced to it. Interfaced by application servers that convert content server-side into JavaScript programs to be EVAL-ed by the client. The EVAL-ed JavaScript is then further EVAL-ed by the end-user eyeballs that birthed the success of the Web in the first place.
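To make that concrete, here is a sketch of the kind of thing such an application server ships. The HTML is little more than an empty body plus a script tag; everything the eyeballs actually see is produced by a program along these lines. The /api/articles endpoint and its fields are made up for illustration.

    // A sketch of the program the application server ships in place of content.
    // The HTML "envelope" around it is essentially empty; everything the user sees
    // is produced here. The /api/articles endpoint and its fields are hypothetical.
    async function buildPage() {
      const response = await fetch('/api/articles');
      const articles = await response.json();   // the content, hidden behind the program

      for (const article of articles) {
        const heading = document.createElement('h2');
        heading.textContent = article.title;

        const summary = document.createElement('p');
        summary.textContent = article.summary;

        document.body.append(heading, summary);
      }
    }

    buildPage();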
So what? Well, there is a big loss happening, I think. At least, I believe it is a real risk. Maybe I'm just a pessimist. I see content disappearing at a rate of knots into silos that are not on the Web. Access to these silos is being controlled by application servers that are spitting out programs. Not pages-o-useful-content but PROGRAMS.
We are doing this because programs are so much more useful than mere content if we want to create compelling end-user applications, and because, if you squint just right, content is a trivial special case of a Turing Complete program. Just ask any Lisper.
This is happening somewhat under the covers because HTML - gotta love it - allows JavaScript payloads. But if 99% of my pages are 99% JavaScript and 1% declarative markup of content, am I serving out content or serving out programs?
Maybe JSON is pointing at where this is all headed. Maybe we will see efforts to standardize data representation techniques in JSON so that the JSON can be parsed and used separately from the rendering normally bound to it? Maybe XML-on-the-client-side will have a resurgence?
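The attraction of that route, for me, is that the same payload the rendering program consumes could be picked up by a client with no interest in rendering at all. A sketch, reusing the hypothetical /api/articles payload from above:

    // The same hypothetical /api/articles payload, consumed without any rendering:
    // because the data is declarative rather than a program, it is reusable as-is.
    async function countArticlesMentioning(term) {
      const articles = await fetch('/api/articles').then(r => r.json());
      return articles.filter(a => a.summary.includes(term)).length;
    }

    countArticlesMentioning('Atom').then(n => console.log(n, 'matching articles'));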
I don't know which way it will go, but I would suggest that if we are searching for what exactly the Web *is*, we have to go further than saying it is HTML, as Hugh does in his post.
For me, the Web is URIs, a standard set of verbs and a standardized EVAL function. The verbs are mostly GET and POST, and the standardized EVAL function is the concept of a browser that can EVAL HTML and can EVAL JavaScript. I don't think we can afford to leave JavaScript out of the top-level definition of what the Web is because there is too much at stake.
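Spelled out in code terms, that boils down to: name things with URIs, manipulate them with a handful of verbs, and let the browser's EVAL machinery loose on whatever comes back. A sketch only; the URI is made up.

    // The Web reduced to its moving parts: a URI, a couple of verbs, and an EVAL step.
    // The URI below is made up for illustration.
    const uri = 'https://example.org/notes/42';

    // GET: retrieve a representation of the resource.
    fetch(uri)
      .then(response => response.text())
      .then(representation => {
        // The "standardized EVAL function" is the browser itself: HTML representations
        // go to the rendering engine, JavaScript ones to the script engine.
        console.log(representation.length, 'characters retrieved');
      });

    // POST: send a new representation back to the resource.
    fetch(uri, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text: 'A new note' })
    });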
There is a huge difference between a web of algorithms and a web of data. For computing eons, we have known that a combination of algorithms and data structures leads to programs. Less well known (outside computer science) are the problems of trying to build applications using one without the other, or trying to fake one using the other.
Lisp, TeX, SGML...all of these evidence the struggle between declarative and imperative methods. Today, the problems are all the same but the buzzwords are different: JavaScript, XSLT, XML...
We have not solved the fundamental problem: separating what information *is* from what information *does*, in a way that makes the "is" part usable without the "does" part, and yet does not impede the easy creation of the main application, which unfortunately (generally) needs to fuse "is" and "does" in a deep and complex way.
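A toy illustration of the tension, using nothing but made-up article data: the "is" is a plain data structure any consumer can reuse; the "does" is a function welded to one particular use of it, and prising the "is" back out of such code after the fact is exactly the unsolved part.

    // What the information *is*: declarative, inspectable, reusable by any consumer.
    const article = {
      title: 'AtomPub...',
      published: '2009-04-19',
      tags: ['REST', 'AtomPub']
    };

    // What the information *does*: imperative, welded to one particular rendering.
    // Recovering the "is" from code like this, after the fact, is the hard part.
    function renderArticle(a) {
      const heading = document.createElement('h1');
      heading.textContent = a.title + ' (' + a.published + ')';
      document.body.appendChild(heading);
    }

    renderArticle(article);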
Anyway. Enough Sunday morning rambling. If this stuff is of interest to you, you might enjoy Orangutans, Oxen and Ogham Stones.