
Sunday, April 19, 2009

AtomPub...

AtomPub is the subject of Hugh Winkler's REST hypothesis post. I think I agree with most of it. When history is written, I think it will show that without the visual nature of the Web, it never would have taken off as an IT substrate. That is, the web was something you looked at. It attracted end-user eyeballs first. The droves of substrate-diver types came second. No end-user eyeballs to fuel the fires, no web. Or, looked at another way: no compelling end-user application out of the box, no reason to engineer a compelling IT substrate.

The slippery bit here is the stuff that eyeballs look at on the web. We all know that the web started as "pages" where electronic "page" had a strong analogy to a paper "page". The content was forged from tags that marked out paragraphs and bold and headings and what not...

...but that was then and this is now. Moving from mere pages to full-on applications requires more than just HTML. More than just declarative syntax for end-user-facing text. For a while, the technical answer seemed obvious: allow browsers to work with structured content if they want to, and render said structured content using stylesheets.
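To make that concrete, here is a minimal sketch of the "structured content + stylesheet = rendered page" idea as browsers actually supported it (the file names and the markup vocabulary are invented for illustration). Note that there is no script anywhere:

    <?xml version="1.0"?>
    <!-- article.xml: purely declarative content -->
    <!-- the hypothetical article.xsl stylesheet turns it into HTML, client side -->
    <?xml-stylesheet type="text/xsl" href="article.xsl"?>
    <article>
      <title>AtomPub and the declarative web</title>
      <para>The content lives here, free of any rendering logic.</para>
    </article>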

For reasons I do not profess to understand, this never really happened. Somewhere along the line, the "structured content + stylesheet = dynamically rendered page" equation broke down. JavaScript began to flex its Turing Complete muscles, and today we are staring down the barrel of a completely different concept of a web "page"...

...In the new world, it seems to me that HTML is taking a back seat and becoming - goodness gracious me - an envelope notation into which you can pour JavaScript for delivery to the client side...

...where it gets turned into HTML (maybe) for rendering using the HTML rendering smarts of the browser.

It is as if declarative information is disappearing into silos that are not on the web - but interfaced to it. Interfaced by application servers that convert content, server side, into JavaScript programs to be EVAL-ed by the client. The EVAL-ed JavaScript is then further EVAL-ed by the end-user eyeballs that birthed the success of the web in the first place.
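To illustrate, here is a hedged sketch of the kind of "page" I mean (the /posts.json endpoint and the markup are invented for the example). The HTML is a near-empty envelope; everything the user actually sees is conjured by script:

    <html>
      <body>
        <!-- the envelope: almost no declarative content at all -->
        <div id="app"></div>
        <script type="text/javascript">
          // fetch data from a hypothetical endpoint; the server hands
          // back data (or code) rather than a page
          var xhr = new XMLHttpRequest();
          xhr.open("GET", "/posts.json", true);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              // the client-side EVAL step (JSON.parse is the safer
              // equivalent where the browser supports it)
              var posts = eval("(" + xhr.responseText + ")");
              var html = "";
              for (var i = 0; i < posts.length; i++) {
                html += "<p>" + posts[i].title + "</p>";
              }
              document.getElementById("app").innerHTML = html;
            }
          };
          xhr.send(null);
        </script>
      </body>
    </html>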

So what? Well, there is a big loss happening, I think. At least, I believe it is a real risk. Maybe I'm just a pessimist. I see content disappearing at a rate of knots into silos that are not on the web. Access to these silos is being controlled by application servers that are spitting out programs. Not pages-o-useful-content but PROGRAMS.

We are doing this because programs are so much more useful than mere content if we want to create compelling end-user applications, and because, if you squint just right, content is a trivial special case of a Turing Complete program. Just ask any Lisper.

This is happening somewhat under the covers because HTML - gotta love it - allows JavaScript payloads. But if 99% of my pages are 99% JavaScript and 1% declarative markup of content, am I serving out content or serving out programs?

Maybe JSON is pointing at where this is all headed. Maybe we will see efforts to standardize data representation techniques in JSON so that the JSON can be parsed and used separately from the rendering normally bound to it? Maybe XML-on-the-client-side will have a resurgence?
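For instance (this shape is invented for illustration, not any proposed standard), an Atom-ish entry carried as JSON might look like this - parseable, linkable data with no rendering code attached:

    {
      "title": "AtomPub...",
      "updated": "2009-04-19T09:00:00Z",
      "links": [
        { "rel": "self", "href": "http://example.org/blog/atompub" },
        { "rel": "edit", "href": "http://example.org/blog/atompub/edit" }
      ],
      "content": "Plain declarative content, usable without the rendering normally bound to it."
    }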

I don't know which way it will go, but I would suggest that if we are searching for what exactly the web *is*, we have to go further than saying it is HTML, as Hugh does in his post.

For me, the web is URIs, a standard set of verbs and a standardized EVAL function. The verbs are mostly GET and POST, and the standardized EVAL function is the concept of a browser that can EVAL HTML and can EVAL JavaScript. I don't think we can afford to leave JavaScript out of the top-level definition of what the Web is because there is too much at stake.

There is a huge difference between a web of algorithms and a web of data. For computing eons, we have known that a combination of algorithms and data structures leads to programs. Less well known (outside computer science) are the problems of trying to build applications using one without the other, or trying to fake one using the other.

Lisp, TeX, SGML...all of these evidence the struggle between declarative and imperative methods. Today, the problems are all the same but the buzzwords are different: JavaScript, XSLT, XML...

We have not solved the fundamental problem: separating what information *is* from what information *does* in a way that makes the "is" part usable without the "does" part and yet does not impede the easy creation of the main application which unfortunately (generally) needs to fuse "is" and "does" in a deep and complex way.
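A toy sketch of the separation I mean (all names invented): keep the "is" as plain data that stands on its own, and let each "does" be a separate function over it.

    // the "is": plain data, usable on its own terms
    var entry = {
      title: "AtomPub...",
      tags: ["REST", "Atom", "declarative"],
      body: "Sunday morning rambling."
    };

    // one "does": render it for eyeballs
    function render(e) {
      return "<h1>" + e.title + "</h1><p>" + e.body + "</p>";
    }

    // another "does": index it, no rendering involved
    function index(e) {
      return e.tags.concat(e.title.split(" "));
    }

The hard part, of course, is that real applications rarely stay this clean: the "does" reaches back into the "is" and reshapes it.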

Anyway. Enough Sunday morning rambling. If this stuff is of interest to you, you might be interested in Orangutans, Oxen and Ogham Stones.

6 comments:

Anonymous said...

It seems to me that the browser is the human's user agent and XHR is the browser's user agent. XHR is often the "client" in the client/server pattern. The browser itself is just another intermediary that aggregates data (which the XHR has retrieved) into "content" for the human user. It's just another separation of concerns -- having separated presentation from content (CSS), we've now moved to separating out "data" as well. JSON is the key, but to work it needs some semantics, or at least a grammar from which a minimal Web/REST semantic can be derived. Until something else comes along, Atom/AtomPub provides the best baseline semantic we have. I doubt we've seen the last of it....

hughw said...

Even if web "pages" now have become more procedural than declarative, there's still searchable goodness there. Search engines should load the page and execute the javascript, to construct the DOM. If they don't do this already, why the heck not?

Anonymous said...

Hello Sean, good post.
Microsoft Ajax is bringing declarative programming to HTML/JSON. Have you seen this?:

http://weblogs.asp.net/bleroy/archive/2009/03/18/microsoft-ajax-4-0-preview-4-now-available.aspx

Philip Fennell said...

I share your concerns about the erosion of the Declarative Web. I wrote recently about this topic <http://broadcast.oreilly.com/2009/03/are-we-losing-the-declarative.html> for O'Reilly, but in my case about 3D content on the Web. If developers can be taught/encouraged to put their client-side scripting efforts into implementing libraries for processing declarative languages (XForms, SVG, SMIL-Anim, X3D, XML Events, etc, etc...) rather than the usual procedural smash 'n' grab that appears to be in vogue, then all the better in my opinion. The process is underway with projects like Ubiquity XForms, and more to follow now that more advanced JavaScript interpreters are starting to appear.

The real issue behind all of this is that many developers still 'see' the HTML page as a rendering rather than realising that it should be machine processable content that has a rendering. If they were able to make that distinction then they'd understand the value of declarative mark-up. Being able to process the content goes way beyond just rendering. It is tightly woven into accessibility, indexing, data extraction and the re-purposing of content for different media.

walter said...

"In the new world, it seems to me that HTML is taking a back seat and becoming - goodness gracious me - an envelope notation into which you can pour Javascript for deliver to the client side"

Bingo!

elarson said...

I can relate at some level to the lack of attention to thinking in terms of declarative documents when looking at the current web. I think the trend has been created out of pragmatism more than anything else. It simply feels easier to just create a solution in Javascript or create a specialized API rather than deal with the overhead associated with creating a more generic solution (i.e. AtomPub).

At some level this feels like a tragedy, while at the same time I think it might be a blessing in disguise. After switching jobs, and in turn switching technologies from XML to JSON and (gasp!) RPC services, it was clear that the chosen technologies were better in their respective environments. The big win is the more direct association JSON has with the underlying language. Having built quite a few AtomPub implementations, I can say this is a huge benefit: most of the time is spent dealing with the transition from XML to the underlying functionality. In some ways, ORMs and databases make this transition from content to data more reasonable than XML often does.

I'm positive that I don't fully understand the benefits of thinking declaratively, but I'm positive the techniques of AtomPub and the XML community are finally coming closer to reality. JSON has just become a more convenient format than XML. While many developers don't understand the web as documents, they do understand how exposing data as JSON containing hyperlinks can be beneficial.