Wednesday, March 15, 2006
How long will it take?
"Five words to strike fear into the heart of any software developer: 'How long will it take?'" -- How long will it take to make that piece of string?
XML Summer School, Oxford, July
I will be speaking again at this year's XML Summer School in Oxford. This time, on the Web Services track.
It is really a unique event. Highly recommended.
Monday, March 13, 2006
A critical point about standards
Jon Udell asks a key question that cuts to the heart of what standards really are and the relationship between notations and process models.
A standardised notation is a good thing, but it can only go so far in specifying, in abstract language, the intended processing model of systems consuming the notation.
The "definition of correctness" is an escalator that starts with notation, proceeds up to test suite and from there - more often than not - to reference implementation.
What does it mean for a Python program to be "correct"? Is it in the specs, the test suite or the reference implementation?
Sometimes you just need the specs. /etc/passwd on Unix is probably too simple to need a processing model, or even a test suite. What about HTTP headers? RTF? XHTML? ODF?
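To make the /etc/passwd point concrete, here is a minimal sketch in Python of a parser whose entire definition of correctness fits in the spec itself: seven colon-separated fields per record, as documented in passwd(5). The field names follow that man page; the sample record is invented for illustration.

```python
# Field names per passwd(5): the format spec alone is enough to
# define "correct" -- no processing model or test suite required.
FIELDS = ("name", "passwd", "uid", "gid", "gecos", "home", "shell")

def parse_passwd_line(line):
    """Split one /etc/passwd record into a dict keyed by field name."""
    parts = line.rstrip("\n").split(":")
    if len(parts) != len(FIELDS):
        raise ValueError(
            "expected %d colon-separated fields, got %d" % (len(FIELDS), len(parts))
        )
    record = dict(zip(FIELDS, parts))
    # uid and gid are numeric per the spec
    record["uid"] = int(record["uid"])
    record["gid"] = int(record["gid"])
    return record

# An invented sample record, not read from a real system file:
rec = parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash")
print(rec["uid"], rec["shell"])
```

Contrast that with a word processor file format, where no such one-paragraph spec could ever pin down what "correct" means.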
The reference implementation issue becomes critical when the processing logic gets complex *or* when the definition of correctness transcends the reach of the computer into human-space, such as sight or sound. Sight in particular is key. "Does it look right?" is not a question that computers are good at answering. This is why, I believe, visual systems like word processors and web browsers benefit from reference implementations.
The system fails to be open, of course, the moment the reference implementation fails to be open, regardless of how much XML is sprinkled over it.