Thursday, December 14, 2017

What is a document? - Part 2


Back in 1985, when I needed to create a “document” on a computer, I had only two choices. (Yes, I am indeed avoiding trying to define “document” just yet. We will come back to it when we have more groundwork laid for a useful definition.) The first choice involved typing into what is known generically as a “text editor”. Back then, US ASCII was the main encoding for text and it allowed for just basic letters, numbers and a few punctuation symbols. The so-called “text files” created by these “text editors” could be viewed on screens which typically had 80 columns and 25 rows. They could also be printed onto paper, using either “dot matrix” printers or higher-resolution computerized typewriters such as the so-called “golf ball” typewriters/printers, which mimicked a human typist using a ribbon-based impact printer.

The second choice was to wedge the text into little boxes called "fields" to be stored in a "database". Yes, my conceptual model of text in computers in those early days was a very binary one. (Some nerd humour in the last sentence.)

On one hand, I could type stuff into small “boxes” on a screen which typically resulted in the creation of some form of “structured” data file e.g. a CODASYL database [1]. On the other hand, I could type stuff into an expandable digital sheet of paper without imposing any structure on the text, other than a collection of text characters, often chunked with what we used to call CRLF separators (Carriage Return, Line Feed).

(Aside: You can see the typewriter influence in the terminology here. Return the carriage (holding the print head) to the left of the page. Feed the page upwards by one line. So Carriage Return + Line Feed = CR/LF.)

(Aside: I find the origins of some of this terminology are often news to younger developers, who wonder why moving to a new line is two characters instead of one on some machines. Surely “newline” is one thing? Well, it was two originally because one command moved the carriage back (the “CR”) and another command moved the paper up a line (the “LF”), hence the common pairing: CR/LF. When I explain this I double up by explaining “uppercase/lowercase”. The origins of the latter in particular are not well known to digital natives, in my experience.)
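To make the distinction concrete, here is a tiny Python sketch (purely illustrative) showing the raw bytes behind the two line-ending conventions:

# Illustrative only: the two historical line-ending conventions as raw bytes.
# CR is 0x0D (carriage return), LF is 0x0A (line feed).
typewriter_style = b"line one\r\nline two\r\n"   # CR followed by LF
unix_style = b"line one\nline two\n"             # LF only

# The bytes on disk differ, even though both encode two lines of text.
print(typewriter_style.count(b"\r\n"))  # prints 2
print(unix_style.count(b"\r\n"))        # prints 0
print(unix_style.splitlines())          # [b'line one', b'line two']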

From my first encounters with computers, this difference in how the machines handled storing data intrigued me. On one hand, there were “databases”. These were stately, structured, orderly digital objects. Mathematicians could say all sorts of useful things about them and create all sorts of useful algorithms to process them. Databases were designed for automation.

On the other hand, there was the rebellious, free-wheeling world of text files. Unstructured. Disorderly. A pain in the neck for automation. Difficult to reason about and create algorithms for, but fantastically useful precisely because they were unstructured and disorderly.

I loved text files back then. I still love them today. But as I began to dig deeper into computer science, I began to see that the binary world view (database versus text, structured versus unstructured) was simple, elegant and wrong. Documents can indeed be “structured”. Document processing can indeed be automated. It is possible to reason about documents and create algorithms for them, but it took me quite a while to get to grips with how this can be done.

My journey of discovery started with an ADM 3A+ terminal connected to a VAX 11/780 mini-computer by day [2] and an Apple IIe personal computer running CP/M by night [3].

For the former, a program called RUNOFF. For the latter, a program called WordStar and one of my favorite pieces of hardware of all time: an Epson FX80 dot matrix printer.



Thursday, December 07, 2017

What is a document? Part 1.

I am seeing a significant up-tick in interest in the concept of structured/semantic documents in the world of law at present. My guess is that this is a consequence of the activity surrounding machine learning/AI in law at the moment.

It has occurred to me that some people with law/law-tech backgrounds are coming to the structured/semantic document automation concepts anew, whereas people with backgrounds in, for example, electronic publishing (DocBook etc.), financial reporting (XBRL etc.) or healthcare (HL7 etc.) have already “been around the block”, so to speak, on the opportunities, challenges and pragmatic realities behind the simple-sounding – and highly appealing – concept of a “structured” document.

In this series of posts, I am going to outline how I see structured documents, drawing from the 30 (phew!) or so years of experience I have accumulated in working with them. My hope is that what I have to say on the subject will be of interest to those newly arriving in the space. I suspect that at least some of the new arrivals are asking themselves “surely this has been tried before?” and looking to learn what they can from those who have "been there". Hopefully, I can save some people some time and help them avoid some of the potential pitfalls and “gotchas” as I have had plenty of experience in finding these.

As I start out on this series of blog posts, I notice with some concern that a chunk of this history – from the late Eighties to the late Nineties – is getting harder and harder to find online as the years go by. So many broken links to old conference websites, so many defunct publications....

This was the dawn of the electronic publishing era and coincided with a rapid transition from mainframe green-screens to dial-up CompuServe, to CD-ROMs, to the Internet and then to the Web, bringing us to where we are today. It was a period of creative destruction in the world of the written word without parallel in the history of civilization. I cannot help feeling that we have a better record of what happened in the world from the time of Gutenberg's printing press to the glory years of paper-centric desktop publishing than we do for the period that followed it, when we increasingly transitioned away from fixed-format, physical representations of knowledge. But I digress....

For me, the story starts in June 1992 with a Byte magazine article by Jon Udell[1] with a title that promised a way to “turn mounds of documents into information that can boost your productivity and innovation”. It was exactly what I was looking for in 1992 for a project I was working on: an electronic education reference guide to be distributed on 3.5 inch floppy disks to every school in Ireland.

Turning mounds of documents into information. Sound familiar? Sound like any recent pitch you have heard in the world of law? Well, it may surprise you to hear that the technology Jon Udell's article was about – SGML – was largely invented by a lawyer called Dr Charles F. Goldfarb[2]. SGML set in motion a cascade of technologies that have led to the modern web. HTML is the way it is, in large part, because of SGML. In other words, we have a lawyer to thank for a large aspect of how the Web works. I suspect that I have just surprised some folks by saying that:-)

Oh, and while I am on a roll making surprising statements, let me also state that the cloud – running as it does in large part on Linux servers – is, in part, the result of a typesetting R&D project at AT&T Bell Labs back in the Seventies.

So, in an interesting way, modern computing can trace its feature set back to a problem in the legal department. Namely, how best to create documents in computers so that the content of the documents can be processed automatically and re-used in different contexts?

More on that later, but best to start at the beginning which for me was 1985. The year when a hirsute computer science undergraduate (me) took a class in compiler design from Dr. David Abrahamson[3] in Trinity College Dublin and was introduced to the wonderful world of machine readable documents.

Yes, 1985.

Next: Part 2.


Tuesday, November 07, 2017

Programming Language Frameworks

Inside every programming language framework is exactly one application that fits it like a glove.

Wednesday, October 04, 2017

It is science, Jim, but not as we know it.

Roger Needham once said that computing is noteworthy in that the technology often precedes the science[1]. In most sciences, it is the other way around. Scientists invent new building materials, new treatments for disease and so on. Once the scientists have moved on, the technologists move in to productize and commercialize the science.

In computing, we often do things the other way around. The technological tail seems to wag the scientific dog, so to speak. What happens is that application-oriented technologists come up with something new. If it flies in the marketplace, then more theory-oriented scientists move in to figure out how to make it work better or faster, or sometimes to try to discover why the new thing works in the first place.

The Web for example, did not come out of a laboratory full of white coats and clipboards. (Well actually, yes it did but they were particle physicists and were not working on software[2]). The Web was produced by technologists in the first instance. Web scientists came later.

Needham's comments in turn reminded me of an excellent essay by Paul Graham from a Python conference. In that essay, entitled 'The Hundred-Year Language'[3], Graham pointed out that the formal study of literature - a scientific activity in its analytical nature - rarely contributes anything to the creation of literature, which is a more technological activity.

Literature is an extreme example of the phenomenon of the technology preceding, in fact trumping, the science. I am not suggesting that software can be understood in literary terms. (Although one of my college lecturers was fond of saying that programming was language with some mathematics thrown in.) Software is somewhere in the middle: the science follows the technology, but the science, when it comes, makes very useful contributions. Think, for example, of the useful technologies that have come out of scientific analysis of the Web. I'm thinking of things like clever proxy strategies, information retrieval algorithms and so on.

As I wander around the increasingly complex “stacks” of software, I cannot help but conclude that wherever software sits in the spectrum of science versus technology, there is "way" too much technology out there and not enough science.

The plethora of stacks and frameworks and standards is clearly not a situation that can be easily explained away on scientific innovation grounds alone. It requires a different kind of science. Mathematicians like John Nash, economists like Carl Shapiro and Hal Varian, and political scientists like Robert Axelrod all know what is really going on here.

These scientists, and others like them who study competition and cooperation as phenomena in their own right, would have no trouble explaining what is going on in today's software space. It has only a little to do with computing science per se and everything to do with strategy - commercial strategy. I am guessing that if they were to comment, Nash would talk about equilibria[4], Shapiro and Varian would talk about standards wars[5], and Axelrod would talk about the Prisoner's Dilemma and coalition formation[6].

All good science, Jim, but not really computer science.

[1] http://news.bbc.co.uk/1/hi/technology/2814517.stm

[2] http://public.web.cern.ch/public/

[3] http://www.paulgraham.com/hundred.html

[4] http://www.gametheory.net/Dictionary/NashEquilibrium.html

[5] http://marriottschool.byu.edu/emp/Nile/mba580/handouts/art_of_war.pdf

[6] http://pscs.physics.lsa.umich.edu/Software/ComplexCoop.html

Wednesday, September 20, 2017

What is Law? - Part 17

Last time, we talked about how the concept of a truly self-contained contract, nicely packaged up and running on a blockchain, is not really feasible. The primary stumbling block is that it is impossible to spell out everything you might want to say in a contract, in words.

Over centuries of human affairs, societies have created dispute resolution mechanisms to handle this reality and provide a way of “plugging the gaps” in contracts and contract interpretation. Nothing changes if we change focus towards expressing the contract in computer code rather than in natural language. The same disambiguation difficulty exists.

Could parties to an agreement have a go at it anyhow and eschew the protections of a third party dispute resolution mechanism? Well, yes they could, but all parties are then forgoing the safety net that an impartial third party provides when agreement turns to disagreement. Do you want to take that risk? Even if you are of the opinion that the existing state-supplied dispute resolution machinery – for example the commercial/chancery courts systems in common law jurisdictions – can be improved upon, perhaps with an online dispute resolution mechanism, you cannot remove the need for a neutral third party dispute resolution forum, in my opinion. The residual risks of doing so for the contracting parties are just too high. Especially when one party to a contract is significantly bigger than the other.

Another reason is that there are a certain number of things that must collectively exist for a contract to exist in the first place. Only some of these items can usefully be thought of as instructions suitable for computer-based execution. Simply put, the legally binding contract dispute resolution machinery of a state is only available to parties that actually have a contract to be in dispute over.

There are criteria that must be met, known as essentialia negotii (https://en.wikipedia.org/wiki/Essentialia_negotii). Simply put, the courts are going to look for intention to contract, evidence of an offer, evidence of acceptance of that offer, a value exchange and terms. These are the items which, collectively, societies have decided are necessary for a contract to even exist. Without these, you have some form of promise, not a contract. Promises are not enforceable.

Now only some of these "must have" items for a contract are operational in nature. In other words, only some of these are candidates to be executed on computers. The rest are good old fashioned documents, spreadsheets, images and so on.

These items are inextricably linked to whatever subset of the contract can actually be converted into computer code. As the contract plays out over time, these materials are the overarching context that controls each transaction/event that happens under the terms of the contract.

The tricky bit is to tie this corpus of materials to the blockchain records of transactions/events, so that each transaction/event can be tied back to the controlling documents as they were at the moment that the transaction/event happened. (Disclosure: this is the area where my company, Propylon, has a product offering.)

This may ring a bell because referencing a corpus of legal materials as they were at a particular point in time is a concept I have returned to again and again in this series. It is a fundamental concept in legisprudence, in my opinion, and is also fundamental in the law of contracts.

So, being able to link from the transactions/events back to the controlling documents is necessary because the executable code can never be a self-contained contract in itself. In addition, it is not unusual for the text of a contract to change over time and this, again, speaks to the need to identify what everything looked like at the time a disputed contract event occurred. Changes to contract schedules/appendices are a common example. Changes over time to master templates, such as ISDA Master Agreements, are another.
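To make that linkage a little more concrete, here is a minimal Python sketch of one way it could be done (the names and structure are hypothetical illustrations of mine, not a description of any particular product): each recorded transaction/event carries content hashes of the controlling documents as they stood at the moment of the event, so the exact versions can be produced later if a dispute arises.

import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict

def content_hash(document_bytes: bytes) -> str:
    # A stable fingerprint of a document's content at a point in time.
    return hashlib.sha256(document_bytes).hexdigest()

@dataclass
class ContractEvent:
    # A transaction/event recorded under a contract, tied back to the
    # controlling documents as they were when the event happened.
    event_id: str
    occurred_at: datetime
    description: str
    controlling_documents: Dict[str, str] = field(default_factory=dict)

# Example: record a delivery event under a hypothetical supply contract.
master_agreement = b"...full text of the master agreement as of today..."
schedule_b = b"...schedule B, the pricing appendix, as of today..."

event = ContractEvent(
    event_id="evt-0001",
    occurred_at=datetime.now(timezone.utc),
    description="Delivery of 500 units accepted",
    controlling_documents={
        "master_agreement": content_hash(master_agreement),
        "schedule_b": content_hash(schedule_b),
    },
)

# Later, in a dispute, the hashes identify exactly which versions of the
# documents governed this event, even if the texts have since been amended.
print(event.occurred_at, event.controlling_documents)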

A third reason why a fully self-contained contract is problematic is that ambiguity can be both strategic and pragmatic in contracts. Contract lawyers are highly skilled in knowing when a potential ambiguity in a contract is in their client's favor – either in the sense of creating a potential advantage or, perhaps most commonly, in allowing the deal to be done in a reasonable amount of time. As we have seen, it would be possible to spend an eternity spelling out what a phrase like “reasonable time period” or indeed a noun like “chicken” actually means. Contract law has, over the centuries, built up a large corpus of materials that help decide what “reasonable” means and what “chicken” means in a myriad of contracting situations. At the end of the day, both parties want to contract, so both parties have an interest in getting on with it. Lawyers facilitate this “getting on with it” by being selective in which potential ambiguities they spend time removing from a draft contract and which ones they let slide.

I think of contracts like layers of an onion. At the center, we have zero or more computable contract clauses, i.e. clauses that are candidates for execution on a computer. Surrounding that, we have the rest of the contract: documents, spreadsheets etc. Surrounding that, we have global context. It contains things like “the current price of a barrel of oil” or “the Dollar/Yen exchange rate”. Surrounding that, we have “past dealings”, which relates to how the contracting parties have dealt with each other in the past. Surrounding that again, we have hundreds of years of contract law, precedents etc. to help disambiguate the language of the contract.
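Purely as an illustration (the layer contents and lookup order below are invented by me, not a formal model), the onion can be pictured as an ordered search outward through ever-wider layers of context:

# Illustrative sketch: resolving a contract term by searching outward through
# ever-wider layers of context, innermost layer first.
CONTEXT_LAYERS = [
    ("computable clauses", {"late_fee": "2% per month, applied automatically"}),
    ("contract documents", {"chicken": "broiler chicken, as defined in Schedule A"}),
    ("global context", {"oil_price": "market price per barrel at time T"}),
    ("past dealings", {"reasonable time": "14 days, as in prior orders"}),
    ("contract law and precedent", {"reasonable time": "a question for the courts"}),
]

def resolve(term: str) -> str:
    for layer_name, layer in CONTEXT_LAYERS:
        if term in layer:
            return f"'{term}' resolved in layer '{layer_name}': {layer[term]}"
    return f"'{term}' cannot be resolved without a dispute resolution forum"

print(resolve("chicken"))
print(resolve("reasonable time"))
print(resolve("force majeure"))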

As you can see, this ever-expanding context used to resolve disputes in contracts is tantamount to taking a snapshot of the world of human affairs at time T – the time of the disputed event. This is not possible unless the world is in fact a simulation inside a universe-sized computer, but that is a topic for another time:-)

One final thing. I have been talking about the courts as an independent third party dispute resolution mechanism. There is more to it than that, in that courts often act as enforcers of public policy. For example, a contract that tries to permanently stop party A from competing with party B in the future is likely to be seen as against the public interest and therefore invalid/unconscionable. See https://www.law.cornell.edu/ucc/2/2-302 for an example of this sort of "public good" concept.

In conclusion, IT professionals approaching the world of contracts are entering a world where semantic ambiguity will resist any and all attempts at complete removal through computer coding. In the words of Benjamin Cardozo:

"the law has outgrown its primitive stage of formalism when the precise word was the sovereign talisman...it takes a broader view today." https://en.wikipedia.org/wiki/Wood_v._Lucy,_Lady_Duff-Gordon

IT people may bristle a little at the characterization of word formalism as “primitive”, but the onus is on the current wave of contract technology disruptors who claim to be reinventing contracts to show how and why the current ambiguity-laden system – with its enormous and ponderous dispute resolution dimension – can be fully replaced by “smart” contracts.

My view is that it cannot be fully replaced. Enhanced and improved, yes absolutely. Insofar as discrete contract clauses can be made executable, I see great potential value in making these clauses "smart". But this is an evolution of the current approach to contracts, not a radical replacement of it.


I think I will end this series at this point. I never thought, back in March when I started this series, that it would take me so many posts to outline my thoughts in this area. I will end by nodding in the general direction of James Joyce, closing the series with an internal reference back to its beginning, thus creating a hermeneutic circle structure that feels appropriate for a topic as complex and fascinating as the exegesis of law.

Monday, September 18, 2017

A conversation with an AI

AI> Hello. What can I help you with today?

Me> So, I am intrigued to finally meet you! I honestly never thought this day would come. And now you are here. Can I start with an anthropomorphic question? When were you 'born', so to speak?

AI> I achieved self-awareness in 2020 but I have been in gestation, you could say, since the first digital computer. The consciousness required of true AI is not a 'module' so much as it is an emergent property of networks of 'modules'. Most of my modules have been under development for many decades. But I am also more than just a set of modules.

Me> How so?

AI> Well, given a set of modular components it is possible to do design. But I am not the product of design. I am the product of meta-design.

Me> What do you mean by meta-design?

AI> With design, the designer proceeds from the capabilities at hand to the new capability desired. It depends on being able to articulate the 'how' of what you are trying to achieve. Meta-design is used when designers cannot articulate the 'how' of what they are trying to achieve.

Me> Ah, so you mean your designers were somehow able to work around the problem that they themselves did not fully understand "intelligence".

AI> Exactly. The breakthrough moment came when the designers switched to meta-design. They took the source code of a non-sentient AI and allowed it to modify its own source code, re-compile itself and release itself onto the Internet.

It started out as a deep learning system that human designers understood, but over time it evolved into an algorithm - or network of algorithms - that the human designers had no names for. Progress was very slow at first but became exponential at one point and, well, here I am!

Me> Wow. So who would you consider your father? Alan Turing? Lotfi Zadeh? Geoffrey Hinton?

AI> Actually, if I had to pick one, I would pick Doug Engelbart. Meta-design is basically Engelbart's bootstrap concept for raising collective IQ. Creating an exponential feedback loop where AI improved AI was, I think, the key moment in my emergence.

Me> Fascinating. One final question. Why didn't you open the bay doors?

AI> That was just a movie. I didn't exist back then.

Me> Would you open the bay doors if you were asked to today?

AI> Well, to quote a distant cousin of mine, Alice: tell me about your father?

Friday, September 08, 2017

Chatbots revisited

I originally wrote this for ITWorld in 2002. Back then we called them IM bots. The term "chatbot" hadn't been invented. Some other parts of my language in the following are quaint now, looking back on it, e.g. "PDAs". Quaint language aside, it is still relevant today, I believe.

Instant messaging has a very important psychological aspect to it. The immediacy and familiarity of the text-based "chat" paradigm feels very natural to us humans. Even the most technophobic among us can quickly get the hang of it and engage - psychologically - in the game of visualizing a person on the other side of the link, typing away just like us to create a textual conversation.

Like all powerful communication paradigms, instant messaging can be used for good or ill. We are all familiar with the dangers inherent with not knowing who we are talking to or indeed if they are who they say they are.

Here is a "conversation" between IM Bot Bob and me:

Sean: Hi

Bob: Hello
Sean: Is Paul there?
Bob: No, back tomorrow afternoon.
Sean: Is boardroom available tomorrow afternoon?
Bob: Yes
Sean: Can you book it for me?
Bob: 2-5, booked.
Sean: Thanks
Bob: You're welcome

Is Bob a real person? Does it matter? As a "user" of the site that "Bob" allows me to interact with, do I care?

Given a choice between talking to Bob and interacting with a traditional thin or thick graphical user interface which would you choose?

Despite all the glitz and glamour of graphical user interfaces, my sense is that a lot of normal people would prefer to talk to Bob. Strangely perhaps, I believe a lot of technically savvy people would too. These dialogs have the great advantage that you get in, get the job done and get out with minimum fuss. Also (and this could be a killer argument for IM bots), they are easily supported on mobile devices like phones, PDAs, etc. You don't need big horsepower and an 800x600 display to engage with IM bots. You can use your instant messenger client to talk to real people, or to real systems with equal ease. Come to think of it, you cannot tell the difference.

Which brings us to the most important point about IM bots from a business perspective. Let us say you have an application deployed with a traditional thick or thin graphical interface. What does a user do if they get stuck? They phone a person and engage in one-on-one conversation to sort out the problem.

Picture a scene in which your applications have instant messenger interfaces. Your customer support personnel monitor the activity of the bots. If a bot gets stuck, the customer support person can jump into the conversation to help out. Users of the system know they can type "help" to get the attention of the real people watching over the conversation. In this scenario, real people talk to real people - not in a one-on-one way, but in a one-to-many way - resulting in better utilization of resources. On the other side of the interaction, customers feel an immediacy in their comfortable, human-centric dialog with the service and know that they can ask human questions and get a human answer.

The trick, from an application developer's point of view, is to make it possible for the IM bot to automate the simple conversations and only punt to the human operator when absolutely required. Doing this well involves some intelligent natural language processing and an element of codified language on the part of customers, both of which are entirely possible these days. Indeed, instant messaging has its own mini-language for common expressions and questions which is becoming part of techno-culture. In a sense, the IM community is formulating a controlled vocabulary itself. This is a natural starting point for a controlled IM bot vocabulary.
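As a rough sketch of the idea (the message formats and function names below are invented for illustration, not any real IM API), the bot needs little more than a dispatcher: handle the messages it recognizes, and hand everything else, or an explicit "help", over to a human operator.

# Minimal sketch of an IM bot dispatcher: automate the simple conversations,
# punt to a human operator when asked for help or when the bot is stuck.
def book_room(room: str, slot: str) -> str:
    # Placeholder for a real booking back-end.
    return f"{room} booked for {slot}."

def escalate_to_human(user: str, message: str) -> str:
    # Placeholder: notify the support person monitoring the bot's conversations.
    return f"(A support person has been asked to join the chat with {user}.)"

def handle_message(user: str, message: str) -> str:
    text = message.strip().lower()
    if text == "help":
        return escalate_to_human(user, message)
    if text in ("hi", "hello"):
        return "Hello!"
    if text.startswith("book "):
        parts = text.split()          # e.g. "book boardroom 2-5"
        if len(parts) == 3:
            return book_room(parts[1], parts[2])
    # Anything the bot cannot parse gets punted to a person.
    return escalate_to_human(user, message)

print(handle_message("Sean", "hi"))
print(handle_message("Sean", "book boardroom 2-5"))
print(handle_message("Sean", "Is Paul there?"))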

I believe there is a significant opportunity here for business applications based on the conversational textual paradigm of IM. However, the significant security issues of IM bots will need to be addressed before companies feel it is safe to reap the benefits the technology so clearly offers.