Wednesday, November 30, 2011

Legislative Software on NetBeans

A short writeup on the LWB Legislative/Parliamentary Software Suite on the NetBeans Developer Zone website.

Wednesday, November 23, 2011

Leonard Cohen

New song. New album. 'Nuff said.

Monday, October 24, 2011

Another sad loss

My mum used to say that bad news and good news travel around in groups of three... We had Steve Jobs, then Dennis Ritchie, and now, apparently, John McCarthy.

Lisp was one of the great inventions in all of computer science. Its influence on all programming languages - including those that chose to make themselves not-Lisp-like - has been immense. If ever a programming language embodied a paradigm - a way of looking at the world - it is Lisp.

Emacs, AutoCAD, SoftQuad SGML Author/Editor, Interleaf...

Then of course there was SICP...

Friday, October 14, 2011


Having a soft spot for paper books may seem odd for somebody in my line of work but I absolutely *love* books - paper books.

Join that with an interest in legal publishing and, well, this pic from inside the LoC is like a picture of Disneyland to a child.

I had the great fortune to get a tour of the law library in the LoC a while back and it is an absolutely stunning place for a book/law nerd like me to visit. Amazing.

Thursday, October 13, 2011

Dennis Ritchie RIP

Wow. Today I learned that Dennis Ritchie has passed away. Very sad.

I met him once at a Unix conference in London when the Plan 9 operating system was being evangelized as the logical successor to Unix.

I still have my copy of "K and R" - the book properly known as The C Programming Language, written by Kernighan and Ritchie. A true classic and, as far as I know, the world's first "hello world" programming example.

Thursday, October 06, 2011

Steve Jobs remembered

My first paying job in IT was to implement a stock control system in Visicalc on an Apple ][ running the CP/M operating system. One of these. A bit later I worked on a Lisa and then a Fat Mac.

It was the Mac that started my love affair with text processing thanks to the Apple ImageWriter printer and then the amazing Apple LaserWriter.

One of my final year projects in Computer Science in Trinity College Dublin (1987) was a 3D wireframe teapot. I created it by sending PostScript directly to an Apple LaserWriter over its printer cable, using an Apple IIc as a terminal.

From the Apple II to the iPad. What an amazing progression.

RIP Steve Jobs

Thursday, September 22, 2011

A coming epidemic of disabled programmers?

I am very glad to see that RSI (Repetitive Strain Injury) is the subject of a talk at PyCon Ireland.

The world is full of programmers in the 20-30 year old range, regularly pulling 18 hour coding sessions without giving it a second thought. Only taking breaks for restroom use, eating at their desks...

You can do that in your twenties and thirties but chances are, when you get into your forties, your hands and arms are going to start complaining.

As the saying goes, "I was that soldier". But, at least in my case I was mostly working with desktop machines and large-ish keyboards in my crazy coder days.

Compare today's high-octane coders. Weapon of choice is a laptop, often rested on the knees or propped precariously on a table along with 12 other laptops, hunched shoulders, knotted brows, bad light...

Not good. It *will* catch up with you. Speaking from personal experience, there is nothing more depressing than wanting to code and not being able to because your hands/arms are screaming at you. At one stage, I was reduced to tapping keys with the eraser tip of a pencil in order to get my e-mail.

Don't do what I did. Pay attention to RSI risks in your twenties/thirties and with luck you will never be visited with any problems.

Wednesday, September 21, 2011

Visual Tables and Meaning

John Lewis commented on my post about the thorny issue of visual tables and content semantics. John goes on to ask whether the tables can somehow be removed from legal or para-legal documents.

I do not think so, unfortunately. The inter-weaving of pure content and semantics is too deep. Douglas Hofstadter's article in Scientific American about Knuth's Metafont system is a great examination of how deep this problem really is. (The article is re-printed in Hofstadter's book Metamagical Themas. It does not appear to be online.) Hofstadter coined the term Ambigram to show how even simple typographic constructs can lead to interesting semantic ambiguities. I have seen some accidental ambigrams in legal texts over the years :-)

The field of mathematics has long struggled with this in its search for an executable representation of mathematical constructs. Some notations are just so visual it's difficult to see how there could ever be a useful separation made between their content and the presentation. For example, Penrose's Tensor Diagram Notation. In extreme cases, the presentation is the content. No wonder that TeX remains the weapon of choice for mathematicians :-)

And yet, the legal world manages to survive the ambiguities and contradictions in its corpus. How? Via what is known in semiotics as a dynamical interpretant: the Judiciary :-) It is a beautifully simple idea. If there is a doubt as to the meaning of a text, the Judiciary tells you what it means. The explications provided are then themselves captured in textual form known as case law, and the case law becomes legally powerful thanks to stare decisis.

An analogy from software development is unit testing. The code is the code is the code but the true meaning of the code? The unit tests tell you that. The code “means” what the unit tests tell you it means. All else is just syntax. Case law is a bit like a unit test suite.
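To make the analogy concrete, here is a toy Python sketch (the function and figures are invented for illustration): the code of the function is just syntax, and the tests beneath it play the role of case law, pinning down what the function actually "means".

```python
# Toy illustration: the code states a computation; the tests fix its meaning.

def add_interest(balance, rate):
    """The syntax says: multiply and add. The tests below say what it means."""
    return round(balance * (1 + rate), 2)

# The "interpretant": unit tests that settle questions of meaning,
# such as what happens at zero balance or zero rate.
def test_add_interest():
    assert add_interest(100.0, 0.05) == 105.0   # 5% on a round figure
    assert add_interest(0.0, 0.05) == 0.0       # no balance, no interest
    assert add_interest(100.0, 0.0) == 100.0    # zero rate is identity

test_add_interest()
```

Like case law, each new test is a precedent: once recorded, it constrains every future "reading" (refactoring) of the function.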

Is it possible to remove the tables completely from legal/para-legal documents? No, the meaning is just too subtly inter-twined with the presentation. Is it possible to remove the need for unit tests in software development? No, the meaning of source code is impossible to separate from its interpretant - an execution environment. A great way to see this is to look at static analysis tools and realize what it is about your code that static analysis can never tell you. Arguably, the limitations of static textual analysis were established by the great Alan Turing back in 1936 with the halting problem.

So, if the tables cannot come out, what to do? I believe the most promising approach is to use the interpretant and stare decisis to remove as much ambiguity as possible. I.e. legally/socially binding exposition on what parts of tabular material contribute to meaning and what parts do not. That way, computer system designers like me would have guidance as to what needs to be retained and what doesn't. Examples are things like fixed widths, tab leaders, vertical character alignments etc.

I honestly do not think it is possible to completely separate typesetting attributes into a nice binary "keep/optional" split but we won't know until we get the stare decisis process kicked off and let the interpretants in the Judiciary do their thing.

I know of no jurisdiction that has attempted to grapple with this issue to date but it is becoming more and more pressing as the need for digital "authentic" legal materials grows and grows.

Friday, September 02, 2011

On URIs and URNs: Every problem can be solved with another level of indirection...

John Sheridan is pondering URIs and URNs. I have pondered that a lot too. The idea of having another degree of independence between names (e.g. cites to legislation) and actual dereferenceable identifiers makes a lot of sense of course. We don't want to tie ourselves down to implementations or platforms or server hosts if we can avoid it. Especially if the goal is to have very long-lived identifiers.

However, a few things worry me about the standard "let's use URNs" reaction to long term identifiers.

1) URLs are already completely and utterly devoid of any direct connection to the underlying assets they point to. The number of levels of indirection present in your average resolution of a URL to a stream of bytes in RAM is already very large and many of them are under our control. I.e. we can change the mappings at will. The days of "static" IP addresses are long gone. So, the notion that URNs help because you can change the resolution process without touching the assets themselves doesn't sit well with me, because I can do that with plain old URLs... at many levels, from DNS to VLANs to NATting to HOSTS files to HTTP redirects etc. etc.

Given the plethora of mappings already present in the (URL->Resource Representation) resolution process, do we gain much adding another one in the form of a URN mapping?

2) URN schemes need resolvers, and in many systems I have seen that use URNs, representations get served up with embedded hyperlinks. The embedded hyperlinks often use URLs to access the URN resolver. I.e. http://.../resolve_urn?urn=foo. But of course, in order to do that, the representation ends up creating a dependency on the URL that accesses the resolver :-) If I save that asset, I now have a rendering that is dependent on the URL of the resolver - despite the presence of the URNs in the asset. So, have I gained anything?
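A little illustrative Python of that circularity (the resolver host and URN below are entirely made up): the document carries a URN, but dereferencing happens through a resolver URL, so the saved document still depends on a URL after all.

```python
# Sketch of the URN-resolver circularity: the URN rides along in the
# document, but a URL does all the actual work. All names are hypothetical.

URN_RESOLVER = "http://resolver.example.org/resolve_urn?urn="  # itself a URL!

def embed_link(urn):
    """Render a URN as a hyperlink inside a served representation."""
    return URN_RESOLVER + urn

link = embed_link("urn:lex:us;ks:statute:21-5401")
# The saved document now depends on resolver.example.org staying reachable,
# which is exactly the fragility the URN was meant to insulate us from.
assert link.startswith("http://")
```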

3) It is true of course that domains are rented not owned and this makes folks uneasy about long term reliance on a namespace that is not fully under their control. However, the world is now so utterly dependent on DNS that a fair amount of caselaw exists to protect entities against cybersquatting and loss of access to DNS rental rights. Plus standing up your own DNS inside a firewalled environment is straightforward. Plus creating local mappings in a hosts file is very straightforward. And so on. Lots of options if you need to take control of the resolution process and re-map it.

4) Finally, a non-technical argument that also plays into my skepticism about URNs. They have been around forever. So have identifier schemes like DOI and SSN etc. The internet seems to have voted with its feet already and subsumed all these into URL-based resolvers of various kinds. Witness the recent explosion in link shorteners. They map a URL to another URL... I just don't see market pressure out there for a different way to control de-referencing on the Internet.

All in all, with all the mappings already present and the malleability/configurability of same, I don't see a compelling rationale for adding another one in the form of URNs.

What am I missing?

Thursday, August 18, 2011

Looking forward to the GIS-Pro event

I will be doing the closing keynote at the URISA GIS-Pro event. I enjoy closing keynotes when - like this event - I get a chance to attend the whole event. That way I get to Zeit the Geist, so to speak, in my talk.

Monday, August 15, 2011

NIEM and EDemocracy

Today we are instigating an initiative to leverage and apply NIEM to EDemocracy in the USA.

A mailing list has been established:

Details to follow on the mailing list in the days ahead.

Algorithmic legal advice...

This is fascinating. As computing continues to advance into decision support/advisory domains, we are going to see more dis-intermediation pressure.

Monday, July 25, 2011

Authenticated, Preserved, Accessible.

Authenticated, Preserved, Accessible. Some day (hopefully soon), all laws will be made this way.

Wednesday, July 20, 2011

In design, unknowns are a form of constraint

Seth says embrace the constraints absolutely. Some of the best design (in s/w and elsewhere) I have ever seen has had to deal with serious constraints of different forms. Some of the worst design (in s/w and elsewhere) I have seen has been in situations where there were few constraints on the designers.

A very common form of constraint in s/w design is the unknown. "I cannot progress the design because I don't know enough about X or Y or Z"...

It is often best to embrace these rather than allow them to slow you down. Make assumptions, document your assumptions and then proceed with the design. Rank assumption variants in terms of their probability and do variant designs based on the high probabilities first.

Unknowns are a fact of life. Deal with it.

Wednesday, July 06, 2011

Updated MicroXML draft spec published

Updated MicroXML draft spec published. The JsonML stuff is of particular interest.

Tuesday, July 05, 2011

Whither Scala?

Hey, I have nothing against clojure ((really), (I don't)), but how come Scala isn't on Heroku yet?

KLISS and Cloud Computing

Short piece about KLISS.

Friday, July 01, 2011

Open Government New York

Road Map for the Digital City: Achieving New York City's Digital Future is very well presented and an interesting read.
The tenets of Open Government in the doc are:

1. Open Government democratizes the exchange of information and public services, inviting all citizens to participate and engage.
2. Open Government connects citizens to one another, supporting more efficient collaborative production of services over the traditional mode of citizen consumption of government-produced services.
3. Open Government information is more valuable when it is collected at the source, and published in near-real-time.
4. Open Government data is machine-processable.
5. Open Government invites all information consumers - inside and outside government - to correct, improve, and augment data.
6. Open Government uses open standards, formats, APIs, licenses, and taxonomies.
7. Open Government is accountable and transparent, perpetually self-evaluating, iterating, and exploring new ways to solve old problems.
8. Open Government makes as much information as possible available to as many actors as possible and is designed to minimize financial and technological barriers to accessibility.
9. Open Government enables efficiency, cost savings, and the streamlining of government services.
10. Open Government is compatible, nimble, and mashable, fostering collaboration, coordination, and innovation with other governments, academic institutions, and organizations.

I like the "compatible, nimble and mashable" characterization. That sums it up really.

Tuesday, June 28, 2011

M is for "Map" as well as "Mobile"

Remember all the M- stuff? M-Government for example? I am seeing an interesting trend in eDemocracy-related data sets. People want to consume/contribute to the information corpus while mobile - that much is a given. iPads, Android phones etc...But a lot of eDemocracy-related information has a strong geo-component to it.

Thence M for Maps on top of M for Mobile. A powerful combination for eDemocracy. A real paradigm shift I think, away from the P for Paper and P for PDF that, sadly, currently dominate.

So, I'm not surprised to see M-for-Maps (Government) becoming a buzz item in Asia where a lot of the M-for-Mobile (Government) emanated from. Inciting citizen engagement with Spatial Information

Version control for legislation

An interesting discussion on versioning law has started on Quora.

Webinar and slides available from Open Government Canada event

The webinar recording+slides of the Open Government Canada event are available here

Wednesday, June 22, 2011

Lawmaking as Geo-Design : Propylon at ESRI User Conference

We will be exhibiting and presenting at the ESRI User Conference this year. Looking forward to it. If you are going to the event and want to talk about the power of GIS in Law Making, contact me.

Access to good law : the nub of the problem

Very good piece on binarylaw blog about accessible law. It is a tragedy that the Governments who produce the stuff are not considered to be good sources of the raw materials.

I have no problem with the idea of commercial third parties adding value on top of the base corpus but it is just not healthy to have the base corpus itself unavailable, not only to citizens but to entrepreneurs in the space. The situation is so bad currently in many jurisdictions that Government bodies buy back access to their own work product :-/

And by base corpus here I mean bills, journals, statute, regulations, case law etc. You need the entire corpus to have a full picture of legislation. If any one part of the above is missing, you don't have a complete base from which to work.

Tuesday, June 21, 2011

KLISS : A hybrid cloud

In most of my posts about KLISS I have concentrated on the logical architecture aspects and have not spent much time talking about how the "iron" underneath is handled.

KLISS is a hybrid cloud built with Cisco networking, HP compute, EMC storage/backup, VMWare virtualization, RedHat OS and LWB.

Monday, June 20, 2011

Paperless operation, time-based XML, content authentication etc.

Great stuff from Tom Bruce testifying before Congress.

"Such a point-in-time system -- one that makes it possible to know what the state of the law was at a particular time in the past, or what it will be at some point in the future when pending laws come into effect -- would be a very valuable tool".

Indeed. I would go further however. Without such a system you do not have a comprehensive audit trail that explains how your laws came to say what they do. Nobody would accept a P+L and a Balance Sheet without detailed transactions to back it up. Why do we accept it for something as critically important as law?
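A toy sketch in Python of that bookkeeping view (the section numbers, dates, and texts below are invented): the text of a law at any point in time is just the replay of its amendment log, so the audit trail explaining how the law came to say what it does is always available.

```python
# Point-in-time text recovery by replaying a transaction log.
# All entries are hypothetical; dates are ISO strings so they sort lexically.
transactions = [
    ("2009-01-01", "21-5401", "Original enactment text."),
    ("2010-07-01", "21-5401", "Text as amended by HB 2010."),
    ("2011-07-01", "21-5401", "Text as amended by SB 123."),
]

def text_as_of(section, date):
    """Replay amendments effective on or before `date` to get the text then."""
    current = None
    for effective, sec, text in transactions:
        if sec == section and effective <= date:
            current = text
    return current

# What did the section say at the end of 2010? The log tells us exactly.
assert text_as_of("21-5401", "2010-12-31") == "Text as amended by HB 2010."
```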

It is nice to see many of the concepts that underpin the LWB architecture coming more into main stream thinking about legislative IT.



Thursday, June 16, 2011

Busy conference season ahead

Thursday, June 09, 2011

Imprecise Statute versus imprecise Regulations

Justice Scalia on a key difference between Statute and Regulation in American law: "When Congress enacts an imprecise statute that it commits to the implementation of an executive agency, it has no control over that implementation (except, of course, through further, more precise, legislation). The legislative and executive functions are not combined. But when an agency promulgates an imprecise rule, it leaves to itself the implementation of that rule, and thus the initial determination of the rule's meaning. And though the adoption of a rule is an exercise of the executive rather than the legislative power, a properly adopted rule has fully the effect of law. It seems contrary to fundamental principles of separation of powers to permit the person who promulgates a law to interpret it as well."

In software terms, I think of it this way: if you write an API, make sure somebody else implements it as well as you. Your API will be that much better for it. True meaning is the result of independent interpretation.

Monday, June 06, 2011

Digital Signing and Action at a Distance

Love the story today about President Obama and the auto-pen. Gotta love the pic of Thomas Jefferson’s Polygraph on that page too.

Both Jefferson's gadget and the auto-pen raise the question of cause and effect in the legal concept of signing/notarizing/witnessing.

In the old days a single cause (person with pen) created a single effect (signed vellum sheet). As soon as something - anything - intermediates between the cause and the effect, things get a lot more complex.

In IT, a so-called "digital signature" has little in common with its physical-world analogs. For example, a single cause may generate multiple effects (one signed doc gets replicated a million times - each "copy" indistinguishable from the "original"). Moreover, the "cause" is always a computer program - not a person.

If I write the software that has the button you press to "sign", who/what does the signing? My software or you? If I change my software so that it just signs whatever needs to be signed, every day at 12 o'clock, who/what is doing the signing now? Given that "signed" documents produced by the latter process are completely indistinguishable from the former, how can the courts deal with the fact that my software may have gone haywire and signed a bunch of things on your behalf that never should have been signed?

In the world of bits, the blunt truth of the matter is that an "original" bit-stream of any object is a tough, tough thing to pin down. The closest analog that I know of is tamper evident content addressable storage like EMC Centera (That is what we use in the KEEP system here in Kansas).
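A quick Python sketch of the copy/original problem. An HMAC stands in here for a real digital signature scheme (the key and document text are invented), but the point carries over: the signature is a function of the bytes alone, so every bit-identical copy verifies exactly as well as the "original".

```python
# Toy sketch: in the world of bits, nothing distinguishes copy from original.
import hashlib
import hmac

KEY = b"hypothetical-signing-key"  # stands in for a private signing key

def sign(document: bytes) -> bytes:
    """Signature depends only on the bytes of the document."""
    return hmac.new(KEY, document, hashlib.sha256).digest()

original = b"Be it enacted by the Legislature..."
copy = bytes(original)  # a bit-for-bit duplicate

# Both "signed documents" are utterly indistinguishable.
assert sign(original) == sign(copy)
```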

Friday, June 03, 2011

More on media-neutral citation

I applaud the start of the initiative but would urge those involved not to limit their purview to just caselaw.

A full picture of the law at any point in time requires not just caselaw but also statutes, session laws and regulations to name but three. Relevant here is Joe Carmel's initiative.

Law is a wonderful example of a domain where standards for citation are at least as important as standards for formats of the documents themselves.

The last 20 years are littered with failed initiatives to standardize the formats for all the document types relevant to law. Universal standard (semantic) formats for legal documents is a wonderful target to aim for but a tremendous amount of value can be unlocked by the significantly less daunting challenge of a universal citation framework.

In KLISS, we use the phrase "no wrong door" to describe the idea that any legal asset (a bill, a statute section, a journal, an attorney general opinion, a regulation, a court judgement etc.) that has cross-references *should* be traversable by machine. Moreover, to do this properly the linkages should be such that following links retrieves the assets as they were at the time the originating document was created.

Anything short of that results in an incomplete (or worse, false!) view of the overall legal synoptics involved.
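Here is a rough Python sketch of that kind of time-aware dereferencing (the citations, dates, and texts are invented): each snapshot of an asset is keyed by citation and effective date, and a link is resolved as of the citing document's creation date.

```python
# "No wrong door", sketched: dereference a citation as of a point in time.
# Snapshot data below is hypothetical; ISO date strings sort lexically.
snapshots = {
    ("KSA 21-5401", "2010-07-01"): "2010 text",
    ("KSA 21-5401", "2011-07-01"): "2011 text",
}

def dereference(citation, as_of):
    """Return the latest snapshot of `citation` no newer than `as_of`."""
    dates = sorted(d for (c, d) in snapshots if c == citation and d <= as_of)
    return snapshots[(citation, dates[-1])] if dates else None

# A bill drafted in March 2011 should see the statute as it stood then,
# not the text of the later amendment.
assert dereference("KSA 21-5401", "2011-03-15") == "2010 text"
```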

Apache OpenOffice

As a big user of OpenOffice, Xerces, Ant and SVN - to name 4, I'm very happy to see OpenOffice move to Apache Foundation.

Wednesday, June 01, 2011

The end of the 2011 Legislative Session in Kansas

Today, the House and the Senate adjourned sine die. We have a lot of very exciting plans for KLISS going forward and work on those starts now. I will be blogging the journey as it unfolds...

Tuesday, May 31, 2011

Another step beyond page/line-based citation

I see Illinois Supreme Court has introduced a public domain paragraph-based citation mechanism for their cases.

Those unfamiliar with legal materials may be thinking "Public domain? How could it be otherwise?" Well, believe it or not, page (and line) numbers have, for some considerable time, been contested intellectual property. See star pagination for example.

Thursday, May 05, 2011

Authentication and versions...of legal material

From an interview with Margaret Maes on the Library of Congress's Digital Preservation website
    "Part of authentication is having the right versions – and all of the versions – of something that you might need if you are a researcher and you are really trying to nail down a point," Maes said. As a law develops from a proposal to a final version, there are often elements of meaning or intent that somebody can use when trying to prove a point about a piece of legislation. In the legal discipline, the more people pay attention to versions, the more important it becomes to preserve them.

Amen to that. That is why we will be ingesting complete temporal databases of complete legislative biennia as SIPs in the KEEP archive.

Wednesday, May 04, 2011

Congress and workflow

Interesting post on the role or otherwise of workflow engines using Congress as an example.

In my opinion, the reason that "workflow" is such a bad fit for Congress (or any other law making body) is that the process of making law takes place according to a set of constitutive rules, not a set of imperative rules (after John Searle).

I.e. Congress does its work in a competitive, game-like environment where "plays" in the game are deemed admissible or inadmissible based on a set of constitutive rules. Congress does not follow a set of imperative rules of the form (do this; then do this; if this happens, then do this…). Workflow engines can work fine in the latter environment but are very problematic in the former. Especially when you consider that the constitutive rules under which Congress operates can themselves be modified by Congress. In this sense the "game" underlying Congress is very similar to Nomic.
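To illustrate the difference, here is a toy Python sketch - the rule is invented and real legislative rules are vastly richer - of a constitutive-rule engine. Rather than prescribing the next step, it merely judges whether a proposed "play" is admissible given the rules currently in force.

```python
# Constitutive rules, sketched: a play is checked for admissibility against
# whatever rules are in force; nothing prescribes what happens next.

def admissible(state, play):
    """Is this play allowed, given the rules now in force?"""
    return all(rule(state, play) for rule in state["rules"])

# One invented rule: no final action on a bill before its third reading.
def third_reading_before_final(state, play):
    if play == "final_action":
        return "third_reading" in state["history"]
    return True

state = {"rules": [third_reading_before_final], "history": ["first_reading"]}
assert not admissible(state, "final_action")   # rejected, not "routed"
state["history"].append("third_reading")
assert admissible(state, "final_action")       # now the play is in order
```

Note that `state["rules"]` is itself just data, so a play could add, remove, or rewrite rules - which is exactly the Nomic-like property that breaks imperative workflow engines.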

The LWB system has to deal with these realities of legislative workflows. The white paper covers some of this and is available here.

Tuesday, May 03, 2011

TV sets going the way of the dodo?

So the TVs are disappearing? Well, it all depends on how you define "TV". If by "TV" you mean a box that limits you to X "stations" and connected into some Eighties-class shielded copper cable thing, then yes, absolutely.

If you mean internet terminal with support for legacy "TV" stations and cabling, then no, they are not going away....Every time I walk into BestBuy a bunch of Internet-enabled flatscreen TVs are walking out.

The concept of a Television set is morphing at the moment, just as the concept of a Telephone morphed with the introduction of cellular and now VoIP.

Thursday, April 28, 2011

Towards multi-modal law

The recording and expressing of law needs to very quickly come to grips with the "new"[1] phenomenon of the internet and digital media. The judicial branch is probably going to be the place where it happens first. See illustrated judgements for some examples.

Law started out as a largely oral phenomenon but became very textual over time. With some notable exceptions (e.g. oral arguments, various forms of declaration), the written record of the thing *is* the thing itself in the world of law. I.e. acts of congress live in the paper they are committed to. The paper is not a second-hand, best-efforts transcript of what was really said out loud by Congress. From a legal perspective, it *is* what Congress said. What Congress said out loud may be useful background information (a view held by some legal theorists, but not all) but the law itself lives in the four corners of the paper it is written on.

Now along comes the Internet. Dynamic web pages. Audio streams. Video streams. Interactive Maps...How much longer can the notion that law-is-mostly-words hold? As the law-making process becomes more and more digital, we will see more and more photos, interactive graphs, dynamic maps, spreadsheets etc. being passed into law.

We are headed towards a multi-modal view of law. A world in which law is not merely written words and not merely audio/video of spoken words. It is a new thing in the world. It is beyond words (so to speak).

A profound change cometh.

[1] At law time-scales, the internet is very new. The recording of law, after all, goes back to at least 1700 BC

Wednesday, April 27, 2011

Yet another "eventually consistent" based design

Riak looks good. When it comes to the CAP theorem, A and P are definitely the ones to have in favor of C. You can get very good intuitive behavior without C by combining idempotency with time-stamping. Pat Helland's writings really opened my eyes to that.
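A small Python sketch of that idea (a deliberately simplistic last-writer-wins register, with invented data): because each update is timestamped and applying it is idempotent, replicas can receive the same updates repeatedly and in any order and still converge, with no coordination required.

```python
# Eventual consistency via idempotency + timestamps (last-writer-wins).

def apply_update(store, key, value, timestamp):
    """Idempotent: re-applying an update is a no-op; out-of-order delivery
    is harmless because only the newest timestamp wins."""
    current = store.get(key)
    if current is None or timestamp > current[1]:
        store[key] = (value, timestamp)

replica_a, replica_b = {}, {}
updates = [("title", "Bill 42", 1), ("title", "Bill 42 (amended)", 2)]

for u in updates:                 # replica A sees updates in order
    apply_update(replica_a, *u)
for u in reversed(updates):       # replica B sees them out of order...
    apply_update(replica_b, *u)
apply_update(replica_b, *updates[0])   # ...and one of them twice

assert replica_a == replica_b    # both converge to the amended title
```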

Monday, April 25, 2011

Chaos Monkey

Chaos monkey. Great name for a very very important concept in web-scale application engineering.

Thursday, April 21, 2011

Document error : How to fix it...

In most walks of life, mistakes in electronic documents get fixed when found. The fix generally involves just editing the original. No big deal.

However, this is not always the right thing to do. Legal documents for example are best thought of in book keeping terms. If a bookkeeper finds an error in a ledger he or she does not simply edit it. Rather, a new debit/credit is recorded to counter-balance the error but leave the integrity of the records intact.

Here is a good example from the US National Archives. "Fixing" an original - however broken - is a no-no. Fixing a variant - with an audit trail back to the original - that's a different matter.
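In code terms, the bookkeeping approach looks something like this toy Python sketch (the entries are invented): the erroneous record is never edited or deleted; instead a correcting entry is appended that points back at it, leaving the integrity of the record intact.

```python
# Append-only correction, bookkeeping style: never edit, always append.
ledger = [
    {"seq": 1, "entry": "Section 2 fee set to $100"},
    {"seq": 2, "entry": "Section 2 fee set to $10"},   # the erroneous entry
]

def correct(ledger, bad_seq, corrected_text):
    """Append a correction that references the erroneous entry."""
    ledger.append({"seq": len(ledger) + 1,
                   "entry": corrected_text,
                   "corrects": bad_seq})

correct(ledger, 2, "Section 2 fee set to $1000")
assert len(ledger) == 3              # nothing was deleted or rewritten
assert ledger[2]["corrects"] == 2    # audit trail back to the original
```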

Tuesday, April 19, 2011

Law as source code versus Law is source code

There is no doubt in my mind that some parts of the legal corpus can benefit from a rigorous expression language, be that Python in the case of executable SEC regulations or C# for crisply defining predicate functions.

Things get interesting once one adopts a formal expression syntax for law because even if you are just in the business of "writing it down" in VDM or Z Notation or PNML, somebody, somewhere is going to write an interpreter/compiler for it :-)
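For example, a "regulation" written down as a predicate function (the rule below is entirely invented) is already executable the moment it is written down - no separate interpreter needs to be built:

```python
# Hypothetical sketch of a regulation as an executable predicate.

def exempt_from_filing(annual_revenue: float, employee_count: int) -> bool:
    """Invented rule: small entities are exempt from the filing requirement
    if revenue is under $500,000 and headcount is under 10."""
    return annual_revenue < 500_000 and employee_count < 10

assert exempt_from_filing(250_000, 4)        # small entity: exempt
assert not exempt_from_filing(1_000_000, 4)  # revenue too high: must file
```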

Speaking of Gall...

Remember The Web Services Stack? Yup. Gall's Law again. I have no doubt that over time, REST will accrete some unfortunate complexity but it *started* simple and this appears to be a necessary first step in many fields of endeavor.

You cannot install complexity

You cannot install complexity. How true. Behind any complexity that *works*, there used to be a simplicity that worked: Gall's Law.

Saturday, April 16, 2011

Language Universals...

The universality - or otherwise - of human language constructs continues to be hotly debated. From this article: "cultural evolution is the primary factor that determines linguistic structure, with the current state of a linguistic system shaping and constraining future states."

The whole debate relates to programming languages, too, in my opinion, as these are, no less than Gaelic or Japanese, linguistic creations of mankind. I think of it this way : the concepts of variable binding and conditional branching occur in every programming language aimed at von Neumann architectures because you cannot be Turing complete without set and branch-test operations. (Right?)
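A toy illustration of that claim (this little instruction set is my own invention, not any particular formalism): a "machine" offering nothing but set (assignment) and branch-test (conditional jump) operations can already compute non-trivially - here, 5 factorial.

```python
# Minimal machine with only "set" and "branch" instructions.

def run(program):
    env, pc = {}, 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "set":            # ("set", name, fn(env))
            env[op[1]] = op[2](env)
            pc += 1
        elif op[0] == "branch":       # ("branch", test(env), target)
            pc = op[2] if op[1](env) else pc + 1
    return env

factorial = [
    ("set", "n", lambda e: 5),
    ("set", "acc", lambda e: 1),
    ("branch", lambda e: e["n"] == 0, 6),      # done? jump past the end
    ("set", "acc", lambda e: e["acc"] * e["n"]),
    ("set", "n", lambda e: e["n"] - 1),
    ("branch", lambda e: True, 2),             # unconditional jump back
]
assert run(factorial)["acc"] == 120
```

Whatever the surface syntax - curly braces, parentheses, significant whitespace - some encoding of these two operations is lurking underneath.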

So the existence of these two in the syntax, or underlying parse tree, or underlying machine code, of nearly all programming languages, is to be expected. However, if you find yourself swimming in curly braces (Java, Lua, C++), you are probably, culturally speaking, in C or some descendant thereof. If there are strings everywhere, you are in Snobol or some descendant thereof. Etc.

Now, if, in the final analysis, nothing is truly universal to these languages other than the concept of a universal Turing machine, then *that*, arguably, is the shared construct - not any linguistic device used to leverage the construct. Having said that, there is no way to wield the construct without creating syntax, so the difference is, perhaps, moot. Think of it like gravity. It impacts mass. So, to wield gravity to your advantage, you need mass. The end is gravity, but the means to the end is mass...

"O body swayed to music, O brightening glance,
How can we know the dancer from the dance?" -- W.B. Yeats

Friday, April 15, 2011

Sadly, missing NELIC

Unfortunately, I cannot attend the New and Emerging Legal Infrastructures Conference ( NELIC) tomorrow.

Quantitative legal prediction and legal automation are subjects I am very interested in. I hope to submit a proposal for the next event.

Thursday, April 14, 2011

The terrible beauty of dynamic documents

Ever since the world moved its information from atoms (clay tablets, paper etc.) to bits, we have seen an incredible increase in the volume, richness and sheer power of information.

Much of that power comes from the fact that information has become dynamic. Web pages are created and rendered *on demand*. WYSIWYG word processors construct a layout for a document *when you ask for it*. What you see on that web page is increasingly likely to be different from what I would see if I looked too, because the page is customized to you - your browser, your location, your profile history, the time of the request... etc. What I see when I open that PDF depends on what the embedded javascript does and it might well do something different for me than it would for you. Worse, it might do something different for me five minutes from now!

This phenomenon is accelerating and unleashing amazing things. However, we still live in a world where information such as a document is thought of in paper terms. I.e. something that is not invented on demand. Something that is the same for all observers. Something that will not change itself automatically over time. Much of the world of law revolves around that idea of stable information. Much of the world of regulatory oversight revolves around that idea too.

Well, it is becoming increasingly hard to wield IT with a stable-information world view. The world isn't going to row back from WYSIWYG or Web App Servers or Mashups ...and I'm not suggesting for a minute that it should. However, until such time as the legal world and the regulatory world find a way to live in a completely dynamic information environment, practitioners in those fields need to be very careful.

For years, I have been looking around for some crisp terminology to capture the challenge we face. I think I might have found it. Information is increasingly exogenous. Documents do not hold your information. Your information is the result of programs (such as WYSIWYG word processors) acting on the data (the documents).

If you have perfect replicas made of your documents, you haven't got a perfect replica of your information unless you are very careful to control the exogenous factors. Thankfully, for a lot of the world's information, it doesn't really matter if you do it imperfectly. However, it really, really matters in the world of law and regulation.
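A small Python sketch of exogenous information (the "document" is invented): two byte-identical replicas that nonetheless yield different information, because the rendered result depends on who is viewing and when, not on the stored bytes alone.

```python
# Byte-identical documents, divergent information: the information is the
# product of program + environment, not of the bytes by themselves.
import datetime

document = "Hello {viewer}, the current time is {now}."  # the stored bytes

def render(doc, viewer):
    """Rendering injects exogenous factors: the viewer and the clock."""
    return doc.format(viewer=viewer,
                      now=datetime.datetime.now().isoformat())

replica_1, replica_2 = document, document
assert replica_1 == replica_2            # perfect replicas of the bytes...
assert render(replica_1, "alice") != render(replica_2, "bob")  # ...but not of the information
```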

Wednesday, April 06, 2011

Harvard bound

I plan to attend The Future of Law Libraries: The Future Is Now? at Harvard in June. Looking forward to participating, especially with respect to primary legal materials.

I see Carl will be talking about the Twelve Tables...I used those as a springboard for my talk on authenticity of legal materials at AALL 2009.

Tuesday, March 29, 2011

Under the hood of KLISS - a technical white paper

KLISS is built on top of the LWB platform. This white paper talks about how LWB, at a technical level, addresses the many challenges of legislative IT and the intricacies of legislatures/parliaments.

Wednesday, February 23, 2011

KLISS is alive and so am I

Well, I have been offline since XMas as a result of KLISS roll-out but I'm finally coming up for air.

For a whole bunch of very good reasons late last year, we added an internet facing front end to the KLISS system launch this year (in time for the 2011-12 legislative session).

KLISS phase 1 was mostly about the complex back-office functions of the legislature. In particular, replacing a very labor intensive, paper based bill processing system that involved scissors and colored pencils and glue sticks. (See the "flagged bill" in this short video).

Although adding a front-end to all of the new back-end machinery (circa 130 virtual machines of integrated back-office functions) added significantly to Propylon's coffee intake, we have made it through a very challenging timescale. The KLISS Internet-facing website is

We still have a lot of work to do to expose the power of the back-office systems on the website. The KLISS website truly is just the foyer into the vast factory complex where the real legislative content work takes place. It is in that behind-the-scenes factory that most of KLISS lives.

Over the next while I hope to blog more about how it all works under the hood and about plans for the future of KLISS. Now that we have the critical eDemocracy back-office platform in place, the sky is the limit on what we can do in information provision and services to legislators and the public.

I am very much looking forward to being a part of that.