Friday, May 20, 2016

From textual authority to interpretive authority: the next big shift in legal and regulatory informatics

This paper, law and algorithms in the public domain, from a journal on applied ethics, is representative, I think, of the thought processes going on around the world at present regarding machine intelligence and what it means for law/regulation.

It seems to me that there has been a significant uptick in people from diverse science/philosophy backgrounds taking an interest in the world of law. These folks range from epistemologists to bioinformaticians to statisticians to network engineers. Many of them are looking at law/regulation through the eyes of digital computing and asking, basically, "Is law/regulation computation?" and also "If it is not currently computation, can it be? Will it be? Should it be?"

These are great, great questions. We have a long way to go yet in answering them. The world of law and the world of IT are separated by a big chasm of mutual misunderstanding at present. Many law folk - with some notable exceptions - do not have a deep grasp of computing, and many computing folk - with some notable exceptions - do not have a deep grasp of law.

Computers are everywhere in the world of law, but to date they have primarily been wielded as document management/search & retrieval tools. In this domain, they have been phenomenally successful - to the point where significant textual authority has now transferred from paper to digital modalities.

Those books of caselaw and statute and so on, on the shelves, in the offices. They rarely move from the shelves. For much practical, day-to-day activity, the digital instantiations of these legislative artifacts are normative and considered authoritative by practitioners. How often these days do legal researchers go back to the paper-normative books? Is it even possible anymore in a world where more and more paper publication is being replaced by cradle-to-grave digital media? If the practitioners and the regulators and the courts are all circling around a set of digital artifacts, does it matter any more whether the digital artifact is identical to the paper one?

Authority is a funny thing. It is mostly a social construct. I wrote about this some years ago here: Would the real, authentic copy of the document please stand up? If the majority involved in the world of law/regulation use digital information resource X, even though strictly speaking X is a "best efforts facsimile" of paper information resource Y, then X has de facto authority even though it is not de jure authoritative. (The fact that de jure texts are often replaced by de facto texts in the world of jure - law! - is a self-reference that will likely appeal to anyone who has read The Paradox of Self Amendment by Peter Suber.)

We are very close to the point where digital resources in law/regulation have authority of expression, but having expression authority is a completely different kettle of fish from having interpretive authority.

It is in this chasm between authority of expression and authority of interpretation that most of the mutual misunderstandings between law and computing will sit in the years ahead, I think. On one hand, law folk will be too quick to dismiss what the machines can do in the interpretive space; on the other, IT people will be too quick to think the machines can take over the interpretive space.

The truth - as ever - is somewhere in between. Nobody knows yet where the dividing line is but the IT people are sure to move the line from where it currently is (in legal "expression" space) to a new location (in legal "interpretation" space).

The IT people will be asking the hard questions of the world of law going forward. Is this just computing in different clothing? If so, then let's make it a computing domain. If it is not one today, can we make it one tomorrow? If it cannot be turned into a computing domain - or should not be - then why, exactly?

The "why" question here will cause the most discussion. "Just because!", will not cut it as an answer. "That is not what we do around here young man!" will not cut it either. "You IT people just don't understand and can't understand because you are not qualified!", will not cut it either.

Other domains - medicine for example - have gone through this already. Medical practitioners are not algorithms or machines, but they have for the most part delegated various activities to the machines. Not just expressive ones (document management/research) but also interpretive ones (testing, hypothesis generation, outcome simulation).

Law is clearly on this journey now and should emerge in better shape, but the road ahead is not straight; it has quite a few bumps and a few dead ends too.

Strap yourself in.

Monday, May 16, 2016

From BSOD to UOD

I don't get many "Blue Screen of Death" type events these days, in any of the Ubuntu, Android, iOS or Windows environments I interact with. Certainly not like the good old days when rebooting every couple of hours felt normal. (I used to keep my foot touching the side of my deskside machine. The vibrations of the hard disk used to be a good indicator of life back in the good old days. Betcha that health monitor wasn't considered in the move to SSDs. Nothing even to listen to these days, never mind touch.)

I do get Updates of Death though - and these are nasty critters!

For example, your machine auto-updates and disables the network connection, leaving you unable to get at the fix you just found online...

Grrrrrrrrrrr.

Monday, May 09, 2016

Genetic Football

Genetic Football is a thing. Wow.

As a thing, it is part of a bigger thing.

That bigger thing seems to be this: given enough cheap compute power, the time taken to perform zillions of iterations can be made largely irrelevant.

Start stupid. Just aim to be fractionally less stupid the next time round, and iterations will do the rest.
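To make that loop concrete, here is a minimal sketch in Python. The target genome, fitness score and mutation rate are all invented for illustration; Genetic Football itself presumably does something far richer, but the shape of the idea is the same: start random, keep whatever is fractionally less stupid, repeat.

```python
import random

# A toy "start stupid, iterate" loop. Everything here (target, fitness,
# mutation rate) is made up purely to illustrate the shape of the idea.

TARGET = [1] * 20  # an arbitrary goal genome

def fitness(genome):
    # Count how many positions match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    # Generation zero: completely random (i.e. stupid) genomes.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]  # iteration alone got us there
        # Keep the better half, refill with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return generations, population[0]

gen, best = evolve()
print(f"best genome after {gen} generations: {best}")
```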

The weirdest thing about all of this for me is that if/when iterated algorithmic things start showing smarts, we will know the causal factors that lead to the increased smartness, but not the rationale for any individual incidence of smartness.

As a thing, that is part of a bigger thing.

That bigger thing is that these useful-but-unprovable things will be put to use in areas where humankind has previously expected the presence of explanation. You know, rules, reasoning, all that stuff.

As a thing, that is part of a bigger thing.

That bigger thing is that in many areas of human endeavor it is either impossible to get explanations (i.e. experts who know what to do but cannot explain why in terms of rules), or the explanations need to be taken with a pinch of post-hoc-ergo-propter-hoc salt, or a pinch of retroactive goal-setting salt.

As a thing, that is part of a bigger thing.

When the machines come, and start doing clever things but cannot explain why....

...they will be just like us.

Thursday, May 05, 2016

Statistics and AI

We live at a time when there is more interest in AI than ever, and it is growing every day.

One of the first things that happens when a genre of computing starts to build up steam is that pre-existing concepts get subsumed into the new genre. Sometimes, the adopted concepts are presented in a way that would suggest they are new concepts, created as part of the new genre. Sometimes they are. But sometimes they are not.

For example, I recently read some material that presented linear regression as a machine learning technique.

Now of course, regression has all sorts of important contributions to make to machine learning, but it was invented/discovered long, long before the machines came along.
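To underline the point, here is ordinary least squares done directly with numpy: no training loop, no machine learning framework, just the closed-form fit statisticians were using long before anyone called it "learning". The data is made up for illustration.

```python
import numpy as np

# Ordinary least squares the classical way: solve for the coefficients
# directly. The data below is synthetic, invented purely for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 7.0 + rng.normal(0, 1, size=100)  # true slope 3, intercept 7, plus noise

# Design matrix: one column for x, one column of ones for the intercept.
X = np.column_stack([x, np.ones_like(x)])

# Closed-form least-squares solution.
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```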


Thursday, April 14, 2016

Cutting the inconvenient protrusions from the jigsaw pieces

There is a school of thought that goes like this....

(1) To manage data means to put it in a database.
(2) A 'database' means a relational database. No other database approach is really any good.
(3) If the data does not fit into the relational data model, well, just compromise the data so that it does. Why? See item (1).

I have no difficulty whatsoever with recommending relational databases where there is a good fit between the data, the problem to be solved, and the relational database paradigm.

Where the fit isn't good, I recommend something else. Maybe indexed flat files, versioned spreadsheets, documents, a temporal data store... whatever feels least like I am cutting important protrusions off the data and off the problem to be solved.

However, whenever I do that, I am sure to have to answer the "Why not just store it in [Insert RDB name]?" question.

It is an incredibly strong meme in modern computing.
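To make the "protrusions" concrete, here is a made-up example of the kind of irregular, nested record that relational flattening tends to mutilate, alongside the document-style alternative. The record, field names and file name are all invented for illustration.

```python
import json

# An irregular, nested record: optional fields, lists of varying length.
# Forcing this into a fixed relational schema means either dropping fields
# or spreading it across NULL-heavy tables; storing it as a document keeps
# the shape of the data intact. All names here are invented for illustration.
filing = {
    "id": "2016-0042",
    "title": "Annual return",
    "parties": [
        {"name": "Acme Ltd", "role": "filer"},
        {"name": "Revenue", "role": "recipient", "reference": "RV-77"},
    ],
    "amendments": [
        {"date": "2016-03-01", "note": "corrected address"},
    ],
}

# Document-store style: one JSON record per line, shape preserved as-is.
with open("filings.jsonl", "a") as f:
    f.write(json.dumps(filing) + "\n")

# The relational alternative would need filings, parties and amendments
# tables, foreign keys, and a schema migration every time a new optional
# field turns up.
```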

Monday, March 14, 2016

Algorithms where human understanding is optional - or maybe even impossible

I think I am guilty of holding on to an AI non-sequitur for a long time. Namely, the idea that AI is fundamentally limited by our ability as humans to code the rules for the computer to execute. If we humans cannot write down the rules for X, we cannot get the computer to do X.

Modern AI seems to have significantly lurched over to the "no rules" side of the field, where phrases like CBR (case-based reasoning) and neural net training sets abound...

But with an interesting twist that I have only recently become aware of. Namely, bootstrapping: using generation X of an AI system to produce generation X+1.

The technical write-ups about the recent stunning AlphaGo victory make reference to the bootstrapping of AlphaGo. As well as learning from the database of prior human games, it has learned by playing against itself...

Doug Engelbart springs to mind and his bootstrapping strategy.

Douglas Hofstadter springs to mind and his strange loops model of consciousness.

Stephen Wolfram springs to mind and his feedback loops of simple algorithms for rapidly generating complexity.

AIs learning by using the behavior of the previous-generation AI as "input" in the form of a training set sounds very like iterating a simple Wolfram algorithm or a fractal-generating function, except that the output of each "run" is the algorithm for the next run.
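Here is a toy sketch of that generation-to-generation loop in Python. The one-parameter "policy" and the stand-in "game" are deliberately trivial inventions of mine; real systems like AlphaGo obviously learn vastly richer functions, but the shape of the loop - generation X plays, the winner becomes generation X+1 - is the point.

```python
import random

# A toy bootstrap loop: each generation's "policy" plays a mutated copy of
# itself and the winner becomes the next generation. The "game" here is a
# trivial stand-in, invented purely to show the shape of the loop.

OPTIMUM = 0.73  # an "unknown" target the policies drift toward

def play(policy_a, policy_b):
    # Stand-in game: whichever policy is closer to the optimum wins.
    return policy_a if abs(policy_a - OPTIMUM) < abs(policy_b - OPTIMUM) else policy_b

def next_generation(policy):
    # Generation X plays a slightly mutated copy of itself;
    # the winner is generation X+1.
    challenger = policy + random.gauss(0, 0.05)
    return play(policy, challenger)

policy = random.random()  # generation 0: essentially random behavior
for generation in range(1000):
    policy = next_generation(policy)

print(f"final policy parameter: {policy:.3f}")  # ends up near the optimum
```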

The weird, weird, weird thing about all of this, is that we humans don't have to understand the AIs we are creating. We are just creating the environment in which they can create themselves.

In fact, it may even be the case that we cannot understand them because, by design, there are no rules in there to be dug out and understood. Just an unfathomably large state space of behaviors.

I need to go to a Chinese room, and think this through...

Thursday, March 10, 2016

LoRa

LoRa feels like a big deal to me. In general, hardware-led innovations tend to jumpstart software design into interesting places, more so than software-led innovations drag hardware design into interesting places.

With software driving hardware innovation, the results tend to be of the bigger, faster, cheaper variety. All good things but not this-changes-everything type moments.

With hardware driving software innovation however, software game changers seem to come along sometimes.

Telephone exchanges -> Erlang -> Elixir.
Packet switching -> TCP/IP -> Sockets

BGP Routers -> Multihoming
VR Headsets -> Immersive 3D worlds

etc.

I have noticed that things tend to come full circle though. Sooner or later, any hardware bits that can themselves be replaced by software bits are replaced :-)

This loopback trend is kicking into a higher gear at the moment because of 3D printing. I.e. a hardware device is conceived; in order to build it, the device is simulated in software to drive the 3D printer. Any such device that *could* remain purely software does so eventually.

A good example is audio recording. A modern DAW like Pro Tools or Reaper now provides pure digital emulators for pretty much any piece of audio hardware kit you can think of: EQs, pre-amps, compressors, reverbs, etc.