Previously: What is Law? - Part 12
Perhaps the biggest form of push-back I get from fellow IT people with respect to the world of law relates to the appealing-but-incorrect notion that in the text of the law there lies a data model and a set of procedural rules for operating on that data model, hidden inside the language. The only thing stopping us computerizing the law, according to this line of reasoning, is getting past all the historical baggage of foggy language and extracting the procedural rules (if-this-then-that) and the data model (the definition of a motor controlled vehicle, the definition of 'theft', etc.). All we need to do is leverage our computer science knowledge of programming languages and data modelling, and combine it with some NLP (natural language processing) so that we can map the legacy linguistic form of law into our shiny new digital model of law.
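To make that conception concrete, here is a deliberately naive sketch in Python of the kind of encoding this line of reasoning imagines. The 'theft' data model and rule below are invented purely for illustration; they do not encode any real statute:

# A caricature of the "data model + if-this-then-that rules" conception.
# The elements of 'theft' used here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Taking:
    """Toy data model for an alleged theft."""
    took_property: bool
    belonged_to_another: bool
    intended_to_permanently_deprive: bool

def is_theft(t: Taking) -> bool:
    """Toy procedural rule: if-this-then-that over the data model."""
    return (t.took_property
            and t.belonged_to_another
            and t.intended_to_permanently_deprive)

print(is_theft(Taking(True, True, True)))   # True
print(is_theft(Taking(True, True, False)))  # False

Much of what follows is about why the real world of law refuses to collapse neatly into structures like this.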
In previous parts of this series I have presented a variety of technical arguments as to why, in my opinion, this is not correct. Here I would like to add some more, this time from a more sociological perspective.
The whole point of law, at the end of the day, is to allow society to regulate its own behavior for the greater good of that society. Humans are not made from diamonds cut at right angles. Neither are the societal structures we make for ourselves, the cities we build, the political systems we create, etc. The world, and the societal structures we have created on top of it, are messy, complex and ineffable. Should we be surprised that the world of law, which attempts to model this, is itself messy, complex and ineffable?
We could all live in cities where all the houses are the same and all the roads are the same and everything is at right angles and fully logical. We could speak perfectly structured languages where all sentences obey a simple set of structural rules. We could all eat the same stuff. Wear the same clothes. Believe in the same stuff... but we do not. We choose not to. We like messy and complex. It suits us. It reflects us.
In any form of digital model, we are seeking the ability to model the important stuff. We need to simplify - that is the purpose of a model, after all - but we need to preserve the essence of the thing modeled. In my opinion, a lot of the messy stuff in law is there because law tries to model a messy world. Without the messy stuff, I don't see how a digital model of law can preserve the essence of what law actually is. The only outcome I can imagine from such an endeavor (in the classic formulation of data model + human-readable rules) is a model that fails to model the real world.
In my opinion, this is exactly what happened in the Eighties when people got excited about how Expert Systems[1] could be applied to law. In a nutshell, it was discovered that the modelling activity lost so much of the essence of law that the resultant digital systems were quite limited in practice.
Today, as interest in Artificial Intelligence grows again, I see evidence that the lessons learned back in the Eighties are not being taken into account. Today we have XML and Cloud Computing and better NLP algorithms, and these, so the story goes, will fix the problems we had in the Eighties.
I do not believe this is the case. What we do have today, that did not exist in the Eighties, is much, much better algorithms for training machines - not programming them to act intelligently, but training them to act intelligently. When I studied AI in the Eighties, we spent about a week on Neural Networks and the rest of the year on expert systems, i.e. rules-based approaches. Today's AI courses are the other way around! Rightly so, in my opinion, because there has not been any great breakthrough in the expert systems/business rules space since the Eighties. We tried all the rules-based approaches back then. A lot of great computer science minds worked on it. It came up short in the real world of law.
When you combine the significant advances in Neural Network approaches with all the compute advantages of cloud and the ready availability of lots and lots of digital data, things get interesting again. This is where we are today. And it is very interesting indeed.
I numbered this blog post "12a" for a reason that is hopefully both humorous and relevant. I know of both legal texts and legal business processes that avoid the number 13. I know of a legal text with so many sub-paragraphs that the number 666 was needed, and 665a was used instead. This kind of thing drives rules-based computing mad, but it is exactly the kind of human footprint that is literally all over the world of law.
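To see why such footprints frustrate rules-based processing, here is a small Python sketch. The section labels are invented for illustration, not taken from any real text:

# Irregular legal numbering trips up code that assumes sections are integers.
# These labels are invented; note the gap where "666" would have been.
section_labels = ["664", "665", "665a", "667"]

# A naive pipeline that parses sections as plain integers fails immediately:
try:
    numbers = [int(label) for label in section_labels]
except ValueError as err:
    print(f"naive parse failed: {err}")

# Even ordering the sections needs a human-aware rule:
# split the numeric part from the alphabetic suffix.
def section_key(label: str) -> tuple[int, str]:
    digits = "".join(ch for ch in label if ch.isdigit())
    return (int(digits), label[len(digits):])

print(sorted(section_labels, key=section_key))  # ['664', '665', '665a', '667']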
The human touch can be seen in all its splendor in the area of legal fictions[2]. Everything from life insurance claims to resigning from office uses forms of logic that are very foreign to the classic computing concepts of rules and data models. Yet there they are... in all their messy, complex, splendidly human glory. Spend a few moments with the Chiltern Hundreds. It is worth your time[3]. Spend some time thinking about how we humans can navigate ambiguity when we have to, and - when it suits us - create new ambiguity. Then read about contra proferentem[4].
Now we can refuse to believe that the messy ambiguity and complexity is intrinsic and spend our time trying to remove it with computers - as we did in the Eighties. Or we can take a deep breath, dive in and embrace it.
I recommend the latter. Next up: What is Law? - Part 14.
[1] https://en.wikipedia.org/wiki/Expert_system
[2] https://en.wikipedia.org/wiki/Legal_fiction
[3] https://en.wikipedia.org/wiki/Chiltern_Hundreds
[4] https://en.wikipedia.org/wiki/Contra_proferentem