Previously: What is Law? - part 12a
Mention has been made earlier in this series of the presence of ambiguity in the corpus of law, and of the profound implications that this ambiguity has, in my opinion, for how we need to conceptualize computational law.
In this post, I would like to expand a little on the sources of ambiguity in law, starting with the linguistic aspects but then moving into law as a process and an activity that plays out over time, as opposed to a static knowledge object.
In my opinion, ambiguity is intrinsic to any linguistic formalism that is expressive enough to model the complexity of the real world. Since law is attempting to model exactly that complexity, the ambiguity present in the model is necessary and intrinsic. The linguistic nature of law is not something that can be pre-processed away with NLP tools to yield a mathematically-based corpus of facts and associated inference rules.
An illustrative example of this can be found in the simple-sounding concept of legal definitions. In language, definitions often form hermeneutic circles[1]: we define a word/phrase in terms of other words/phrases, which are themselves defined in terms of yet more words/phrases, in a way that creates definitional loops.
For example, imagine a word A that is defined in terms of words B and C. We then proceed to define both B and C to try to bottom out the definition of A. However, umpteen levels of further definition later, we create a definition which itself depends on A - the very thing we are trying to define - thus closing the loop. These definitional loops are the hermeneutic circles[1] just mentioned.
Traditional computer science methods hate hermeneutic circles. A large part of computing consists of creating a model of data that "bottoms out" into simple data types. That is, we take the concept of a customer and boil it down into a set of strings, dates and numbers. We do not define a customer in terms of some other high-level concept such as Person, which might in turn be defined as a type of customer. To make a model that classical computer science can work with, we need a model that "bottoms out" and is not self-referential in the way hermeneutic circles are.
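As a minimal sketch (the field names here are my own hypothetical invention), this is what "bottoming out" looks like in a classical data model - the customer is reduced to primitive types rather than being defined in terms of another high-level concept:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a customer "bottomed out" into primitive types.
@dataclass
class Customer:
    name: str            # a string, not a reference to a "Person" concept
    signup_date: date    # a date
    credit_limit: float  # a number

# A hermeneutic circle, by contrast, would be mutually referential:
#   Customer = a Person who buys products from us
#   Person   = an entity that may be a Customer, a Supplier, ...
# Classical data models deliberately avoid this kind of loop.

alice = Customer(name="Alice", signup_date=date(2020, 1, 1), credit_limit=500.0)
```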
Another way to think about the definition problem is in terms of Saussure's linguistics[2], in which language (or, more generically, "signs") gets its meaning from how signs differ from other signs - not because they "bottom out" into simpler concepts.
Yet another way to think about the definition problem is in terms of what is known as the descriptivist theory of names[3], in which nouns can be thought of as just arbitrary short codes for potentially open-ended sets of things which are defined by their descriptions. For example, a "customer" could be defined as the set of all objects that (a) buy products from us, (b) have addresses we can send invoices to, and (c) have given us their VAT number.
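A minimal sketch of this descriptivist view (the field names are hypothetical): the "definition" of customer becomes a predicate over an open-ended set of objects, rather than a bottomed-out data type:

```python
# Hypothetical sketch: "customer" as a predicate over arbitrary objects,
# per the descriptivist view - anything satisfying all three descriptions
# is in the set, whatever else it may be.
def is_customer(entity: dict) -> bool:
    return (
        bool(entity.get("purchases"))       # (a) buys products from us
        and "invoice_address" in entity     # (b) has an address for invoices
        and "vat_number" in entity          # (c) has given us a VAT number
    )

acme = {
    "purchases": ["widgets"],
    "invoice_address": "1 Main Street",
    "vat_number": "IE1234567X",
}
print(is_customer(acme))  # True
print(is_customer({}))    # False
```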
The same hermeneutic circle/Saussurean issue arises here, however, as we try to take the elements of this description and bottom out the nouns they depend on (e.g., in the above example, "products", "addresses", "invoices" etc.).
For extra fun, we
can construct a definition that is inherently
paradoxical and sit
back as our brains melt out of our ears trying
to complete a
workable definition. Here is a famous example:
The 'barber' in town X is defined as the person in town X who cuts the hair of everyone in town who does not cut their own hair.
This sounds like a
reasonable starting point for a definition of
a 'barber', right?
Everything is fine until we think about who cuts the barber's
hair[4].
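The paradox can be made concrete with a tiny sketch: encode the definition as a rule and check both possible answers to "does the barber cut their own hair?" - neither one is consistent:

```python
# The definition as a rule: the barber cuts X's hair iff X does not
# cut their own hair.
def barber_cuts_hair_of(cuts_own_hair: bool) -> bool:
    return not cuts_own_hair

# Apply the rule to the barber themselves and test both suppositions.
inconsistent = []
for supposition in (True, False):  # "the barber cuts their own hair?"
    conclusion = barber_cuts_hair_of(supposition)  # what the rule demands
    if conclusion != supposition:
        inconsistent.append(supposition)

# Both suppositions contradict the rule's conclusion: no consistent
# answer exists, so the definition is paradoxical.
print(inconsistent)  # [True, False]
```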
The hard facts of
the matter are that the real world is full of things we want to make
legal statements about but that we cannot formally define, even
though we have strong intuitions about what
they are. What is a
"barber"? What is the color "red"? Is
tomato
ketchup a vegetable[5]? What is "duty"? What is
"ownership"?
etc. etc. We all carry around intuitions
about
these things in our heads, yet we struggle mightily to define
them.
Even when we can find a route to "bottom out" a definition, the results often seem contrived and inflexible. For example, we could define "red" as 620-750 nm on the visible spectrum, but are we really OK with 619 nm or 751 nm being "not red"?
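One way to see the problem in code (a hypothetical sketch; the 20 nm fade width is an arbitrary illustrative choice): compare a crisp boundary with a graded membership that degrades gracefully near the edges instead of flipping from "red" to "not red" over a single nanometre:

```python
# Crisp definition: "red" is exactly 620-750 nm, nothing else.
def is_red_hard(wavelength_nm: float) -> bool:
    return 620.0 <= wavelength_nm <= 750.0

# Graded alternative: degree of "redness" in [0, 1], fading linearly
# over edge_width nm outside the core band (edge_width is arbitrary,
# chosen purely for illustration).
def redness(wavelength_nm: float, edge_width: float = 20.0) -> float:
    if 620.0 <= wavelength_nm <= 750.0:
        return 1.0
    distance = min(abs(wavelength_nm - 620.0), abs(wavelength_nm - 750.0))
    return max(0.0, 1.0 - distance / edge_width)

print(is_red_hard(619.0))  # False - one nanometre short, so "not red"
print(redness(619.0))      # ~0.95 - almost fully red
```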
Many examples of computing blips and snafus in the real world can be traced to the tendency of classical computing to put inflexible boundaries around things in order to model them. What does it mean for a fly-by-wire aircraft to be "at low altitude"? What does it mean for an asset to be trading at "fair market value"? The more we attempt to bottom these concepts out into hard numeric ranges - things classical computing can easily work with - the more we risk breaking our intuitions about the real-world versions of these concepts.
If this is all
suggesting to you that computational law sounds
more like a problem
that requires real numbers (continuous variables) and statistical
calculations as opposed to natural numbers and linear algebraic
calculations, I think that is spot on.
I particularly like
the concept of law as a continuous, analog
process as it allows a
key concept in law to be modeled more
readily - namely the impact
of the passage of time.
We have touched on
the temporal aspects already but here I would
like to talk a little
about how the temporal aspects impact
the ambiguity in the corpus.
As time passes, the
process of law will itself change the law. One
of the common types
of change is a gradual reduction in levels of ambiguity in the
corpus. Consider a new law which needs to define a concept. Here is
how the process plays out, in summary form.
- A definition is created in natural language. Everybody involved in the drafting knows full well that definitions cannot be fully self-contained and that ambiguity is inevitable. In the interests of being able to actually pass a law before the heat death of the universe, a starter definition is adopted in the law.
- As the new law finds its way into effect, regulations, professional guidance notes etc. are created that refine the definition.
- As the new law/regulations/professional guidance impacts the real world, litigation events may happen which result in the definition being scrutinized. From this scrutiny, new caselaw is produced which further refines the definition, reducing, but never completely removing, the ambiguity associated with the definition.
A closely related process - and a major source of pragmatic, premeditated ambiguity in the process of law - is contracts. While drafting a contract, the teams of lawyers on both sides know that ambiguity is inevitable. It is simply not possible, for all the reasons mentioned above, to bottom out all the ambiguities.
The ambiguity that will necessarily remain in the signed contract is therefore used as a negotiating/bargaining item as the contract is being worked.
Sometimes, ambiguity present in a draft contract gives you a contractual advantage, so you seek to keep it. Other times, it creates a disadvantage, so you seek to have it removed during contract negotiations. Yet other times, the competing teams of lawyers might know full well that an ambiguity could cause difficulties down the road for both sides. However, it might cost so much time and money to reduce the ambiguity now that both sides let it slide and hope it never becomes contentious post-contract.
So, to summarize, ambiguity in law is present for two main reasons. Firstly, there is ambiguity that is inevitable because of what law is trying to model - i.e. the real world. Secondly, there is ambiguity that is tactical, as lawyers seek to manipulate ambiguities so as to favor their clients.
Next up: Part 15
[5] https://en.wikipedia.org/wiki/Ketchup_as_a_vegetable