Previously: What is Law? - part 14.
In part one of this series, a conceptual model of legal reasoning was outlined based on a “black box” that can be asked legal-type questions and gives back legal-type answers/opinions. I mentioned an analogy with the “Chinese Room” of John Searle's famous thought experiment[1] relating to Artificial Intelligence.
Simply put, Searle imagines a closed room into which symbols (Chinese ideographs) written on cards can be inserted via a slot. Similar symbols can also emerge from the room. To a Chinese-speaking person outside the room inserting cards and receiving cards back, whatever is inside the room appears to understand Chinese. However, inside the room is simply a mechanism that matches input symbols to output symbols, with no actual understanding of Chinese at all. Searle's argument is that such a room can manifest “intelligence” to a degree, but that it does not understand what it is doing in the way a Chinese speaker would.
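For readers who like to see the idea in code, here is a minimal sketch (purely illustrative; the rule-book entries and names are made up) of the point Searle is making: the room can be nothing more than a lookup from input symbols to output symbols, yet from the outside it appears fluent.

# A hypothetical "rule book" pairing input cards with output cards.
# The room never interprets the symbols; it only matches them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "今天几号？": "今天是星期一。",    # "What is the date?" -> "Today is Monday."
}

def chinese_room(card: str) -> str:
    """Return whatever output card the rule book pairs with the input card."""
    return RULE_BOOK.get(card, "？")  # unknown input -> just another symbol

print(chinese_room("你好吗？"))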
For our purposes here, we imagine the cards entering the room as legal questions and the cards leaving it as opinions. We can write a legal question on a card, submit it into the room and get an opinion back. At one end of the automation spectrum, the room could be the legal research department shared by partners in a law firm. Inside the room could be lots of librarians, lawyers, paralegals etc. taking cards, doing the research, and writing the answer/opinion cards to send back out. At the other end of the spectrum, the room could be a fully virtual room that partners interact with via web browsers or chat-bots or interactive voice assistants.
Regardless of where
we are on that spectrum, the law firm partners will judge the quality
of such a room by its outputs. If the results meet expectations, then
isn't it a moot point whether or not the innards of the room in some
sense “understand” the law?
Now let us imagine that we are seeing good results come from the room and we wish to probe a little to gain a level of comfort about those results. What would we do? Well, most likely, we would ask the room to explain its results. In other words, we would do exactly what we would do with any person in the same position. If the room can explain its reasoning to our satisfaction, all is good, right?
Now this is where things get interesting. Imagine that each legal question submitted to the room generates two outputs rather than one. The first is the answer/opinion in a nutshell (“the parking fine is invalid: 90% confident”). The second is the explanation (“The reasoning as to why the parking fine is invalid is as follows....”). If the explanation we get is logical, i.e. it proceeds from facts through inferences to conclusions, weighing up the pros and cons of each possible line of reasoning, we feel good about the answer/opinion.
But how can we know that the explanation given is actually the reasoning that was used in arriving at the answer/opinion? Maybe the innards of the room just picked a conclusion based on its own biases/preferences and then proceeded to back-fill a plausible line of reasoning to defend the answer/opinion it had already arrived at?
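To make the worry concrete, here is a minimal sketch (again purely illustrative; all names and logic are hypothetical) of a room whose answer and explanation are produced by two separate steps, so the explanation is a back-filled justification rather than a trace of whatever actually produced the answer.

from dataclasses import dataclass

@dataclass
class Opinion:
    answer: str        # e.g. "the parking fine is invalid"
    confidence: float  # e.g. 0.90
    explanation: str   # produced after the answer, not as part of deriving it

def decide(question: str) -> tuple[str, float]:
    # Stand-in for whatever opaque process (bias, pattern matching,
    # statistical model) actually picks the conclusion.
    return "the parking fine is invalid", 0.90

def justify(question: str, answer: str) -> str:
    # Separate step: construct a plausible line of reasoning that
    # leads to the already-chosen answer.
    return f"The reasoning as to why {answer} is as follows: ..."

def legal_room(question: str) -> Opinion:
    answer, confidence = decide(question)
    explanation = justify(question, answer)  # back-filled, not a trace
    return Opinion(answer, confidence, explanation)

print(legal_room("Is this parking fine valid?"))

Note that nothing in the output of such a room distinguishes a back-filled explanation from a genuine trace of its reasoning, which is exactly the difficulty raised above.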
Now this is where things may get a little uncomfortable. How can we know for sure that a human presenting us with a legal opinion and an explanation to back it up is not doing exactly the same thing?
This is an old, old nugget in jurisprudence, re-cast into today's world of legal tech and Artificial Intelligence. Legal scholars refer to it as the conflict between the so-called rationalist and realist models of legal reasoning. It is a very tricky problem because recent advances in cognitive science have shone a somewhat uncomfortable light on what actually goes on in our mental decision-making processes.
Very briefly, we are not necessarily the bastions of cold hard logic that we might think we are. This is not just true in the world of legal reasoning, by the way. The same is true for all forms of reasoning, including – shock! – the reasoning of mathematicians.
Recent
research[2][3] suggests that human legal reasoning is best viewed as
a bi-directional process that oscillates between working forward
from premises/facts and working backwards from conclusions to
supporting premises/facts.
Mention was previously made of the feature of law whereby different legal minds can look at the same corpus and come up with different conclusions. In this respect, our virtual legal reasoning room is just another source of a legal opinion. Another legal “mind”, if you will. The quality of the opinions produced is judged on their merits – the explanations – not on the actual means by which the answers/opinions were produced.
To this way of thinking, lawyers should enthusiastically embrace these new virtual research assistants that are emerging. Who wouldn't see benefit from being able to get other legal “minds” to look at a legal question and offer opinions? Who wouldn't see benefit from being able to ask such a virtual research assistant to argue for and against a given assertion, to help sharpen a line of reasoning for use in a legal opinion or in a courtroom?
Some see problems with the modern machine learning approach to legal AI because of the inability of these systems to explain their conclusions in the form of classic forward-chaining logic. I do not see this being a problem in practice, because these systems will develop ways to explain their opinions. They will most likely do it as a completely separate activity. We may know for a fact that they are reasoning “backwards”, but we can never know if the same isn't true for the opinions given by our fellow humans – including the opinions we provide to ourselves!
We have a tendency to get caught up in the notion of intelligent machines replacing humans. We look at the incredible progress machines have made in playing chess or Go, identifying faces in photographs etc. and some wonder how long it will be before the machines replace the lawyers. I believe there is a qualitative difference between practicing law and, say, playing chess that gets glossed over in the excitement about AI in law.
In chess, there is a small number of variables and a huge, huge set of permutations/combinations of possible moves. Moreover, the key variables can all be encoded for the machine to work with. This makes this sort of game-playing a great candidate for complete mechanisation, i.e. getting to the point where the machine can play the game unaided.
Not so with law. A lawyer's reasoning processes are invariably a lot more expansive, covering variables such as the overall goals of the client, trade-offs between time and opportunity cost, reputational risk factors, budget constraints, team dynamics etc. etc. On top of these, I have argued in previous posts that the entire legal system is not, and cannot be, reduced to a set of rules – no matter how large that set of rules might be envisaged to be.
Rather than think of machines as replacements for lawyers, it is better, in my opinion, to think of machines as augmenting lawyers. Machines are no longer confined to document management and mechanical search and retrieval. Machines are increasingly offering opinions as to what is relevant. They have been doing that for quite some time - from the dawn of search result ranking - but in recent years their role as sources of opinion has grown significantly. This trend will continue apace in my opinion. I think we will soon see the day when every lawyer in private practice has access to legal virtual assistants that can provide answers/opinions to supplement the lawyer's own research/experience and that of their colleagues.
If I were a professional chess player, I would be a lot more worried about career viability in the age of intelligent machines than I would be as a lawyer, an accountant or a medical doctor. Yes, intelligent machines will impact these professions as more and more of the mechanizable tasks become mechanized. But the machines can only compute with what they have visibility of, and it is in all the stuff that the machines cannot have visibility of that the 21st Century professionals will live.
A good example of
this can be found in the world of contracts and in particular, the
emerging world of “smart contracts” which is where we will turn
to next.