This is an interesting piece on the opacity of the algorithms that run legal research platforms.
http://www.lawpracticetipsblog.com/2017/08/algorithms-that-run-legal-research.html
Digital machinery - in general - is more opaque than analog machinery. In years gone by, analog equipment could be understood, debugged, and tweaked by people not involved in its original construction: mechanics, plumbers, carpenters, musicians, etc. As digital tech has advanced, eating into those analog domains, we appear to be losing some control over the "how" of the things we are building...
The problem, quite ironically, also exists in the world of digital systems. These are regularly redone from scratch when the "how" of a system is lost, typically when the minds involved in its original construction - the holders of the "how" - cease to be involved in its maintenance.
With Deep Learning, the "how" gets more opaque still, because the engineers creating these systems cannot explain how the resultant system arrives at its decisions. If you take any particular decision made by such a system and look for a "how", what you will find is an essentially meaningless, extremely long mathematical expression multiplying and adding up lots of individually meaningless numbers.
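To make that concrete, here is a minimal sketch of my own (not from the linked article) of a tiny two-layer network in Python. Every weight and input below is arbitrary; the point is that the complete, exact "how" of the final decision is nothing more than a few dozen multiply-adds over numbers that mean nothing on their own.

```python
import random

random.seed(42)

def layer(inputs, weights, biases):
    # One fully connected layer: weighted sums plus a ReLU nonlinearity.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Arbitrary weights stand in for a trained network: 4 inputs -> 8 -> 2.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
b1 = [random.uniform(-1, 1) for _ in range(8)]
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(2)]
b2 = [random.uniform(-1, 1) for _ in range(2)]

# The "decision": whichever of the two outputs is larger wins.
hidden = layer([0.3, 0.9, 0.1, 0.5], w1, b1)
outputs = layer(hidden, w2, b2)
print(outputs)
# The complete "how" of this decision is just the 48 multiply-adds
# above, applied to 58 individually meaningless numbers.
```

A real network differs only in scale: millions or billions of such numbers, and the same absence of any human-readable "how".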
In part 15 of my What is Law series, I posited that we will deal with the opacity of deep learning systems by inventing yet more digital systems - also with opaque "hows" - for the purpose of producing classic logic explanations for the operation of other systems :-) One such approach is sketched below.
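The sketch below shows what one such "explainer" could look like, using a surrogate decision tree, a standard post-hoc technique; the black-box model and the data here are invented stand-ins, not anything from the article. A simple tree is trained to imitate the opaque model's decisions, and the tree's if/then rules become the classic logic "explanation".

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and an opaque model standing in for the real thing.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)

# The "explainer": a shallow tree trained to imitate the black box's
# decisions, not the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable if/then rules, reverse-engineered after the fact.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```

Of course, nothing guarantees the tree's tidy rules are the real "how" of the opaque model. They are a story that fits its behaviour, which brings us to the next point.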
I also suggested in that piece that we cannot, hand on heart, know that our own brains are not doing the same thing, i.e. working backwards from a decision to a line of reasoning that "explains" the decision.
Yes, I do indeed find it an uncomfortable thought. If deductive logic is a sort of "story" we tell ourselves about our own decision-making processes, then a lot of wonderful things turn out to be standing on dubious foundations.