Tuesday, July 25, 2017

Top Adviser warns of out-of-control autonomous robot lawyers

Mark Walker, an eDiscovery adviser, author and frequent commentator, advocated Wednesday for “creating rules that govern how we use Artificial Intelligence (AI) with legal technology lest we unleash on humanity a set of autonomous robot lawyers that we don’t know how to control.”
“I’ve spent my entire adult life serving lawyers. Non-lawyers like myself have a hard enough time now getting the human kind to listen,” Walker stated.


Walker was responding to the mention of numerous articles, blog posts and commentary on the rise of AI in the business world, the military and, more recently, the legal technology field. A little-known fact: the military has a rule in place, “due to expire later this year,” that restricts autonomous drones from taking a human life. Under the current rule, a drone that delivers a lethal munition must be operated by a human.
"I don't think it's reasonable for us to put robots in charge of complex legal advice without some measure of restriction, for example," Walker told a colleague during a trip to the water cooler.
That conversation also covered a wide range of topics, including North Korea, Iran, defense budget issues, and whether we should change where we get those breakfast tacos. Full disclosure and complete transparency: no one else attended the impromptu water-cooler session. The “colleague” wishes to remain anonymous. During that same water-cooler conversation, Walker also opined:
"There will be a raucous debate in the industry about whether we take humans out of the decision to decide whether a document is relevant or not, for example," but added that he was "an advocate for keeping human lawyers involved in those decisions – for now."
Walker further said humans need to remain in the decision-making process “because we take our values and subjective judgment into account. AIs don’t.” He pointed to the rules of civil procedure and the need to weigh issues like proportionality against doing something that might make perfect sense to an AI but doesn’t to a human lawyer.
His comments come as the US military seeks increasingly autonomous weapons systems, and amid fears that corporate GCs may begin doing the same with robot lawyers. As noted in a recent CNN Politics article, “US general warns of out-of-control killer robots”:
“In July 2016, a group of concerned scientists, researchers and academics, including theoretical physicist Stephen Hawking and billionaire entrepreneur Elon Musk, argued against the development of autonomous weapons systems. They warned of an artificial intelligence arms race and called for a ‘ban on offensive autonomous weapons beyond meaningful human control.’”
OK… by now hopefully you know this post is a parody of the CNN article referenced above about the autonomous drones the military wants to deploy. Few in the legal technology industry believe that jobs are going to be taken away by autonomous robots that think and make decisions on their own. I say “few” because we really don’t know what those IBM Watson types have on the drawing board, but we can be confident that in our narrow “eDiscovery” world there won’t be any “autonomous” technologies anytime soon.
Most of the “Predictive Analytics” technology we use to reduce the number of electronic documents needing review is, at its root, complex technology that uses Machine Learning, which is a form of AI. The underlying technology is complex enough that most laypeople don’t understand how “it” works in any depth. That lack of understanding isn’t due to a lack of aptitude – lawyers and their staff are obviously smart folks. Rather, the underlying “math” is so mature that a deep understanding at that level simply isn’t necessary; the root technology has been around for decades. How you train the technology and what you do with the results is the more important place to gain an understanding. From a workflow standpoint, different applications work in different ways. We don’t need technologists overcomplicating what happens under the hood. We didn’t do that when “search and retrieval” technology was developed and rolled out. Everyone just assumes that if I search for “dynamite,” I will retrieve all the documents that contain the word dynamite.
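To make that assumption concrete, here is a minimal sketch of keyword search in Python. It is purely illustrative – the document set and function name are hypothetical, not any vendor’s actual product:

    # Naive keyword "search and retrieval": return every document that
    # contains the search term. Hypothetical data, illustrative only.
    documents = {
        "doc-001": "The shipment of dynamite arrived on Tuesday.",
        "doc-002": "Quarterly budget review notes.",
        "doc-003": "Safety briefing: handling dynamite on site.",
    }

    def keyword_search(docs, term):
        """Return IDs of all documents containing the term (case-insensitive)."""
        term = term.lower()
        return [doc_id for doc_id, text in docs.items() if term in text.lower()]

    print(keyword_search(documents, "dynamite"))  # ['doc-001', 'doc-003']

As assumed, the search returns exactly the documents containing the word – no judgment, no classification.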
Machine Learning, the form of AI used in “Predictive Analytics” technology, doesn’t just look for single words; it “classifies” all documents into conceptual buckets by considering all the words in a document, usually focusing on nouns and noun phrases. The technology also compares the frequency of certain words, and how words and phrases in one document compare to sentences with similar words in another document. The right technology does this continuously across millions of documents as humans review. With the right technology, humans review “samples” selected by humans, by the technology, or both. The technology presents sample documents containing similar content to the human for consideration. In other words, humans, not the technology, decide what’s relevant. The technology simply places documents into conceptual buckets and then finds documents similar in content to the documents the humans have identified as either relevant or not relevant. It really is just that simple.
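For readers who want a feel for what those “conceptual buckets” and similarity comparisons can look like under the hood, below is a rough sketch using word-frequency (TF-IDF) vectors and cosine similarity from the scikit-learn library. This is one common way to implement the idea, not necessarily how any particular eDiscovery product does it; the documents and labels are invented for illustration:

    # Sketch: score unreviewed documents by similarity to documents a human
    # has already marked relevant, then route the highest scorers to review.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "dynamite shipment invoice and delivery schedule",               # human: relevant
        "holiday party planning and catering menu",                      # human: not relevant
        "explosives storage permit renewal for the dynamite warehouse",  # unreviewed
        "cafeteria breakfast taco survey results",                       # unreviewed
    ]
    relevant_idx = [0]       # indices the human lawyer marked relevant
    unreviewed_idx = [2, 3]  # indices awaiting review

    # Turn each document into a weighted word-frequency vector.
    matrix = TfidfVectorizer().fit_transform(docs)

    # Similarity of each unreviewed document to the relevant set.
    scores = cosine_similarity(matrix[unreviewed_idx], matrix[relevant_idx])
    for idx, score in zip(unreviewed_idx, scores[:, 0]):
        print(f"doc {idx}: similarity to relevant set = {score:.2f}")

The humans still make the relevance call; the math just decides which documents they should look at next.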
Conclusion
As previously noted, no one is seriously staying away from “Predictive Analytics” technology because they fear we are going to weaponize autonomous robot lawyers that will take their jobs. Rather, the problem with adoption of this technology has been a lack of basic understanding, because WE (that royal industry “we”) have made this technology sound too complex and risky. The reality is that the risk in using this technology resides with the humans, just as it does with search-and-review workflows. Predictive Analytics technology doesn’t make autonomous decisions. If there is a flaw – something gets missed – it’s usually the human at fault, not the technology. Garbage in, garbage out. Sure, some technology stacks are better than others. The differences are usually in workflow and feature set, not the quality of the classifier method. There are differences in performance where a technology uses an older method or algorithm capable only of simple passive learning, as opposed to constantly re-ranking documents as review progresses. For that continuous re-ranking, you need active learning capability.
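To illustrate that passive-versus-active distinction, here is a hedged sketch of one round of active learning using uncertainty sampling – again with scikit-learn and invented data, as an assumption about how such a loop can work rather than a description of any specific product:

    # One active learning round: retrain on the human's decisions so far,
    # re-rank everything, and send the least-certain document to a human.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def active_learning_round(texts, labels, labeled_idx):
        """labels maps document index -> 1 (relevant) or 0 (not relevant)."""
        X = TfidfVectorizer().fit_transform(texts)
        model = LogisticRegression().fit(
            X[labeled_idx], [labels[i] for i in labeled_idx]
        )
        unlabeled = [i for i in range(len(texts)) if i not in labels]
        probs = model.predict_proba(X[unlabeled])[:, 1]  # P(relevant)
        # Uncertainty sampling: the score closest to 0.5 goes to a human next.
        return unlabeled[int(np.argmin(np.abs(probs - 0.5)))]

    texts = ["dynamite invoice", "taco menu", "dynamite permit", "party flyer"]
    labels = {0: 1, 1: 0}  # human decisions so far
    print(active_learning_round(texts, labels, labeled_idx=[0, 1]))

Each time a human labels another document, the model retrains and re-ranks the rest – that continuous loop is what the older passive-learning tools lack.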
Adoption of “Predictive Analytics” technology is finally on the rise.
Are you at least exploring the use of Predictive Analytics?
