Everyday Law focuses on artificial intelligence as evidence through the expertise of Judge Paul Grimm and Professor Maura Grossman with host Bob Clark.

The latest episode of Everyday Law focused on Artificial Intelligence as evidence in court proceedings.

Host Bob Clark spoke to Judge Paul Grimm of the United States District Court for the District of Maryland and Professor Maura Grossman of the University of Waterloo in Ontario, Canada, who together had previously authored an authoritative article in the Northwestern School of Law's Journal of Technology entitled "Artificial Intelligence as Evidence."

Both Judge Grimm and Professor Grossman took somewhat atypical paths to their legal careers. Judge Grimm started his legal career in the military, and Professor Grossman earned a Ph.D. in psychology and practiced in that field for a number of years before concluding that the law was her future.

Both are now academics: Judge Grimm helms Duke University Law School's Bolch Judicial Institute, and Professor Grossman is a research professor in the School of Computer Science at the University of Waterloo as well as an adjunct professor at Osgoode Hall Law School.

The origin of Professor Grossman and Judge Grimm's work together was in the context of issues arising in electronic discovery. Professor Grossman and her husband, Gordon Cormack, have been instrumental in setting legal standards for dealing with e-discovery, and Judge Grimm was at the forefront of judicial efforts to formulate rules regarding the admissibility of such evidence.

The episode is an hour in length and, after charting the guests' fascinating career paths, turns to the fundamental questions of what artificial intelligence is and what is problematic about its use in court.

Artificial intelligence is computers performing cognitive tasks. That these processes are often opaque is beyond dispute. The fundamental question for admissibility is what validation was done to ensure that the algorithms consistently and accurately produce their results.
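At its simplest, that kind of validation comes down to measuring how often a system's output disagrees with answers already known to be correct. As a rough, hypothetical illustration (not drawn from the episode or the article; the data and function names below are invented), a basic error-rate check might look something like this:

```python
# Hypothetical illustration: estimating an AI tool's error rate by
# comparing its outputs against known, ground-truth answers.

def error_rate(predictions, ground_truth):
    """Fraction of cases where the tool's output disagrees with the known answer."""
    if len(predictions) != len(ground_truth):
        raise ValueError("each prediction needs a matching ground-truth label")
    errors = sum(p != t for p, t in zip(predictions, ground_truth))
    return errors / len(ground_truth)

# Invented example data: the tool's outputs on ten test cases vs. the true answers.
tool_outputs = ["match", "match", "no match", "match", "no match",
                "match", "no match", "no match", "match", "match"]
true_answers = ["match", "no match", "no match", "match", "no match",
                "match", "no match", "match", "match", "match"]

rate = error_rate(tool_outputs, true_answers)
print(f"Observed error rate: {rate:.0%} on {len(true_answers)} held-out cases")
# A real validation study would use far more cases and report how the
# error rate varies across conditions (e.g., image quality, demographics).
```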

Judge Grimm discussed the continuing evolution of evidentiary standards, noting that blood-spatter and hair-fiber analysis, as well as eyewitness identification, have come under increasingly skeptical scrutiny from the courts.

He indicated that judges need a set of tools and that approaches to A.I. evidence have been derivative of Daubert and the changes it gave rise to in the Federal Rules of Evidence.

Considerations for admissibility include relevance, error rate, and the prejudice associated with wrongful admission. Judge Grimm suggested that it is important to ask fundamental questions such as what the A.I. was designed to do, whether it has been peer reviewed, and whether its process can be explained.

A recurrent stumbling block concerns "trade secrets," which are subject to a qualified privilege but may disqualify admission of an A.I. function where the progenitor of the A.I. refuses to explain how it works for fear of disclosing the secret sauce that distinguishes its product from a competitor's.

As with all evolving evidentiary issues, it is likely that the usefulness of the technology must be balanced against the prejudice its use entails, and the party adversely affected must be afforded the opportunity to explore the possibility that its output is inaccurate.

For more, go to: https://everydaylaw.podbean.com/e/artificial-intelligence/

Robert V. Clark
Maryland Car Accident and Personal Injury Lawyer