Judge Facciola: We’re walking a tightrope between two terrible alternatives

This is part two of our three-part conversation with retired United States judge John Facciola, among the foremost authorities on the relationship between technology and law. Part one is here. Below we discuss the appropriate role of the judiciary in facilitating discovery and in ensuring the efficacy of search methodologies.

Logikcull: I want to ask you about some recent trends in eDiscovery. In O’Keefe, you famously expressed concern about the ability of lawyers and judges to assess the efficacy of search terms without the benefit of expert testimony. In light of that, I’m wondering what you think about what appears, to me at least, to be a more active judicial involvement in assessing the validity of competing search methodologies.

Hon. John Facciola: When I was writing about this, lawyers were just pulling keywords out of the sky. And I don’t know if I’ve said this in my opinions, but I’ve certainly said this publicly: there are some lawyers who think they are experts in search because they once used Google to find a Chinese restaurant in San Francisco that served dim sum and was open on Sundays. That doesn’t make you an expert. Sometimes the use of keyword searches, as (US Magistrate) Judge Andrew Peck said in his wonderful opinion, resembles a poor game of “go fish.”

“There are some lawyers who think they are experts in search because they once used Google to find a Chinese restaurant in San Francisco.”

So the first thing we have to understand is our own inadequacies. On the other side of the coin, there have been extraordinary and dramatic developments in new technology, both inside and outside the legal field, that make searching much more efficient. In terms of where the judiciary is, I think we’re feeling our way. Under the traditional analysis of the discovery rules, one party says “I want this” and the other says “you can look at it.” And the judge’s role was insignificant. Now the party says, “I have this database. It’s 2 million documents. I intend to search it using the following methodology.” At that point, the traditional analysis would be: Well, you search this as you see fit, you make your production, and we (the other side) will assess the validity of the production after you produce it. The rules only require, at least in a paper universe, that you permit the copying of the information that is requested. There’s nothing in the rules that mandates the manner in which you search it.

But by the same token, I can understand why judges may want to be involved at the get-go, because there’s this terrifying fear that if we do it and we do it wrong, we’ll have to do it all over again. That happened in the Biomet case, which explains judicial involvement to me. But I can understand the argument that never before in the history of discovery have we focused on how someone searches. We usually only looked at the results and assessed their significance if one side or the other complained that the search was insufficient because it failed to produce what everyone would expect to be there.

“Never before in the history of discovery have we focused on how someone searches.”

To answer your question, it’s a very complicated matter. You’re really walking a tightrope between two terrible alternatives. One is, you don’t get involved and you have to go through the process all over again at twice the expense. Or you do get involved and you bring the process to a screeching halt because now it becomes a judicially supervised process — and that’s a dramatic departure from the way the question of document production requirements has always been approached.

Logikcull: Do you think Daubert should apply to predictive coding? (Editor’s note: The question is asking whether the judge thinks courts have a duty to act as gatekeepers to assure that search methodologies using predictive coding produce relevant, reliable evidence — specifically, by vetting the qualifications of expert witnesses who testify to the efficacy of the predictive coding mechanism. There is a split of opinion as to whether search methods employing predictive coding should be bound by Daubert, which interprets Federal Rule of Evidence 702 — and if so, who (or what) would be qualified to give credible testimony as to the predictive coding machine’s inputs, application and results.)

JF: I certainly expressed my concern, in my opinions and particularly in O’Keefe, about using keywords without a sophisticated understanding of statistics. But now keyword deficiencies have been documented in the literature of the TREC process and so forth. The use of technology-assisted review and of algorithms is the coming way. So I still believe that the question of the validity of the search — and whether it can be reasonably expected to have been done the way it should have been done — is a matter beyond the ken of a layman under Rule 701 of the Federal Rules of Evidence, and it may require that an expert speak to the validity of the search.

“Certainly I have no idea, nor do most judges, if the particular use of a particular algorithm meets the standards that a statistician would impose.”

Certainly I have no idea, nor do most judges, if the particular use of a particular algorithm meets the standards that a statistician would impose. I don’t see how that is anything but an expertise that is beyond the judge’s ken and beyond the layperson’s ken — and requires expert testimony. Judge Waxse agrees with me in an article he wrote, and I have great confidence in him.
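To make concrete what “standards that a statistician would impose” might look like in practice, here is a hedged sketch of one common validation technique: estimating the recall of a search by randomly sampling the documents the search did *not* retrieve. The function name, the counts, and the 95% normal-approximation interval are all illustrative assumptions for demonstration, not a method endorsed in the interview or drawn from any real case.

```python
import math

def elusion_recall(retrieved_relevant: int,
                   discard_pile_size: int,
                   sample_size: int,
                   sample_relevant: int,
                   z: float = 1.96):
    """Estimate search recall by sampling the unretrieved ("discard") pile.

    Hypothetical illustration -- all parameter names and numbers are invented:
      retrieved_relevant: relevant documents the search actually found
      discard_pile_size:  documents the search did NOT retrieve
      sample_size:        random sample drawn from the discard pile
      sample_relevant:    relevant documents found within that sample
      z:                  normal-approximation multiplier (1.96 ~ 95% CI)
    """
    # Elusion: estimated fraction of relevant documents in the discard pile.
    p = sample_relevant / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)

    def recall(elusion: float) -> float:
        # Project how many relevant documents were missed, then compute recall.
        missed = elusion * discard_pile_size
        return retrieved_relevant / (retrieved_relevant + missed)

    # Higher elusion implies lower recall, so the interval bounds flip.
    lower = recall(min(1.0, p + margin))
    upper = recall(max(0.0, p - margin))
    return recall(p), (lower, upper)

# Invented numbers: the search found 8,000 relevant documents; a 1,000-document
# random sample of a 1.2M-document discard pile turned up 10 relevant ones.
point, (low, high) = elusion_recall(8000, 1_200_000, 1000, 10)
```

With these invented inputs the point estimate is 40% recall, but the interval is wide — which is precisely the kind of result a statistician would insist on surfacing, and which a layperson reading only a document count would never see.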

As told to Robert Hilson, a director at Logikcull. He can be reached at robert.hilson@logikcull.com. The photo in this post is courtesy of the National Law Journal.