Last June, we reported that the year in eDiscovery case law so far had been “unusually quiet.” Halfway through the year, there were few blockbuster cases, no major sanctions, no upsets to long-followed practices. (There was, however, a great case about boilerplate objections and technology that allows you to stuff a hamburger patty with the filling of your choice.)
But if 2018 came in like a lamb, it went out like a lion—or at least an angry tomcat. In the last half of the year, we’ve seen significant opinions on:
- The appropriateness, and persistence, of keyword search and linear review
- What sanctions follow a party's “bollixed” legal holds
- Litigators’ need to understand technology and the evidence they are putting forth
- What can happen when an AI-driven discovery process flies off course
- Why eDiscovery expertise is no longer just for the experts
And that’s just to name a few.
Of course, you can’t read every opinion. And why would you? In 2018, there have been over 100 federal opinions citing Rule 37(e) alone. (That’s half as many as in 2008, but still not a quick read.)
So we’ve, ahem, culled through them for you. Recently, Logikcull hosted a 2018 eDiscovery Case Law Review webinar, featuring the Honorable Thomas Vanaskie of the Third Circuit, Vincent Catanzaro, senior attorney at Morgan Lewis, and Michael Simon, principal at Seventh Samurai. Together, they synthesized some of the most important case law lessons of the past year.
You’ll need to watch the webinar, now available on demand by clicking above, for the full treatment. (And view our mid-year case law review for an even more encompassing look at this year’s most important case law.) But you can read on for the cheat-sheet version.
2018’s Most Significant eDiscovery Case Law: A Quick Overview
City of Rockford v. Mallinckrodt ARD Inc.: Don’t Be Afraid of eDiscovery and Don’t Ignore Your Null Set
Takeaways:
- Keyword search is here to stay.
- Sampling a null set (those documents not returned by a search or not identified as relevant during review) can constitute a reasonable inquiry under Rule 26(g).
- Random null set sampling will often be proportional to the needs of the case.
- eDiscovery is nothing to be afraid of. Clowns are.
City of Rockford v. Mallinckrodt ARD Inc., 3:17-cv-50107 (N.D. Ill. August 7, 2018), is, as Judge Vanaskie noted, a “classic asymmetrical discovery case.” Mallinckrodt faced antitrust and racketeering accusations after the price of its prescription medication for multiple sclerosis and infantile spasms, Acthar, jumped from $40 a vial to a shocking $40,000 a vial.
Facing the potential review of millions of documents, the parties negotiated an ESI agreement that governed search terms, date restrictions, custodian restrictions, and even a validation protocol to ensure the accuracy of their keyword search. They could not, however, agree on how to address concerns that a production may be incomplete.
That is, how does one determine the “known unknowns”? That was the question U.S. Magistrate Judge Iain D. Johnston posed, quoting former Secretary of Defense Donald Rumsfeld, in the opening paragraph of this reference-rich opinion.
In eDiscovery, one way to determine the accuracy of a search and review process, to move an “unknown unknown” to a “known unknown,” is to sample the “null set.” A null set is defined as those documents that are not returned as responsive by the search process or not identified as relevant during review. By performing an eDiscovery quality control check on a sample of that data, a party may determine the relative accuracy of a production.
Here, Judge Johnston was asked to tackle two issues related to such sampling. First, does sampling a null set qualify as a reasonable inquiry under Rule 26(g)? The answer, Judge Johnston determined, is yes. “Because a random sample of the null set will help validate the document production in this case,” he writes, “the process is reasonable under Rule 26(g).”
Secondly, is that sampling proportional under Rule 26(b)(1)? Again, the answer is yes, as “the Court’s experience and understanding is that a random sample of the null set will not be unreasonably expensive or burdensome” and—as is often the case—the objecting party had failed to offer any actual evidence as to expense and burden. Even when dealing with millions of documents, only a small amount may need to be reviewed in order to obtain a reliable sample.
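To make the sampling idea concrete, here is a minimal Python sketch of a null-set quality-control check. The document IDs, sample size, and reviewer calls below are all hypothetical; a real validation protocol, like the one negotiated in City of Rockford, would fix these parameters by agreement between the parties.

```python
import random

def sample_null_set(null_set_ids, sample_size, seed=42):
    """Draw a simple random sample from the null set for QC review."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(null_set_ids, min(sample_size, len(null_set_ids)))

def estimate_elusion(sample_labels):
    """Estimate the elusion rate: the fraction of sampled null-set
    documents that human reviewers coded as relevant (True)."""
    if not sample_labels:
        return 0.0
    return sum(sample_labels) / len(sample_labels)

# Hypothetical example: 1,000,000 null-set documents, sample 400 of them
null_set = [f"DOC-{i:07d}" for i in range(1_000_000)]
sample = sample_null_set(null_set, 400)

# Reviewers then code each sampled document; here we simulate their calls
labels = [False] * 396 + [True] * 4  # 4 relevant docs found in the sample
print(f"Estimated elusion rate: {estimate_elusion(labels):.1%}")  # 1.0%
```

A low elusion rate in a random sample is evidence that few responsive documents slipped through, which is precisely the comfort Judge Johnston found reasonable under Rule 26(g): reviewing a few hundred documents can validate a production of millions.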
Finally, Judge Johnston reminds us that discovery, and eDiscovery technology, is not something to be afraid of:
In life, there are many things to be scared of, including, but not limited to, spiders, sharks, and clowns – definitely clowns, even Fizbo. ESI is not something to be scared of. The same is true for all the terms and jargon related to ESI. Discovery of ESI is still discovery, governed by the same Federal Rules of Civil Procedure as all other civil discovery.
Franklin v. Howard Brown Health Center: Harsh Sanctions for a “Bollixed” Legal Hold
Takeaways:
- “Bollixing” your legal hold is bad
- Failure to institute a defensible legal hold can lead to significant sanctions
- Harsh sanctions may be available even without proof of intent to deprive
The recent(ish) revisions to Rule 37(e) were, in the words of the Committee Note, meant to create a “uniform standard” for the application of the harshest spoliation sanctions, reserving those for instances where there is a finding that the spoliating party acted with intent to deprive. But that hasn’t eliminated similarly harsh sanctions when there is not a showing of intent, as Franklin v. Howard Brown Health Center, 1:17-cv-8376 (N.D. Ill. Oct. 4, 2018), demonstrates.
The case involved claims of workplace discrimination and harassment, revolving around instant messages sent over technology provided by the employer. During discovery, plaintiff’s counsel sought the production of email and text messages, but did not refer to “instant messages”—and only two such instant messages were produced.
Despite the instant messages being one of the main vehicles of the alleged harassment, the defendant’s legal hold process had failed to properly preserve them. A hold was issued, but the company’s legal department seemed to misunderstand how long the messages were routinely saved, allowing them to be destroyed pursuant to the company’s regular, and brief, retention schedule. Indeed, the defendant had also failed to institute a legal hold for several months after litigation was reasonably anticipated, had allowed employees to determine what data should be preserved, and had wiped the computers of its key employees just a week after the lawsuit was promised.
That left the defendant forced to concede, U.S. Magistrate Judge Jeffrey Cole wrote, “that, at the very least, it bollixed its litigation hold—and it has done so to a staggering degree and at every turn.”
But “bollixed” isn’t one of the standards of Rule 37(e). So, faced with a massively deficient legal hold and the resultant spoliation of ESI, what remedy was appropriate? Noting that the plaintiff “doesn’t go into any depth as to whether defendant acted intentionally,” Judge Cole limits himself to Rule 37(e)(1)—and allows the parties to present evidence to the jury “regarding the situation that was caused by defendant’s faulty and failed litigation hold.” That sanction falls just short of the adverse inference instruction reserved to Rule 37(e)(2), but it is not foreclosed by the rules, Judge Cole notes.
The Franklin case is further evidence, as Vincent Catanzaro noted during the webinar, that, “in order to make the parties as close to whole as possible, judges will provide whatever sanctions are necessary.”
In Re: Domestic Airline Travel Antitrust Litigation - What Happens When Discovery Goes Off Course
Takeaways:
- AI-driven discovery isn’t autopilot. It requires supervision.
- Getting TAR wrong can lead to significant delays.
- TAR adoption will likely remain slow until cost, complexity, and risk come under control.
In Re: Domestic Airline Travel Antitrust Litigation, 1:15-mc-01404 (D.D.C. Sept. 13, 2018), features a discovery process that went significantly off course, resulting in the production of millions of unresponsive documents—and without any easy way to separate the unresponsive docs from the responsive ones.
The case arose over price-fixing allegations against the nation’s four biggest airlines: United, Delta, Southwest, and American. (Southwest and American have both settled.) During pre-class-certification discovery, the plaintiffs and United agreed to use technology-assisted review (also known as TAR, predictive coding, machine learning, or AI) to identify the responsive documents to be produced.
But that TAR approach, as Judge Colleen Kollar-Kotelly noted, ran into a “glitch.” And that glitch resulted in millions of unresponsive documents being produced. As we wrote at the time:
United’s TAR process ended up producing more than 3.5 million documents, with only an estimated 600,000 docs, or 17 percent, being responsive to the plaintiff’s request. That AI-powered document dump left the plaintiffs with little option but to demand an extension of six months, just to get through the millions of documents accidentally rerouted their way.
That glitch involved significant discrepancies between the planned recall rate and precision level and what the TAR process actually produced:
Under the agreed-upon protocol, United was to have a minimum recall rate of 75 percent and a "reasonable level" of precision. United was to review representative samples to ensure accuracy and completeness. But those metrics were not shared with the plaintiffs until 7:23 pm on the Friday before United’s Monday production deadline. While the control set showed acceptable rates of recall and precision, the validation samples were far different, revealing that United’s TAR process was incredibly over-inclusive (a nearly 98 percent recall) and extraordinarily imprecise.
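Recall and precision are simple ratios, and the sketch below computes both. The confusion-matrix counts are hypothetical, chosen only to loosely mirror the figures reported (roughly 600,000 responsive documents in a 3.5 million document production); the actual validation math in the case was governed by the parties’ protocol.

```python
def recall(true_pos, false_neg):
    """Share of all responsive documents that the process actually found."""
    return true_pos / (true_pos + false_neg)

def precision(true_pos, false_pos):
    """Share of produced documents that are actually responsive."""
    return true_pos / (true_pos + false_pos)

# Hypothetical validation-sample counts: nearly all responsive documents
# captured (high recall), but the production is swamped with
# non-responsive material (low precision).
tp, fn, fp = 600_000, 14_000, 2_900_000
print(f"Recall:    {recall(tp, fn):.1%}")     # ~97.7%
print(f"Precision: {precision(tp, fp):.1%}")  # ~17.1%
```

The two metrics pull in opposite directions: a process tuned to miss nothing will tend to sweep in junk. United’s numbers show the failure mode in the extreme, with recall near perfect and precision collapsing.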
It took United weeks to explain the discrepancy, and when the explanation came, it was so opaque that the response was simply “Plaintiffs do not understand this explanation.”
In the end, United was unable to simply redo its process, as it would have had to retrain its TAR model from scratch. The plaintiffs, too, were forced to abandon their own planned use of TAR, as their model could not be adjusted to weed out the unresponsive documents. Instead, they hired a small army of document reviewers and moved to modify the existing scheduling order, with the delay stemming from the TAR glitch supplying the good cause needed to do so.
All told, United’s discovery turbulence led to a delay of at least six months and a potential cost of hundreds of thousands of dollars. That’s an unfortunate outcome for the parties, but also for TAR adoption generally:
TAR was designed for colossal, break-the-bank cases like this, gargantuan pieces of litigation involving incredibly data-rich defendants and millions of potentially relevant documents. Yet the legal industry has been slow to adopt TAR, and not just because gargantuan MDLs make up only a tiny share of the national docket. The cost, complexity, and potential risk of such processes seem to have prevented their wider adoption. Cases like In Re: Domestic Airline Travel Antitrust Litigation are unlikely to help TAR take flight.
Lawrence v. City of New York: Attorneys’ Responsibility to Understand Metadata
Takeaways:
- Rule 26(g) sanctions may be appropriate even when an attorney has conducted a reasonable inquiry
- Simply checking a file’s metadata can go a long way to confirming, or refuting, a client’s allegations and can protect you from producing falsified documents
From the high technology of In Re: Domestic Airline Travel Antitrust Litigation, we close with something a bit more rudimentary: understanding metadata. Lawrence v. City of New York, 1:15-cv-8947 (S.D.N.Y. July 27, 2018), involves accusations that NYPD officers illegally searched the plaintiff’s home without a warrant, injured her, damaged property, and stole $1,000 in cash.
The plaintiff claimed she had photographic evidence of the damage. Her lawyer reviewed the photos, saved them as PDFs, Bates-stamped them, then sent them to the defendants. During depositions, the plaintiff initially claimed that her son or a friend had taken the photos, a few days after the incident. She later said that she had taken the pictures, or her son had taken a few—the friend was gone. Catching the contradiction, the defendants requested the smartphones used to capture the photos.
The defendants then did what the plaintiff’s lawyer had not. They looked at the original files’ metadata. What they found was that 67 of the 70 photos had been taken immediately before they were turned over to the plaintiff’s lawyer, two years after the incident occurred.
Had the attorney checked the metadata himself, he would have discovered that. But he had not. In fact, the attorney claimed he did not know how to check metadata, which can be done simply by right-clicking on a file (or control-clicking on a Mac) and viewing the document’s properties.
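For the programmatically inclined, the same first-pass check can be scripted. The sketch below reads only filesystem timestamps via the Python standard library, and the stand-in photo file is hypothetical; as the comments note, a photo’s actual capture date lives in its EXIF metadata, which takes a dedicated tool (such as exiftool or the Pillow library) to read.

```python
import os
import tempfile
from datetime import datetime

def file_times(path):
    """First-pass metadata check using filesystem timestamps.

    Caveat: a photo's true capture date lives in its EXIF metadata,
    which requires a dedicated tool to read. Filesystem timestamps are
    a cruder signal, but they can still flag files created or modified
    long after the events they supposedly depict.
    """
    st = os.stat(path)
    return {
        "modified": datetime.fromtimestamp(st.st_mtime),
        "created_or_changed": datetime.fromtimestamp(st.st_ctime),
    }

# Hypothetical usage: inspect a stand-in photo file and compare its
# timestamps against the date of the underlying incident.
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    photo_path = f.name
print(file_times(photo_path))
os.remove(photo_path)
```

Either way, by GUI or by script, the check takes seconds. Sixty-seven photos timestamped two years after the incident would have jumped off the screen.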
Once the false nature of the photos was revealed, he hired ethics counsel and moved to withdraw from the case. A motion for sanctions soon followed.
The court here did not struggle to impose the harshest of sanctions, dismissal, citing both the Federal Rules and its inherent powers. Under Rule 26(g), counsel have an affirmative duty to conduct a reasonable inquiry into discovery responses, including ensuring that they are “not interposed for any improper purpose.”
By the court’s analysis, relying on a client’s assertions alone may be justified, and the ultimate production of the photos “may have been careless, but was not objectively unreasonable.” Nonetheless, Rule 26 sanctions were appropriate, as they apply both to the signer and to “the party on whose behalf the signer was acting.”
Further, since the falsified photos constituted fraud upon the court, and the plaintiff’s “deceptive conduct and shifting excuses have completely undermined her credibility,” dismissal was warranted.
The plaintiff did manage to escape an award of attorneys’ fees and costs, largely because they would have been unrecoverable. Her lawyer, too, managed to avoid sanctions, though not without having to hire his own ethics attorney, with the associated costs and reputational damage. And it all could have been avoided with a simple right-click.
Lessons Learned
If 2018’s case law tells us anything, it’s that the future of eDiscovery is here, it’s just not evenly distributed—to borrow a phrase from William Gibson. Today, eDiscovery is no longer limited to the largest firms or the largest cases, as discovery expertise spreads in-house to corporate law departments, outward to tech-savvy small firms, and to more and more judges. However, the spread of eDiscovery hasn’t always been pretty, as some of these cases demonstrate.
Yet the missteps and false starts of others are no reason to remain ignorant yourself. Indeed, the lack of blockbuster sanctions cases this year is a reminder that more and more parties are getting eDiscovery right—and that some of the fear and uncertainty surrounding eDiscovery in the past may now be fading away.
To quote Judge Johnston once again, eDiscovery is “not something to be scared of… So don’t freak out.”