Over the nine years since it first rose to prominence in eDiscovery, technology-assisted review has expanded to include numerous new tools, more potential workflows, and a variety of legal issues.
In “Alphabet Soup: TAR, CAL, and Assisted Review,” we discussed TAR’s rise to prominence and the challenges it has created for practitioners. In “Key Terms and Concepts,” we covered the terminology practitioners need to know. In this Part, we review the applications, aptitudes, and effectiveness of TAR approaches.
With that coherent framework of terms and concepts established, let’s look at the contexts in which TAR can be applied, the relative aptitudes of TAR 1.0 and TAR 2.0, and the general effectiveness of TAR as an approach.
Review of documents for potential production during discovery is the primary application of TAR workflows, but it is not the only one. TAR tools and workflows may also be leveraged in other contexts, such as early case assessment (ECA). Even if a party is uncomfortable relying upon TAR to decide what gets reviewed for production, it might still use TAR to organize and prioritize its document collection for a more traditional review process, or to create a quality control yardstick against which to measure that traditional review.
TAR approaches are also valuable options in investigations. In the context of an internal investigation, there is no need to be concerned about another party objecting to your TAR use or to specifics of your TAR workflow, allowing you to take advantage of TAR’s greater speed and efficiency worry-free. Many federal agencies are also now comfortable with TAR being used for responses to their investigatory requests (although methodology details generally have to be provided to the agency to secure approval, and document samples are sometimes required).
Because you will now generally have the option of choosing between TAR 1.0 and TAR 2.0 approaches for your projects, it is important to understand their relative aptitudes:
The next obvious question is: how effective are TAR approaches? The short answer: when used correctly, at least as effective as traditional human review. This is partly because TAR works well and partly because traditional human review is far less reliable than practitioners assume.
The Sedona Conference’s Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery describes a persistent myth in eDiscovery:
It is not possible to discuss this issue without noting that there appears to be a myth that manual review by humans of large amounts of information is as accurate and complete as possible – perhaps even perfect – and constitutes the gold standard by which all searches should be measured.
The reality is quite different. Even the best reviewers make numerous mistakes due to simple human fallibility, and reviewers frequently reach different conclusions on questions of relevance, privilege, and more. Studies have shown surprisingly low consistency between the independent results of equivalent review teams (“Assessor Overlap”).
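One common way to quantify assessor overlap between two review teams is the Jaccard index: the number of documents both teams marked relevant, divided by the number either team marked relevant. The sketch below is illustrative only; the document IDs and team results are hypothetical, not drawn from any of the studies discussed here.

```python
def assessor_overlap(team_a: set, team_b: set) -> float:
    """Jaccard overlap: documents both teams marked relevant,
    divided by documents either team marked relevant."""
    if not team_a and not team_b:
        return 1.0  # trivially identical: neither team found anything
    return len(team_a & team_b) / len(team_a | team_b)

# Hypothetical results: two teams independently review the same
# collection and flag these document IDs as relevant.
team_a = {"DOC-001", "DOC-002", "DOC-003", "DOC-005", "DOC-008"}
team_b = {"DOC-002", "DOC-003", "DOC-004", "DOC-008", "DOC-009"}

print(f"Overlap: {assessor_overlap(team_a, team_b):.0%}")  # → Overlap: 43%
```

Even though each team flagged five documents, they agreed on only three of the seven documents flagged overall, yielding an overlap of roughly 43% — the kind of low agreement the studies above report between equivalent human review teams.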
In 2011, a seminal journal article, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, examined the results of the 2009 Text Retrieval Conference’s Legal Track Interactive Task to see how TAR approaches compared to traditional approaches. It found (a) that human review was far from perfect and (b) that TAR was as good or better, particularly with regard to precision.
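Recall and precision are the two measures at the heart of that comparison: recall is the share of truly relevant documents a review actually finds, while precision is the share of the documents it flags that are truly relevant. The following sketch computes both from purely hypothetical review numbers (the figures are invented for illustration, not taken from the study).

```python
def recall_precision(retrieved: set, relevant: set) -> tuple:
    """Recall: fraction of truly relevant documents the review found.
    Precision: fraction of flagged documents that are truly relevant."""
    true_positives = len(retrieved & relevant)
    recall = true_positives / len(relevant) if relevant else 1.0
    precision = true_positives / len(retrieved) if retrieved else 1.0
    return recall, precision

# Hypothetical review: 10,000 truly relevant documents exist in the
# collection; the review flags 12,000 documents, 8,000 of them relevant.
relevant = set(range(10_000))          # ground-truth relevant document IDs
retrieved = set(range(2_000, 14_000))  # IDs the review flagged

r, p = recall_precision(retrieved, relevant)
print(f"recall={r:.0%}, precision={p:.0%}")  # → recall=80%, precision=67%
```

In this invented scenario, the review found 80% of what it should have (recall) but only 67% of what it produced was actually relevant (precision) — the trade-off along which the study found TAR matching or beating exhaustive manual review.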
And, as we will see, TAR has since been deemed adequately effective – both in theory and in practice – in a variety of cases.
Upcoming in this Series
In the next Part, we will continue our discussion of assisted review with a look at some of the case law addressing whether parties are allowed to use TAR.