A multi-part series on the fundamentals eDiscovery practitioners need to know about document review planning and execution
In “The Main Event,” we reviewed the costs and significance of review, as well as the question of what gets reviewed. In “For What It Gets Reviewed,” we discussed the range of determinations that you might want reviewers to be making. In “Who Does the Reviewing,” we discussed your options for review staffing. In this Part, we turn our attention to review workflow design considerations.
In the last Part, we reviewed your options for who does the reviewing: internal resources (i.e., the case team, existing corporate or firm staff), external resources (i.e., contract reviewers, managed review services), or a combination. Once you know what you’re reviewing, for what you’re reviewing it, and who’s doing the reviewing, you can plan the actual workflow by which the review work will be executed.
Designing an effective document review workflow is a project-specific exercise that requires consideration of a wide range of options and factors, including the features and functions available to you in your chosen document review platform, the volumes and types of materials being reviewed, the number and nuance of things for which the materials must be reviewed, the number and skill level of the chosen reviewers, and the available time for completion of the review.
Smaller, simpler projects may require only a simple workflow: a traditional first-level review checking for both relevance and privilege, followed by a second-level quality control review double-checking some of that work prior to production. More complex projects may call for multi-level, multi-path workflows with specialized teams handling specific tasks, such as a dedicated privilege review team or a team focused on materials requiring foreign-language or subject-matter expertise.
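To make the contrast concrete, here is a minimal, purely illustrative sketch that models each workflow “shape” as plain data. The phase names, teams, decision lists, and sampling portions are hypothetical examples for discussion purposes, not features or settings of any particular review platform.

```python
# Purely illustrative: two review workflow "shapes" modeled as plain data.
# Phase names, teams, decisions, and sampling portions are hypothetical examples.

simple_workflow = [
    {"phase": "First-level review", "team": "Contract reviewers",
     "decisions": ["relevance", "potential privilege"], "portion_reviewed": 1.00},
    {"phase": "Second-level QC", "team": "Case team",
     "decisions": ["confirm relevance", "confirm privilege"], "portion_reviewed": 0.10},
]

complex_workflow = [
    {"phase": "First-level review", "team": "Contract reviewers",
     "decisions": ["relevance", "potential privilege", "hot document"], "portion_reviewed": 1.00},
    {"phase": "Privilege review", "team": "Privilege specialists",
     "decisions": ["privilege type", "redaction needed"], "portion_reviewed": 1.00},
    {"phase": "Issue coding", "team": "Case team",
     "decisions": ["issue A", "issue B", "issue C"], "portion_reviewed": 1.00},
    {"phase": "Pre-production QC", "team": "Review managers",
     "decisions": ["confirm prior coding"], "portion_reviewed": 0.05},
]

# Summarize how much each team must decide at each step of the larger workflow.
for phase in complex_workflow:
    print(f"{phase['phase']} ({phase['team']}): {len(phase['decisions'])} decisions, "
          f"{phase['portion_reviewed']:.0%} of documents")
```

Even an informal outline like this makes it easier to see how many separate determinations each team is being asked to make at each step, a point that becomes important in the tagging discussion below.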
To some extent, the range of workflows you can create will be dictated by the review management tools available to you in your chosen document review platform. Obviously, all manner of workflows can be executed and tracked manually, as they were in the days before sophisticated review platforms were available, but the manual management and documentation burden is (and was) much greater. Thankfully, most document review platforms have now evolved to offer a great deal of review management, workflow customization, and progress monitoring functionality.
As we have noted, there is a tension in document review between speed, accuracy, and nuance: the more determinations a reviewer must make about each document, the longer the review will take and the more mistakes they will make. This tension is reflected in the design of the tagging palette you create for reviewers to annotate documents with their determinations.
Reviewers working only with tags for simple relevance, potential privilege, and “hot” documents will be able to work more quickly and consistently than those who must also apply tags for specific issues, specific privilege types, and other nuances. When deciding what tagging should happen in each phase of your review workflow, a good rule of thumb is to keep each reviewer from having to make more than about five determinations about each document. Some platforms allow for the creation of multiple, separate tagging palettes to support complex workflows involving multiple teams.
Depending on your workflow and your chosen platform’s built-in review tracking features, you may also need to include tags designed to aid you in tracking and managing the review workflow itself (e.g., tags for routing documents to another team or flagging them for follow-up).
Ideally, you should rely as much as possible on the review tracking functions built into your platform to minimize complexity in the tagging palette (or palettes) being used.
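To tie these ideas together, here is another purely illustrative sketch: hypothetical tagging palettes for two review teams, with a simple check against the “no more than about five determinations per document” rule of thumb discussed above. The tag names, choices, and team labels are invented for illustration and are not drawn from any particular review platform.

```python
# Purely illustrative: hypothetical tagging palettes for two review teams, checked
# against the rule of thumb of about five determinations per document.
MAX_DETERMINATIONS = 5  # a rule of thumb, not a platform limit

palettes = {
    "First-level review": {
        "Relevance": ["Relevant", "Not Relevant"],
        "Potential Privilege": ["Yes", "No"],
        "Hot Document": ["Yes"],
        "Needs Follow-Up": ["Yes"],            # a workflow/tracking tag
    },
    "Privilege team": {
        "Privilege Type": ["Attorney-Client", "Work Product"],
        "Redaction Needed": ["Yes", "No"],
        "Escalate to Case Team": ["Yes"],      # a workflow/tracking tag
    },
}

for team, palette in palettes.items():
    determinations = len(palette)  # each tag group is one decision the reviewer must make
    status = "OK" if determinations <= MAX_DETERMINATIONS else "consider splitting across phases"
    print(f"{team}: {determinations} determinations per document ({status})")
```

Note that any workflow or tracking tags count toward each reviewer’s total, which is another reason to lean on your platform’s built-in tracking functions, where available, to keep each palette lean.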
Upcoming in this Series
In the next Part, we will continue our discussion of review fundamentals with a look at some additional workflow design considerations regarding batch creation, tracking, reporting, and documentation.