The idea of researching venire members during voir dire is nothing new to legal practice. Indeed, the goal of selecting a fair and impartial jury is served when lawyers have more information about each venire member's experiences and preferences. To aid in this objective, many states allow attorneys to research venire members, and at least one state requires attorneys to perform at least cursory research. This requirement serves as a “reasonable investigation” that promotes the efficiency and fairness of court proceedings.
With the emergence and prevalence of artificial intelligence (AI) tools that collect online information at a dizzying pace, the question of whether AI can be used to gain substantive advantages in jury selection remains open. This issue is not science fiction. Companies have already developed proprietary AI systems that search the internet and amass information on venire members. The systems vary by company, but many can collect and review venire members’ social media profiles and other publicly available information. These capabilities play to AI’s strengths in data gathering. However, some experts have voiced concerns about these online insights becoming an “evaluative rating” that lawyers blindly trust, about AI being used to appraise venire members’ in-person qualities, and about social media posts being used to approximate a juror’s opinions on any given topic.
While AI seems well suited to all of these tasks, a variety of documented biases in AI technologies have proven problematic. First, giving a juror a selection “rating” on their amenability to a favorable outcome oversimplifies the selection process. Because an overall score offers lawyers no window into the factors and processes that informed the AI’s decision in producing that number, the score is inherently flawed in an age when fact-checking AI systems is particularly crucial in the legal profession. Second, AI cannot be permitted to screen jurors based on subjective inputs. For example, facial scanning technologies have higher error rates for underrepresented individuals, errors potentially caused by the data used to train the system. In another instance, a human-supplied preference for specific words on resumes was found to select disproportionately more men. Finally, a juror’s social media presentation is not necessarily analogous to that individual’s “decision-making criteria, psychological makeup, and how they’re going to interact with a group.” Social media profiles typically present the curated highlights of an individual’s activities and can be misleading.
One might think that modeling AI’s decisions on human decisions would be a better choice, and machine learning technologies can indeed learn to make choices from historical human decisions. History has shown, however, that copying human choices can also lead to biased outcomes; the biases in AI systems described above frequently originate in unintended human biases. On the other hand, leaving bias reduction to machines can be imperfect as well. Machine learning, often a component of AI, can embed assumptions that lead to unintentional yet unfair or even illegal outcomes.
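To make that mechanism concrete, consider a minimal sketch, using entirely synthetic data and invented feature names, of how a model trained on hypothetical biased strike decisions can reproduce that bias even when the protected attribute itself is withheld. Nothing here reflects any real vendor’s system; it only illustrates how a proxy variable can carry historical bias through training.

```python
# Illustrative sketch only: entirely synthetic data and a toy model, showing
# how a learner trained on biased historical decisions reproduces that bias
# through a proxy feature, even when the protected attribute is excluded.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical venire members: a protected group label and a correlated
# proxy feature (imagine a zip-code-like signal visible to the model).
group = rng.integers(0, 2, n)
proxy = (group + rng.normal(0, 0.5, n) > 0.5).astype(float)

# Hypothetical biased history: past lawyers struck group 1 three times as
# often as group 0, independent of any legitimate factor.
struck = (rng.random(n) < np.where(group == 1, 0.6, 0.2)).astype(float)

# Fit a one-feature logistic model on the proxy alone, via plain gradient
# descent; the protected attribute is deliberately never shown to it.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * proxy + b)))
    w -= 0.5 * np.mean((p - struck) * proxy)
    b -= 0.5 * np.mean(p - struck)

# The model never saw `group`, yet its strike recommendations still split
# sharply along group lines, because the proxy encodes group membership.
pred = 1.0 / (1.0 + np.exp(-(w * proxy + b))) > 0.5
print("recommended strike rate, group 0:", pred[group == 0].mean())
print("recommended strike rate, group 1:", pred[group == 1].mean())
```

Under these assumptions, the model recommends striking most members of group 1 and few members of group 0 despite never being given the group label, which is precisely the failure mode the experts above describe.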
The current problem with using AI to aid in jury selection is that AI systems have the potential to reduce equity through implicit bias. Such bias erodes the “representative cross section of the community” that is essential to equitable case outcomes, which will ultimately reduce trust in AI systems and can produce unintended, distorted results. Therefore, unless lawyers, who are typically slow to embrace new technology, proactively engage with how AI is being used in their jury selection, any perceived time savings will be rendered moot by inaccurate conclusions.
Nevertheless, AI could someday play a prominent role in jury selection. For this idea to become a reality, processes to oversee and mitigate bias must be enacted, investments must be made to provide AI systems with more holistic training data, and the AI field itself must diversify to build a larger community of insight.
Christine Pangborn is a second-year law student at Wake Forest University School of Law. Prior to law school, Christine worked in financial services consulting. As a proud Double Deac, Christine holds a degree in Business and Enterprise Management from Wake Forest University School of Business.
Email: [email protected]