If you’re using artificial intelligence (AI) to power your ediscovery, how can you be certain that the AI system is properly doing its job?
Trust in AI-Generated eDiscovery
Well, the National Institute of Standards and Technology (NIST) is working on that problem and last month proposed “four principles of explainable artificial intelligence” designed to guide AI systems and their programmers in providing information about the logic behind specific AI decision-making, as well as to enhance the accountability and trustworthiness of AI systems. Noting that rapidly advancing AI systems “have become components of high-stakes decision processes,” NIST sees a need to “create algorithms, methods, and techniques to accompany outputs from AI systems with explanations” (an output being, in NIST’s definition, the “result of a query to an AI system”). This need, it should be noted, is driven in part by the Fair Credit Reporting Act and the European Union’s General Data Protection Regulation, both of which require information about how automated systems reach their decisions.
NIST’s proposed principles are “heavily influenced by considering AI system’s interactions with the human recipient of the information” and are intended to “capture a broad set of motivations, reasons, and perspectives.” The principles are also designed to account for the requirements of a given situation—such as regulatory or legal—the task at hand, and the consumer. In the legal industry, this includes promoting trust in AI-generated ediscovery.
So, Without Further Ado, the Four Principles:
Explanation—this principle “obligates AI systems to supply evidence, support, or reasoning for each output.” Interestingly, the principle doesn’t require that the evidence be correct, informative, or even intelligible; those requirements are covered by the other principles. (A rough sketch of what this could look like in document review follows this list.)
Meaningful—this principle is fulfilled if the recipient understands the system’s explanations. To be useful, an explanation should be tailored to the end user’s needs; a meaningful explanation for a lawyer, for example, may look different from one aimed at a digital forensics practitioner.
Explanation Accuracy—pretty self-explanatory, but this principle requires that the explanation correctly reflect the system’s process for generating its output. As with the meaningful principle, it allows different explanation accuracy metrics for different end users.
Knowledge Limits—this principle simply requires that systems “identify cases in which they are not designed or approved to operate, or their answers are not reliable.” It is designed to increase trust in a system by “preventing misleading, dangerous, or unjust decisions or outputs.”
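To make the Explanation and Knowledge Limits principles a bit more concrete, here is a minimal, hypothetical sketch of a document-review classifier that attaches evidence to each call and declines to answer when its score falls in a band where it isn’t reliable. Python and scikit-learn, the toy “review set,” the confidence thresholds, and the term-weight “evidence” are all illustrative assumptions on our part—none of this comes from the NIST draft.

```python
# Hypothetical sketch only: a toy ediscovery relevance classifier that pairs each
# output with an explanation (top weighted terms) and abstains when its score sits
# in an unreliable band (a crude stand-in for the Knowledge Limits principle).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "review set": 1 = responsive to the discovery request, 0 = not responsive.
train_docs = [
    "merger agreement draft attached for review",
    "please sign the merger term sheet before friday",
    "lunch order for the team offsite",
    "reminder the parking garage is closed this weekend",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_docs)
model = LogisticRegression(C=10.0).fit(X, train_labels)  # C=10 sharpens the tiny toy fit
terms = vectorizer.get_feature_names_out()

def review(doc, low=0.35, high=0.65, top_k=3):
    """Classify one document and attach the evidence behind the call."""
    x = vectorizer.transform([doc])
    p_responsive = model.predict_proba(x)[0, 1]

    # Knowledge Limits: decline to answer when the score sits in the unreliable band.
    if low < p_responsive < high:
        return {"decision": "needs human review", "score": round(p_responsive, 2)}

    # Explanation: surface the terms that contributed most to this particular score.
    contributions = x.toarray()[0] * model.coef_[0]
    top = np.argsort(np.abs(contributions))[::-1][:top_k]
    evidence = [(terms[i], round(contributions[i], 3)) for i in top if contributions[i] != 0]

    return {
        "decision": "responsive" if p_responsive >= high else "not responsive",
        "score": round(p_responsive, 2),
        "evidence": evidence,  # Meaningful: keyed to terms a human reviewer can verify.
    }

print(review("draft merger agreement circulated for signature"))
print(review("quarterly parking and cafeteria update"))
```

Whether the listed terms and the abstention band would actually satisfy the Meaningful and Explanation Accuracy principles for a given reviewer is exactly the kind of question the NIST draft leaves to the context and the consumer of the explanation.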
The Future is Now?
There is little doubt that these principles will help ensure the accountability and trustworthiness of AI-generated ediscovery. Given the fast pace of technological advancement, there is also little doubt that AI will soon deliver its accountability statements to you by voice. You know, kind of like how the HAL 9000 computer in “2001: A Space Odyssey” explains to astronaut David Bowman that “the mission is too important for me to allow you to jeopardize it.”