Artificial intelligence (AI) is fast becoming one of the most valuable tools in ediscovery, but AI is also emerging as a significant crime threat based on its ability to facilitate different types of crimes across a broad spectrum of applications. Interestingly, these two disparate uses of AI could easily intersect, as AI-produced fake content has been identified as a significant AI-affiliated crime threat.

That was my thinking, anyway, upon reading that AI-generated audio/video and other fake content created for criminal or otherwise nefarious purposes was recently ranked among the highest AI-related criminal threats by a panel of experts. Such fake content, I assume, could easily make its way into ediscovery. Where that might lead, of course, would depend heavily on both the substance of the AI-generated fake content and the nature and purpose of the ediscovery. Whatever the case, this idea should be of concern to ediscovery practitioners of all stripes.

Experts Identify 20 Biggest AI-Affiliated Crime Threats

The threat of AI-affiliated crime was highlighted in an Aug. 5 report, “AI-Enabled Future Crime,” released by researchers at the Dawes Centre for Future Crime at University College London. The researchers compiled a ranking of the 20 biggest AI-affiliated crime threats based on academic papers, news reports, and a two-day workshop attended by more than 30 academic and professional experts in the cybercrime, computer technology, and criminology fields.

The report warns that AI has the potential to be involved in criminal activity in a variety of ways and notes that AI systems themselves may become “the target of criminal activity.” That said, the range of criminal acts that “may be enhanced by use of AI depends significantly on how embedded they are in a computational environment.” Thus, AI is “better suited to participate in a bank fraud than a pub brawl.” 

May Discredit Genuine Evidence, Compromise Investigations

Advances in AI have significantly “increased the scope for the generation of fake content,” with AI’s ability to generate fake video and audio that impersonates real people considered especially worrisome. Not only can such content be used in numerous criminal schemes, but its propagation makes it “easier to discredit genuine evidence [and] undermine criminal investigations and the credibility of political and social institutions that rely on trustworthy communications.”

Not only was AI-generated fake content ranked among the highest-level threats, but it was also considered the hardest to defend against. “Disrupting AI-controlled systems” likewise ranked high on both the threat and harm scales, which should give ediscovery practitioners even more reason to keep track of AI advances and the intersection of crime and AI.

Other AI-affiliated crime threats included:

- Large-scale blackmail
- Learning-based cyber attacks
- Driverless vehicles as weapons
- AI-assisted stalking
- Forgery
- Burglar bots
- Data poisoning
- Autonomous attack drones

So, really, pretty much everyone needs to pay attention to the rise of AI and how it’s being used.
