Generative AI: A Double-Edged Sword
If you’ve read any of my recent articles, you know that I have high hopes for generative AI tools like ChatGPT. However, I can’t help but brace myself for disappointment.
Right now, we’re in the midst of a fleeting honeymoon phase, during which this cutting-edge technology is seamlessly enhancing our lives, reducing friction, and promising a brighter future.
And yet I feel a strong sense of foreboding.
What will happen when this very brief period of harmony ends? Will we find ourselves trapped in a dystopian nightmare where AI, in a dramatic twist of fate, turns on its human creators? Will we face a future where bad actors use AI to create even more political chaos, sowing discord and uncertainty on a scale never before seen?
Or, will we find ourselves, as cybersecurity expert Bruce Schneier anticipates, in the midst of a grossly magnified version of the current online advertising hellscape, where we’re subjected to a non-stop onslaught of digital snake oil?
In a recent post, Schneier contends that we’re at a pivotal crossroads with AI, and if we don’t get it right, we’re in for a very rough ride. According to Schneier, corporate interests may very soon rain on the generative AI parade and trample on the enormous potential offered by this technology.
He cautions that there is an imminent risk that corporate greed may overshadow the remarkable potential of generative AI: “Imagine you’re using an AI chatbot to plan a vacation. Did it suggest a particular resort because it knows your preferences, or because the company is getting a kickback from the hotel chain? Later, when you’re using another AI chatbot to learn about a complex economic issue, is the chatbot reflecting your politics or the politics of the company that trained it?”
Schneier warns that if corporate interests remain unchecked, generative AI will follow the same path as the technologies that preceded it. Absent a concerted effort to travel a different road, we can look forward to more of the same, only on a much more intrusive and all-encompassing scale:
“Twenty years ago, Google’s search engine rapidly rose to monopolistic dominance because of its transformative information retrieval capability. Over time, the company’s dependence on revenue from search advertising led them to degrade that capability. Today, many observers look forward to the death of the search paradigm entirely. Amazon has walked the same path, from an honest marketplace to one riddled with lousy products whose vendors have paid to have the company show them to you. We can do better than this. If each of us are going to have an AI assistant helping us with essential activities daily and even advocating on our behalf, we each need to know that it has our interests in mind. Building trustworthy AI will require systemic change.”
Like Schneier, I find myself questioning whether we, as a society, have the collective will to rise to this challenge. We’re caught in a complex web of technological advancement, corporate interests, and ethical concerns. Balancing the immense potential of AI with its equally significant risks will be challenging, especially given the rapid pace of change.
As we continue to develop and rely on these systems, transparency, accountability, and user-centricity must not become casualties of progress. The time is ripe for us to confront these challenges head-on.
Are we up for the collective challenge of shaping our AI-driven future into one that we want to live in? Or will we allow ourselves to be swept away by the tide of unchecked technological development, leaving our hopes and dreams in its wake as we cope with the unforeseen consequences?
Nicole Black is a Rochester, New York attorney, author, journalist, and the head of SME and External Education at MyCase law practice management software, an AffiniPay company. She is the author of the ABA book Cloud Computing for Lawyers, co-authors the ABA book Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York, a Thomson Reuters treatise. She writes legal technology columns for Above the Law and ABA Journal and speaks regularly at conferences regarding the intersection of law and technology. You can follow her on Twitter at @nikiblack or email her at email@example.com.