In this peer-reviewed, open-access article in the Journal of Science and Law, my co-author and I discuss how access to data is essential to artificial intelligence (AI) development efforts. Yet government and corporate actors have increasingly imposed localized and hyper-localized restrictions on data, driven by rising mistrust: fear and uncertainty about what countries and companies are doing with data, including perceived and real efforts to exploit user data or to build more powerful, and possibly dangerous, AI systems that could threaten civil rights and national security. If the trend is not reversed, over-restriction could impede AI development to the detriment of all. We offer solutions to improve trust through the adoption of legal and social policies that ensure transparency in data collection and use, and explainability of decisions made by AI systems that affect people’s lives.
The post Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends first appeared on ARTIFICIAL INTELLIGENCE TECHNOLOGY AND THE LAW.