Editor’s Note: The European Union has officially begun enforcing its landmark AI Act, ushering in a new era of AI regulation. Designed to curb the most harmful uses of artificial intelligence, the law bans AI-driven manipulation, social scoring, and unauthorized biometric surveillance while imposing strict oversight on high-risk applications. With heavy penalties for non-compliance, the AI Act is setting a global precedent that businesses cannot afford to ignore. As companies navigate these sweeping changes, the Act’s phased rollout will continue shaping AI governance worldwide.
Industry News – Artificial Intelligence Beat
EU Cracks Down on AI Abuses as Landmark Law Takes Effect
ComplexDiscovery Staff
The European Union’s AI Act is now in effect. Officially entering into force on August 1, 2024, the regulation has begun reshaping the artificial intelligence landscape across Europe. While some of its key prohibitions will be enforced starting February 2, 2025, the law’s full implementation will unfold in phases, with obligations for general-purpose AI models applying from August 2025 and high-risk AI systems facing stricter rules by August 2026.
The AI Act, adopted in June 2024, is built on a risk-based framework that categorizes AI into different levels of concern. At its core is a simple yet sweeping mandate: AI applications that pose “unacceptable risks” to fundamental rights and democracy will be prohibited outright. To ensure compliance, the European Commission has issued detailed guidelines outlining exactly what kinds of AI systems will be banned, providing much-needed clarity as companies scramble to adjust.
Among the key restrictions, the use of AI for manipulative or deceptive practices stands out as one of the broadest prohibitions. Systems that distort human behavior through subliminal messaging, psychological exploitation, or deceptive techniques designed to push users toward actions they wouldn’t otherwise take will no longer be allowed. The Commission has singled out AI-driven content that influences people without their conscious awareness, such as emotionally manipulative chatbots or systems that subtly alter choices through targeted prompts.
Exploitation of vulnerable populations will also be strictly banned. AI systems that take advantage of children, the elderly, people with disabilities, or those in socio-economically disadvantaged situations will be off-limits. The guidelines cite examples such as AI-powered toys that encourage children to engage in risky behavior or financial algorithms that prey on individuals struggling with debt. If an AI system is designed to exploit these vulnerable groups, it will fall under the Act’s prohibitions.
The guidelines also confirm a blanket ban on AI-driven social scoring. Any system that evaluates or ranks individuals based on their social behavior or personal characteristics—potentially leading to discrimination—will be considered unlawful. While such scoring mechanisms have been widely discussed in the context of state surveillance models, the EU’s restrictions will apply across both public and private sectors.
Law enforcement will face some of the strictest limitations under the new rules. Predictive policing, which uses AI to assess the likelihood of an individual committing a crime based solely on personal characteristics, will no longer be permissible. The Act draws a clear line between AI systems that assist in human assessments and those that rely entirely on algorithmic predictions to judge criminal risk.
Similarly, the development of facial recognition databases through untargeted scraping of images from the internet or CCTV footage will be outlawed. Companies that have built biometric databases by harvesting public images without consent—often for commercial or security applications—will be required to halt such practices immediately. However, exceptions exist under strict judicial oversight for law enforcement agencies in cases of severe security threats or when searching for missing persons. The guidelines make it clear that mass biometric surveillance without clear justification will not be tolerated under EU law.
Even within workplaces and schools, AI restrictions will be felt. Emotion recognition technology, which attempts to infer human emotions through AI analysis, will be banned in professional and educational environments unless explicitly used for medical or safety purposes. At the same time, the Commission has introduced new AI literacy obligations, requiring organizations to ensure that employees working with AI systems receive adequate training. This requirement, which also took effect on February 2, 2025, aims to bridge the knowledge gap as AI adoption grows.
One of the most debated aspects of the AI Act has been its stance on real-time facial recognition in public spaces. Under the new rules, law enforcement agencies will largely be prohibited from deploying AI-powered facial recognition systems in public areas. There are narrow exceptions, including cases involving severe security threats or targeted searches for missing persons. However, the use of such systems will require prior judicial authorization and must meet strict legal standards.
While much of the focus has been on prohibited AI practices, the AI Act also establishes a separate category of high-risk AI systems, which face rigorous scrutiny before deployment. These include AI applications in healthcare, employment, education, and law enforcement—sectors where AI-driven decisions can significantly impact people’s lives. Before these high-risk systems can enter the market, they must undergo conformity assessments and meet strict transparency, human oversight, and risk management requirements.
For businesses and AI developers, the consequences of non-compliance will be severe. Companies that violate these prohibitions could face fines of up to €35 million or 7 percent of their global annual revenue, whichever is higher. The EU’s enforcement mechanisms will rely on national market surveillance authorities to monitor compliance, and regulators will be empowered to investigate and penalize offending companies.
As the first weeks of the AI Act’s implementation unfold, it is clear that the EU is setting a global precedent for AI governance. The bloc’s rules stand in contrast to the more flexible approaches seen in the United States and parts of Asia, where AI regulation has been slower to materialize. While some critics warn that heavy-handed restrictions could stifle AI innovation, supporters argue that clear ethical boundaries will build public trust in AI systems and prevent the most dangerous applications from ever reaching the market.
The AI Act’s impact is expected to extend well beyond Europe’s borders. International tech companies hoping to operate in the EU will need to comply with the new regulations, potentially reshaping the design and deployment of AI worldwide. The law is likely to influence regulatory discussions in other regions, as governments worldwide grapple with how to balance AI development with fundamental rights and democratic safeguards.
For now, the message from Brussels is clear: AI should serve people, not exploit them. With these new rules already in effect and additional phases rolling out over the coming years, the European Union is making it known that the era of unchecked AI development is over. Businesses have limited time to align with the new standards, and the clock is ticking.
News Sources
- Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act
- EU puts out guidance on uses of AI that are banned under its AI Act
- EU kicks off landmark AI Act enforcement as first restrictions apply
- The EU’s AI Act is now in force
Assisted by GAI and LLM Technologies
Additional Reading
- AI in Journalism: Enhancing Newsrooms or Undermining Integrity?
- The Rise of Techno-Journalists: Addressing Plagiarism, Patchwriting, and Excessive Aggregation in the Era of LLMs
Source: ComplexDiscovery OÜ