The AI Act explained: New Rules and Developments in AI Regulation

The European Union's Artificial Intelligence Act (AI Act) marks a significant milestone in the regulation of AI technologies. The act is a major step forward for AI legislation and puts the EU at the forefront of AI regulation and oversight. By addressing the risks connected with the technology, it establishes comprehensive rules for the development, marketing, deployment, and use of artificial intelligence. The Council of the EU gave the act its final approval in May 2024, and it entered into force in August 2024; generally speaking, organizations will have until August 2026 to meet most of its requirements, and fines will be imposed for noncompliance. The AI Act is a pioneering legislative effort designed to ensure that AI technologies are developed and deployed in a manner that is safe, transparent, and aligned with European values. It aims to mitigate the potential risks associated with AI, such as bias, discrimination, and privacy violations, while fostering innovation and competitiveness within the EU. The legislation is important because it sets a global precedent for how AI should be regulated, balancing the need for technological advancement with the protection of fundamental rights.

The AI Act takes a risk-based approach, with different requirements for each risk level. Three categories are distinguished: unacceptable risk, high risk, and limited or minimal risk. Systems that present an unacceptable risk are prohibited outright, such as those that use subliminal techniques to distort people's behavior or exploit the vulnerabilities of specific groups. High-risk systems must pass a conformity assessment and fulfill a number of stringent requirements. This classification ensures that the most dangerous AI applications are prohibited, while high-risk applications are carefully monitored and regulated to prevent harm. Limited- and minimal-risk systems, while subject to fewer restrictions, must still comply with basic transparency standards to ensure their safe and ethical use. By establishing these clear guidelines and regulatory frameworks, the AI Act aims to build trust in AI technologies, encouraging their broader adoption and integration into various sectors of the economy. This legislation not only protects consumers and citizens but also provides a stable and predictable environment for businesses to innovate.
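To make the tiering concrete, the sketch below shows how an organization might triage its AI systems against these categories during an internal compliance review. It is a minimal, hypothetical illustration: the RiskTier enum, the keyword lists, and the classify_system helper are our own simplification, not an official mapping from the Act, and a real assessment requires legal review of the Act's annexes rather than string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"              # prohibited outright
    HIGH = "high"                              # conformity assessment required
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # lighter transparency duties

# Hypothetical keyword shortlists for a first-pass triage.
PROHIBITED_USES = {"subliminal manipulation", "social scoring"}
HIGH_RISK_USES = {"biometric identification", "credit scoring", "recruitment"}

def classify_system(intended_use: str) -> RiskTier:
    """First-pass triage of a system description into a risk tier."""
    use = intended_use.lower()
    if any(term in use for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in use for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    return RiskTier.LIMITED_OR_MINIMAL

print(classify_system("CV screening for recruitment"))  # RiskTier.HIGH
print(classify_system("spam filtering for email"))      # RiskTier.LIMITED_OR_MINIMAL
```

The value of even a rough triage like this is that it forces an inventory: every system gets a tier, and everything in the high tier is flagged for the formal conformity assessment described above.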

Prohibited Practices: Unacceptable Risk Systems

The AI Act establishes a thorough list of practices that are not permissible in the field of artificial intelligence, providing a core framework for controlling AI usage and ensuring that ethical norms are followed. The following practices are prohibited under the AI Act:

  • Using AI systems for subliminal manipulation, or to exploit people's vulnerabilities, in ways that distort behavior and cause harm.
  • Using AI to assess or categorize people according to their social conduct or personal traits, which results in unfair treatment.
  • Relying only on AI to profile people and forecast criminal behavior.
  • Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions in workplace and educational settings, unless necessary for safety or medical reasons.
  • Classifying people according to their biometric information in order to infer sensitive personal data, such as race, political opinions, or sexual orientation.
  • Using real-time remote biometric identification systems for law enforcement in publicly accessible spaces, except in narrowly defined situations and subject to strict safeguards.


High-Risk Systems

The AI Act identifies a range of AI systems as high-risk, including those employed in:

  • Biometrics: Tasks like remote identification, categorization based on sensitive attributes, and emotion recognition.
  • Critical Infrastructure: Managing safety in digital infrastructure, road traffic, and utilities.
  • Education and Vocational Training: Functions such as determining access, evaluating outcomes, assessing educational levels, and monitoring behavior during tests.
  • Employment: Activities like recruitment, decision-making in work relationships, task allocation, and performance monitoring.
  • Essential Services: Applications including determining eligibility for public benefits, evaluating creditworthiness, assessing risks in life and health insurance, and classifying responses during emergencies.
  • Law Enforcement: Tasks such as assessing the risk of becoming a crime victim, using polygraphs, evaluating evidence reliability, assessing recidivism risk, and profiling for investigations.
  • Migration and Border Control: Activities like polygraph usage, risk assessment, assisting in asylum and visa applications, and detecting individuals.
  • Administration of Justice and Democratic Processes: Functions like aiding judicial authorities in legal research and decision-making, and potentially influencing election outcomes.

Ethical Obligations for AI Companies

Following the adoption of the new AI Act, companies are encouraged to adhere to comprehensive ethical guidelines and regulatory frameworks to ensure the responsible development and deployment of AI technologies. The legislation marks a pivotal moment in AI regulation, aiming to mitigate risks such as bias, discrimination, and privacy violations while fostering innovation within a framework that aligns with European values.

Companies should prioritize the ethical principles set out in the EU's guidelines for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. These principles provide an extensive framework for ensuring that AI technologies are developed and deployed in ways that build accountability and societal trust.

Verify can help companies meet the accountability and transparency expectations that underpin the AI Act's goals for ethical AI usage. The AI technology developed by Verify enables source identification of media by embedding invisible fingerprints that carry detailed information about the origin and context of the data. This supports compliance with regulatory standards for high-risk AI systems by allowing comprehensive tracking and monitoring of media usage. Additionally, it helps prevent unauthorized manipulation and misuse, thereby protecting consumer privacy and fostering trust in AI technologies.
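As a loose illustration of the provenance idea only (not Verify's actual, proprietary technique), the sketch below embeds a short origin tag in the least-significant bits of an image and reads it back with NumPy. The tag format and function names are hypothetical; production provenance watermarks are engineered to survive compression and editing, which naive LSB embedding does not.

```python
import numpy as np

def embed_tag(pixels: np.ndarray, tag: str) -> np.ndarray:
    """Hide a UTF-8 origin tag in the least-significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the input stays untouched
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite each LSB
    return flat.reshape(pixels.shape)

def extract_tag(pixels: np.ndarray, n_chars: int) -> str:
    """Read n_chars of the embedded tag back out of the LSBs."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Hypothetical tag naming a capture device and date.
tag = "origin=newsroom-cam-07;2024-05-21"
image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
tagged = embed_tag(image, tag)
assert extract_tag(tagged, len(tag)) == tag  # round trip recovers the tag
```

The round trip shows the core mechanism a regulator cares about: the carrier media is visually unchanged (each pixel moves by at most one intensity level), yet the origin metadata remains machine-readable for auditing.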

By embracing these guidelines and ethical principles, companies can comply with regulatory requirements while contributing to a trustworthy environment for AI innovation that prioritizes societal well-being. This approach safeguards individual rights and fosters a culture of responsible AI use that benefits society as a whole.

“What all of us have to do is make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.”
Tim Cook