09 Mar 2025
Artificial intelligence is no longer regulated indirectly.
With the adoption of the EU Artificial Intelligence Act (AI Act), Europe introduced the world's first comprehensive legal framework specifically governing AI systems. The regulation affects not only AI developers, but also companies that use, integrate, or distribute AI-powered systems within the EU.
This article explains how the AI Act's risk-based approach works, which companies and systems are affected, and what organizations should do now to prepare.
This is an informational overview — not legal advice.
AI systems increasingly influence decisions in areas such as hiring, lending, healthcare, and access to public services.
Before the AI Act, regulation relied on existing laws (GDPR, product safety, liability) that were not designed specifically for algorithmic decision-making.
The AI Act aims to protect fundamental rights and safety, create legal certainty for companies, and build trust in AI systems placed on the EU market.
The AI Act does not regulate all AI equally.
Instead, it classifies systems into risk categories, each with different obligations.
Certain uses are considered an unacceptable risk and are prohibited outright.
These include, for example, social scoring by public authorities, manipulative techniques that exploit vulnerabilities, and certain forms of real-time biometric identification in public spaces.
High-risk systems are permitted, but heavily regulated.
They typically involve applications in sensitive areas such as employment and recruitment, credit scoring, education, critical infrastructure, law enforcement, and access to essential services.
High-risk systems must meet strict requirements around risk management, data quality and governance, technical documentation, logging and traceability, human oversight, and accuracy and robustness.
Systems with interaction-based risk (e.g. chatbots) are subject to transparency obligations, such as informing users that they are interacting with AI.
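In practice, this can be as simple as making sure the first message of a conversation carries a clear notice. Below is a minimal sketch in Python; the function name and the wording of the notice are illustrative, not prescribed by the regulation.

```python
# Illustrative sketch: attaching an AI disclosure to a chatbot's first reply.
# Names and message text are assumptions, not requirements from the AI Act.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def build_chat_response(reply_text: str, is_first_turn: bool) -> dict:
    """Return a chat response, adding the AI disclosure on the first turn."""
    response = {"reply": reply_text}
    if is_first_turn:
        response["disclosure"] = AI_DISCLOSURE
    return response

# Example: only the opening message carries the disclosure.
print(build_chat_response("Hello! How can I help?", is_first_turn=True))
print(build_chat_response("Sure, here are the details.", is_first_turn=False))
```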
Most AI systems fall into the remaining minimal-risk category and remain largely unregulated.
The regulation applies broadly.
It affects providers that develop AI systems, deployers that use them professionally, and importers and distributors that place them on the EU market.
The geographical location of the company is less relevant than where the system is used.
For many companies, compliance is not a single task, but a process change.
Common areas affected include data governance, technical documentation, logging and monitoring, model lifecycle management, and human oversight processes.
These requirements influence architecture decisions long before deployment.
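One concrete example: a pipeline that makes high-risk decisions can persist a structured record of every automated decision so it can be reviewed later. The sketch below uses Python with illustrative field names; nothing here is a format mandated by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative audit entry for one automated decision."""
    model_id: str                 # which system produced the output
    model_version: str            # exact version, for traceability
    input_summary: dict           # inputs (or a reference to them) used for the decision
    output: str                   # the decision or score that was produced
    human_reviewed: bool = False  # whether a person checked or overrode the result
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: logging a credit-scoring decision for later review.
record = DecisionRecord(
    model_id="credit-scoring",
    model_version="2.3.1",
    input_summary={"income_band": "B", "employment_years": 4},
    output="declined",
)
print(record)
```

Keeping such records as a first-class part of the architecture is far cheaper than retrofitting traceability after the system is live.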
The AI Act emphasizes that certain AI decisions must be traceable, documented, and explainable to the people they affect and to regulators.
This does not require exposing proprietary models, but it does require clear documentation of how a system reaches its outputs, which data it relies on, and who is responsible for oversight.
Opaque systems become harder to justify in regulated contexts.
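A common pattern is to store, alongside each decision, the factors that most influenced it and to render them as a plain-language explanation. Here is a minimal sketch with invented factor names and weights, not tied to any specific explainability method:

```python
def explain_decision(contributions: dict[str, float], top_n: int = 3) -> str:
    """Turn per-factor contributions into a short, human-readable explanation.

    `contributions` maps a factor name to its signed influence on the outcome;
    the values below are illustrative placeholders.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [
        f"{name} {'increased' if weight > 0 else 'decreased'} the score"
        for name, weight in ranked[:top_n]
    ]
    return "Main factors: " + "; ".join(parts) + "."

# Example output for a hypothetical scoring decision.
print(explain_decision({
    "payment_history": -0.42,
    "income_stability": 0.31,
    "existing_debt": -0.18,
    "account_age": 0.05,
}))
```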
The EU approach differs from other regions, many of which still rely on sector-specific rules or voluntary guidelines rather than a single horizontal law.
For global products, this creates regulatory fragmentation.
Many companies choose to align with EU standards as a baseline, then adapt regionally.
Most organizations do not need to stop using AI.
However, they should inventory the AI systems they develop or use, classify them by risk, assign clear responsibility for compliance, and document how decisions are made and overseen.
Early alignment reduces future compliance costs.
The AI Act is not a ban on innovation.
It targets specific risk profiles — not AI as a whole.
Overly defensive decisions (e.g. removing all AI features) can be as harmful as ignoring regulation entirely.
Balanced interpretation and proportional implementation are key.
The EU AI Act introduces a new regulatory reality for AI in Europe.
For companies, the challenge is not legal theory — but operational readiness.
Those who understand the risk-based logic and integrate compliance into architecture and product decisions early are best positioned to innovate responsibly within the EU market.
Anna Hartung