What is the AI Act and why could your customer service chatbot be illegal in a few months?

6 min read · Jose Gonzalez

The AI Act (EU Artificial Intelligence Law) is the world’s first comprehensive legal framework that regulates artificial intelligence according to its level of risk. For a product manager, this law establishes that customer service chatbots must be transparent, secure and subject to human supervision to operate legally in the European market, avoiding million-dollar fines.

Why is the AI Act your top priority now?

If you manage a customer service team or are responsible for a digital product, the era of “trying things and seeing what happens” with AI is over. We are in 2026, and the grace periods the European Union granted after the law’s approval in 2024 have closed.

Ignoring this is not just a reputational risk; it’s a massive financial risk. Sanctions can reach 35 million euros or 7% of your company’s annual global turnover. But beyond the fine, it’s about trust. In a market saturated with mediocre automations, complying with the AI Act is the seal of quality that tells your customer: “Your data is safe and we are not deceiving you here.”

How does the AI Act work in practice?

The AI Act works much like technical inspections of buildings: a garden shed is not held to the same requirements as a fifty-story skyscraper. The law classifies AI into four risk levels (Unacceptable, High, Limited and Minimal) and applies rules proportional to each one.

Most customer service chatbots fall into the Limited Risk category. This means that the system will not decide whether someone receives an organ transplant, but it will interact with humans. That’s why the law requires you to “pop the hood” and show what’s inside. Your chatbot should be like a nutrition label on a package of cookies: the user has the right to know exactly what they are consuming (in this case, that they are talking to an algorithm) and what ingredients (data) are being processed.
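To make that “nutrition label” concrete, here is a minimal Python sketch of a session opener that puts the AI disclosure in front of the user before any bot reply. The message text and function names are invented for illustration; they are not tied to any particular chatbot platform.

```python
# Minimal sketch of the "nutrition label" idea: tell the user up front that
# they are talking to an AI and how to reach a human. All names here are
# illustrative, not part of any specific chatbot framework.

AI_DISCLOSURE = (
    "You are chatting with an automated assistant (AI). "
    "Your messages are processed to answer support questions. "
    "Type 'agent' at any time to reach a human."
)

def start_session(session_id: str) -> dict:
    """Open a support session with the AI disclosure as the very first message."""
    return {
        "session_id": session_id,
        "messages": [{"role": "system-notice", "text": AI_DISCLOSURE}],
    }

if __name__ == "__main__":
    session = start_session("demo-001")
    print(session["messages"][0]["text"])
```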

What are the types of AI regulated by this law?

So that you don’t get lost in the legal maze, the regulation divides the tools into these categories (a quick code sketch of this triage follows the list):

  1. Unacceptable Risk: Systems that manipulate human behavior or score people (social scoring). These are prohibited. If your chatbot uses subliminal techniques to force a customer not to cancel a subscription, you are outside the law.
  2. High Risk: Systems that affect safety or fundamental rights (health, education, employment, credit decisions in banking). If your chatbot decides on its own whether a customer qualifies for a loan, it falls into this category and the audit requirements are brutal.
  3. Limited Risk (The standard of customer service): This is where your chatbot lives. The main obligation is transparency. The user must know that they are interacting with an AI.
  4. Minimal Risk: Spam filters or video game AI. These carry no additional obligations.
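As promised above, here is a rough way to encode that triage so product and legal speak the same language. The tiers follow the law’s four categories, but the mapping of specific capabilities to tiers is a hypothetical example, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment and audits"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative mapping of chatbot capabilities to tiers; the real
# classification of each use case is a call for your legal team.
CAPABILITY_TIERS = {
    "subliminal_retention_nudges": RiskTier.UNACCEPTABLE,
    "autonomous_credit_decision": RiskTier.HIGH,
    "faq_and_order_status": RiskTier.LIMITED,
    "internal_spam_filter": RiskTier.MINIMAL,
}

def triage(capability: str) -> RiskTier:
    """Default to HIGH for anything unknown, so it gets reviewed before launch."""
    return CAPABILITY_TIERS.get(capability, RiskTier.HIGH)

if __name__ == "__main__":
    for capability in CAPABILITY_TIERS:
        tier = triage(capability)
        print(f"{capability}: {tier.name} -> {tier.value}")
```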

When is your chatbot considered “illegal”?

Your customer service system could be illegal tomorrow if it does not comply with three basic pillars that are already enforceable in 2026 (a code sketch of the third pillar, the human hand-off, follows the list):

  • Misidentification: If the user believes they are speaking to “Laura from support” when it is actually a GPT-5 language model and this was not clearly disclosed at the start of the session.
  • Opacity in training: If you use copyrighted data without respecting EU directives or if you cannot demonstrate that you have mitigated bias (for example, that your AI is not less friendly to customers in certain regions).
  • No Kill Switch: If the customer does not have an easy way to get out of the AI loop and speak to a human, especially in cases of complex claims.
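Here is the hand-off sketch referenced above: a minimal Python message loop in which any request for a human breaks the AI loop. The functions answer_with_ai and route_to_human_agent are hypothetical stand-ins for whatever your real stack uses.

```python
# Minimal sketch of the human hand-off ("kill switch") pillar.

ESCALATION_PHRASES = ("agent", "human", "speak to a person", "complaint")

def answer_with_ai(message: str) -> str:
    # Placeholder for the model call in your actual stack.
    return f"[AI] Here is what I found about: {message}"

def route_to_human_agent(message: str) -> str:
    # Placeholder for handing the conversation to your human support queue.
    return "[Human queue] A support agent will take over this conversation."

def handle_message(message: str) -> str:
    """Break out of the AI loop whenever the user asks for a human."""
    if any(phrase in message.lower() for phrase in ESCALATION_PHRASES):
        return route_to_human_agent(message)
    return answer_with_ai(message)

if __name__ == "__main__":
    print(handle_message("Where is my order?"))
    print(handle_message("I want to speak to a person about my complaint"))
```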

Who is responsible: the provider or your company?

This is the million-dollar question in product meetings. Responsibility is shared, but the AI Act distinguishes between:

  • Providers: Those who develop the AI (such as OpenAI, Google, or a startup that builds its own model). They carry the heaviest technical load.
  • Deployers (You): The companies that use that AI to provide service to their customers. You are responsible for how it is used, for informing users, and for ensuring that the AI does not “freak out” and promise refunds that do not exist.

If you buy a third-party solution, make sure it carries the CE marking for artificial intelligence. It’s like buying an electrical outlet: you don’t generate the electricity, but you are responsible for not installing bare wires in your office.
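If it helps, that due diligence can live as a simple record in code so the gaps are visible before go-live. This is a sketch with invented field names, not an official checklist.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VendorComplianceRecord:
    """Deployer-side due-diligence notes for a third-party AI provider.
    The fields are illustrative, not an official AI Act checklist."""
    provider: str
    product: str
    ce_marked: bool
    transparency_docs_received: bool
    human_handoff_supported: bool
    reviewed_on: date = field(default_factory=date.today)

    def gaps(self) -> list[str]:
        """Items still missing before the chatbot goes in front of customers."""
        missing = []
        if not self.ce_marked:
            missing.append("CE marking evidence")
        if not self.transparency_docs_received:
            missing.append("provider transparency documentation")
        if not self.human_handoff_supported:
            missing.append("human escalation path")
        return missing

if __name__ == "__main__":
    record = VendorComplianceRecord(
        provider="ExampleAI", product="SupportBot",
        ce_marked=True, transparency_docs_received=False,
        human_handoff_supported=True,
    )
    print("Outstanding before go-live:", record.gaps() or "none")
```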

How much does it cost to adapt a chatbot to the AI Act?

There is no single figure, but we can talk about investment ranges depending on the state of your technology:

  • Basic adaptation (Transparency): If you already use a solid platform (such as Zendesk, Intercom or Salesforce with its AI layers), the cost is mainly configuration and legal design. It can range between €5,000 and €15,000 in UX consulting and adjustments.
  • Audit of High Risk systems: If your chatbot makes critical decisions (insurance, banking, health), prepare for external audits that can exceed €50,000 annually, added to the need for an AI Compliance Officer.

A brief history of how we got here

In 2021, the European Commission presented the first draft, long before ChatGPT was a household name. When generative AI exploded in 2023, the European Parliament had to rush to include rules on “General Purpose AI Models.”

The law was definitively approved in 2024 and has been applied in phases. In 2025, systems with unacceptable risk were banned and now, in 2026, obligations for general-purpose systems and transparency requirements for chatbots come into force. We have moved from the digital “Wild West” to a regulated environment where ethics is a technical requirement.

Myths vs Reality about the AI Act

Myth: The AI Act is going to kill innovation in Europe and we will fall behind the US or China.

Reality: The law creates a clear “playing field.” Companies know what to expect, which attracts long-term investment because it reduces legal uncertainty.

Myth: If my company is from the US, I don’t have to comply with the law.

Reality: If your chatbot serves a single citizen within the European Union, you are subject to the law. It works just like the GDPR: the impact is global.

Myth: Only big technology companies like Google or Meta should worry.

Reality: Any SME that uses a chatbot to sell or provide support must comply with transparency requirements.
