The invisible AI in the office
While many medium-sized companies still hesitate over concerns about data protection violations and the loss of trade secrets, their employees have long been using tools like ChatGPT. This so-called "shadow AI" has become established in everyday work. The tension between the pressure to innovate and legal uncertainty leads to a dangerous standstill in many organisations. Yet the reality of artificial intelligence and data protection is far more nuanced than most people realise. This article debunks four of the biggest myths and shows what a safe, compliant path to AI deployment can look like.
Myth 1: "Anonymous" is always anonymous
The legal grey area of anonymity
The common assumption is simple: data is either personal or it is not. In reality, however, the question of when data truly counts as anonymous is the subject of intense legal debate. The reason for this ambiguity is that modern data-linking techniques make it possible to trace even supposedly anonymised data back to a person.
Two main positions therefore stand opposed in the legal debate: on the one hand, the "absolute view", according to which re-identification must be impossible under any circumstances; on the other, the "relative view", according to which re-identification merely has to be practically impossible. To date, courts and data protection authorities have not definitively settled this question. This means there is currently no anonymisation method that is universally recognised as safe.
The contextualised approach

Instead of relying on supposedly absolute anonymity, companies need to take a contextualised approach. For managing directors and IT managers, this means carefully documenting their own risk assessment in line with the accountability principle of Article 5 paragraph 2 GDPR. In addition, the chosen anonymisation techniques, and the reasons why re-identification is considered practically impossible in the specific context, should be recorded in detail.
This documentation not only provides legal protection, but also forces organisations to take a close look at the actual risks. It also enables transparent decision-making when implementing AI systems.
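One concrete building block for such a risk assessment is measuring how easily individual records can be singled out. The following is a minimal sketch in Python, assuming a tabular dataset in pandas; the column names, sample data and the k-anonymity metric are illustrative choices, not a prescribed method.

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group of records sharing the same
    quasi-identifier values; the dataset is k-anonymous for this k."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return int(group_sizes.min())

# Illustrative data: postcode, age band and job title are typical
# quasi-identifiers that enable re-identification when combined.
records = pd.DataFrame({
    "postcode": ["10115", "10115", "80331", "80331"],
    "age_band": ["30-39", "30-39", "40-49", "40-49"],
    "role":     ["engineer", "engineer", "manager", "manager"],
})

k = k_anonymity(records, ["postcode", "age_band", "role"])
print(f"k-anonymity: {k}")  # record this value in the documented risk assessment
```

A low k signals that individual records stand out and re-identification is realistic in the given context; the measured value and the resulting decision belong in the documentation described above.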
Myth 2: "Cloud AI is taboo – and your own AI is unaffordable"
The real risk with cloud solutions

The legal uncertainty surrounding anonymisation is a key reason why companies mistrust cloud solutions. However, the problem is not the technology itself, but the jurisdiction the provider is subject to. If data is processed on the servers of US providers, laws such as the US CLOUD Act create a risk of access by US authorities, even if the data is stored on European servers.
This concern is a key obstacle for many companies. They fear that sensitive business secrets could be leaked to the competition via an AI model. This fear is understandable: as soon as company data leaves the company's own data centre, control over its use decreases considerably.
Local AI as a viable alternative
The surprising alternative lies in local open-source AI models. These can be hosted in the company's own data centre or with a European cloud provider, which ensures full data control and GDPR compliance. This is not just a defensive security measure, but a strategic decision.
It enables superior, customised performance, because the models can be optimised on the company's own high-quality data. A case study from practice shows that this approach does not have to be unaffordable: a German municipality set up a local AI infrastructure for a working group of 20 to 30 users for only around EUR 10,000. Investing in local AI solutions is therefore a realistic option for medium-sized companies as well.
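What such a local deployment can look like in day-to-day use: the sketch below assumes an open-source model served on the company's own hardware through an Ollama instance on localhost; the model name and the prompt are placeholders. Because the request never leaves the local network, no data is transferred to a third-country provider.

```python
import requests

# Assumption: an Ollama server runs on-premises at this address and a
# local open-source model (the name is a placeholder) has been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model. The data stays inside
    the company network, so full control over its use is retained."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

print(ask_local_model("Summarise our meeting notes in three bullet points."))
```

The same pattern works with any self-hosted inference server; the decisive point is that the endpoint is operated under the company's own control and jurisdiction.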
Myth 3: "Once AI has learned something, it never forgets"
The right to be forgotten also applies to AI

The right to erasure under Article 17 GDPR is a cornerstone of data protection and also applies to data used to train an AI model. At first glance, this looks like an insurmountable technical hurdle. However, there are possible solutions here too, although a precise distinction must be made between the legal and the technical side.
Technical solutions
One method for implementing an erasure claim is so-called "re-training". This involves deliberately adjusting the model's internal parameters so that the information in question can no longer be derived from the model. This is technically complex, but it aims at actually removing the information.
This must be distinguished from the use of downstream filters, which block unwanted results before they reach the user. The German Data Protection Conference (Datenschutzkonferenz) has clarified that suppressing outputs by means of filters does not generally constitute erasure within the meaning of Article 17 GDPR, because the information potentially remains intact in the model itself.
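The difference is easy to see in code: a downstream filter merely suppresses what the model can still reproduce. The following is a minimal sketch, assuming a hypothetical block list of data subjects who have exercised their Article 17 right; all names are illustrative.

```python
# Hypothetical block list of data subjects who requested erasure.
ERASURE_REQUESTS = {"Jane Doe"}

def output_filter(model_output: str) -> str:
    """Downstream filter: withholds outputs mentioning a blocked person.
    The model's parameters are untouched, so the information is still
    encoded in the model -- this is suppression, not erasure within
    the meaning of Article 17 GDPR."""
    for name in ERASURE_REQUESTS:
        if name.lower() in model_output.lower():
            return "[output withheld: subject of an erasure request]"
    return model_output

print(output_filter("Jane Doe's salary was discussed in the report."))
```

Removing the filter would expose the information again, which is precisely why filtering alone does not satisfy the deletion claim; re-training, by contrast, changes the model itself.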
Even if the process is demanding, these approaches show that even complex AI technologies can be reconciled with fundamental rights. However, companies must carefully consider which method they use and document this decision.
Myth 4: "AI will take over and make all the decisions"
The legal requirement of human-in-the-loop

The idea that algorithms alone decide people's fate is unsettling. However, the GDPR puts a clear stop to this: Article 22 prohibits only those decisions based solely on automated processing that produce legal effects for the data subject or similarly significantly affect them.
The human-in-the-loop principle is crucial here. Human involvement must be more than a formal nod to an AI proposal: the people in the process must have genuine scope for decision-making. The law thus deliberately keeps humans as the final authority for critical decisions, in order to ensure accountability and to protect individuals from a purely machine-based judgement.
Practical implementation in day-to-day business
In AI-supported applicant selection, for example, the algorithm alone must not decide on rejections. An HR manager must actively review the AI recommendation, be able to overrule it, and make the final, reasoned decision. The same applies, for instance, to automated credit checks.
For companies, this requirement means organising their processes so that qualified employees can critically evaluate AI recommendations, and have both the expertise and the time to reach an independent decision. This requires appropriate training and clear responsibilities within the organisation.
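One way to anchor this organisationally is to make the human decision a mandatory, recorded step in the workflow. A minimal sketch, assuming a hypothetical applicant-screening process; all field names and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    ai_recommendation: str  # e.g. "reject" or "invite" -- advisory only
    human_decision: str     # the binding decision
    reviewer: str           # accountable employee
    justification: str      # documented reasoning, always required

def finalise(applicant_id: str, ai_recommendation: str,
             human_decision: str, reviewer: str,
             justification: str) -> Decision:
    """The AI output is only ever a recommendation; without a named
    reviewer and a written justification, no decision is issued."""
    if not reviewer or not justification.strip():
        raise ValueError("A human reviewer and a justification are mandatory")
    return Decision(applicant_id, ai_recommendation,
                    human_decision, reviewer, justification)

# The reviewer may overrule the AI: here the recommendation was
# "reject", but the human decides to invite the applicant.
record = finalise("A-1042", "reject", "invite",
                  reviewer="HR Manager Example",
                  justification="Relevant project experience outweighs the score.")
print(record)
```

Recording the reviewer and the justification makes the human scope for decision-making demonstrable, which supports the accountability that Article 22 presupposes.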
From hesitation to design: the path to data protection-compliant AI use
The data protection-compliant use of AI is not an insurmountable obstacle, but a design task. Legal frameworks and surprisingly practical technical solutions already exist for it. Blanket bans are often based on false assumptions that obscure the view of feasible and secure application scenarios.
Medium-sized companies should take the plunge and use AI systems in a targeted, data protection-compliant manner. It helps to keep the four debunked myths in mind: anonymity is context-dependent, local AI is affordable, the right to be forgotten is technically feasible, and humans remain the final decision-making authority.
So instead of asking whether companies can use AI in compliance with data protection regulations, the crucial question for the future is: how do they design their use in such a way that it is not only legally compliant, but also creates genuine trust among employees and customers? Every organisation should answer this question for itself and proactively exploit the opportunities offered by AI technologies.