Meta Wins Legal Battle: Can Train AI with EU User Data
Introduction
On 23 May 2025, the Higher Regional Court of Cologne in Germany issued an important ruling. The court dismissed an attempt to stop Meta Platforms Inc. (the company behind Facebook and Instagram) from using European users’ data to train artificial intelligence (AI). The decision has major implications for the future of AI and data privacy law in Europe.
Meta had earlier said it would use public posts from European Union (EU) users to help train its AI models. This announcement raised a lot of concern among people and organizations. Some felt their data was being used without clear consent, while others supported Meta's plan, saying it was legal under current rules.
This article explains the background of the case, the legal arguments from both sides, the court’s decision, and what this means for companies and users across the EU.
What Did Meta Announce?
In mid-2024, Meta informed its users in the EU that it planned to use public posts—like comments, photos, and videos on Facebook or Instagram—for training AI systems. This includes generative AI, which is used to build tools like chatbots, translation software, or content creation systems.
Meta also said that users could opt out if they didn’t want their data to be used. The deadline for opting out was set for 27 May 2025. If users did nothing, Meta would treat that as permission to use their data.
But some groups, like the Consumer Protection Organization of North Rhine-Westphalia, argued that this process was unfair. They said that users should first be asked for permission (opt-in), rather than being automatically included unless they say no.
The Legal Background: GDPR and DMA
In Europe, there are strict laws about how companies can use people’s personal data.
General Data Protection Regulation (GDPR)
The GDPR is the main privacy law in the EU. It says that companies must have a clear legal reason to use personal data. Meta said it was using a concept called “legitimate interest”, which allows data to be used if:
- It’s necessary for the company’s goals, and
- It doesn’t seriously harm the user’s rights.
Meta also said it had given people a clear way to opt out, which reduces harm to users.
But consumer groups disagreed. They said Meta should be using "explicit consent" (also called opt-in). This means users must say “yes” before their data is used. They also said that some types of data—like health information or religion—are more sensitive and need stronger protection.
Digital Markets Act (DMA)
Another law, the Digital Markets Act (DMA), is also important. The DMA applies to big tech companies like Meta, which are designated as "gatekeepers". These companies must follow special rules to avoid using their market power unfairly.
One big question was whether Meta’s AI training was combining user data from different platforms, like Facebook, Instagram, and WhatsApp, which could be against the DMA.
Mixed Opinions from Data Protection Authorities
Irish Data Protection Authority (DPC)
Because Meta’s EU headquarters is in Ireland, the Irish Data Protection Commission (DPC) is its main data regulator in Europe. After more than a year of investigation, the Irish DPC approved Meta’s AI training plan. It said that Meta had made some good changes:
- More transparent notices
- Easier opt-out forms
- Clear explanations about how the data would be used
The DPC said it would review the situation again in October 2025, to make sure everything stays within the law.
Hamburg Data Protection Commissioner (HmbBfDI)
Not all regulators agreed. The Hamburg Data Protection Commissioner (HmbBfDI) in Germany had a very different opinion.
Just before Meta’s AI training was set to begin, the HmbBfDI started urgent legal action against Meta. They asked for AI training in Germany to be put on hold for at least three months. Meta had to reply to this request by 26 May 2025.
The Hamburg authority raised several serious concerns:
- Why was Meta using such large amounts of data?
- Even if names were removed (called “de-identification”), could users still be harmed?
- Are public posts truly public if they are only visible after logging in?
- What about old data that people shared years ago? Did they know it could be used for AI?
- What about people shown in pictures who don’t even have Facebook accounts?
These questions show that data privacy laws are still evolving, especially when it comes to new technologies like AI.
Court Case: Consumer Group vs. Meta
Because of the controversy, the Consumer Protection Organization of North Rhine-Westphalia filed a case at the Higher Regional Court of Cologne. They wanted to stop Meta from using user data for AI, at least temporarily.
The consumer group said:
- Meta’s legal basis (legitimate interest) was not good enough.
- Meta should ask for consent (opt-in) from users.
- Meta was violating the DMA by mixing data from different platforms.
But the court rejected the case. It ruled that:
- Meta’s interest in training AI outweighed the harm to users.
- Meta had reduced the risks by giving users clear options to opt out.
- There was no illegal combination of personal data, as Meta said it did not merge individual data from different platforms.
This decision was a big win for Meta.
What the Court Decision Means
The court’s decision doesn’t mean that anything goes for AI training in Europe. But it does show that AI training is not automatically illegal, even if it uses personal data.
The case gives us several key lessons:
- AI can be trained using user data, but companies must follow strict rules.
- Transparency is essential. Users must know what’s happening.
- Opt-out systems may be legal in some cases, but this is still debated.
- Different EU authorities may have different opinions, causing legal uncertainty.
- Legal reviews and court cases will continue, especially as new AI tools emerge.
Challenges with Data and AI
AI systems need lots of data to become smart and useful. But much of this data is about people—what they say, do, or share online.
This creates a conflict:
- AI developers want more data to improve their tools.
- Privacy advocates want better control for users.
Even if names or faces are removed, patterns in the data can still identify people. For example, a unique combination of location, time, and interest might reveal who someone is.
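To see why, consider a minimal sketch with invented data. The fields and values below are illustrative assumptions, not real records; the point is simply that a combination of ordinary attributes can be unique to one person even after names are stripped.

```python
from collections import Counter

# Toy "de-identified" records: names removed, but quasi-identifiers remain.
# All values here are invented for illustration.
records = [
    {"city": "Cologne", "active_hour": 23, "interest": "astrophotography"},
    {"city": "Cologne", "active_hour": 9,  "interest": "football"},
    {"city": "Hamburg", "active_hour": 23, "interest": "astrophotography"},
    {"city": "Cologne", "active_hour": 9,  "interest": "football"},
]

# Count how many records share each (city, hour, interest) combination.
combos = Counter(
    (r["city"], r["active_hour"], r["interest"]) for r in records
)

# A combination seen only once pins down a single person: the dataset
# fails "k-anonymity" for k >= 2 on these quasi-identifiers.
unique = [combo for combo, count in combos.items() if count == 1]
print(f"{len(unique)} of {len(combos)} combinations identify one record:")
for combo in unique:
    print("  re-identifiable:", combo)
```

This is the intuition behind k-anonymity: a dataset only hides individuals if every combination of quasi-identifiers is shared by at least k people.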
Also, older posts may have been shared under different terms. People didn’t know AI training was a possibility back then.
These issues are difficult to resolve, and they show how quickly technology moves ahead of the law.
What Should Companies Do Now?
For companies like Meta—and any business using AI trained with user data—this case sends a clear message:
- Follow the GDPR and DMA closely.
- Use transparency notices that are simple and easy to understand.
- Offer easy opt-out options.
- Work with data protection authorities early.
- Separate sensitive data like health or religion from general data (a rough sketch of such filtering follows this list).
- Avoid combining data from different services unless users are informed.
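Several of these steps can be expressed as a pre-filtering pass over candidate training data. The sketch below is a minimal illustration, not Meta’s actual pipeline: the Post structure, the category labels, and the eligible_for_training helper are all hypothetical, and it assumes an upstream classifier has already tagged special-category content.

```python
from dataclasses import dataclass

# Hypothetical special-category labels (Art. 9 GDPR names health data,
# religious beliefs, and political opinions among protected categories).
SPECIAL_CATEGORIES = {"health", "religion", "political_opinion"}

@dataclass
class Post:
    user_id: str
    text: str
    is_public: bool
    categories: set[str]  # labels from an assumed upstream classifier

def eligible_for_training(post: Post, opted_out: set[str]) -> bool:
    """Apply the checklist above: public posts only, honor opt-outs,
    and exclude special-category content entirely."""
    if not post.is_public:
        return False
    if post.user_id in opted_out:
        return False
    if post.categories & SPECIAL_CATEGORIES:
        return False
    return True

# Example: only the first post survives the filter.
opted_out = {"user-42"}
posts = [
    Post("user-7", "Nice sunset over the Rhine", True, set()),
    Post("user-42", "My holiday photos", True, set()),            # opted out
    Post("user-7", "Diary entry", False, set()),                  # not public
    Post("user-9", "Post about my diagnosis", True, {"health"}),  # sensitive
]
corpus = [p.text for p in posts if eligible_for_training(p, opted_out)]
print(corpus)  # ['Nice sunset over the Rhine']
```

Treating exclusion as the default, where a post must pass every check before it is admitted, mirrors the data-minimisation mindset regulators expect.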
Companies must build trust with users and prove they’re acting responsibly. AI is powerful, but it must be used fairly.
What About the Users?
If you are a Facebook or Instagram user in the EU, here’s what you should know:
- Your public posts may be used to train AI.
- Meta sent emails to inform you about this.
- You have the right to object and stop your data from being used.
- The deadline to opt out was 27 May 2025.
- You can still ask Meta about your data and file complaints with your country’s data protection authority.
Being informed helps you make better choices.
Conclusion
The case between Meta and the Consumer Protection Organization in Germany is a landmark in the discussion about AI and personal data. It shows that:
- Laws like the GDPR and DMA are being tested in real time.
- AI is here to stay, and the way it’s trained matters a lot.
- Courts and regulators are still figuring out how to balance innovation with privacy.
While the court ruled in Meta’s favor, the debate is far from over. More decisions will follow as AI grows in importance. For now, this case sets a precedent: companies can train AI with existing data if they follow proper rules and offer users real choices.
As AI becomes a part of daily life, it’s up to companies, governments, and users to protect rights and encourage innovation at the same time.