Couldn’t make it to our Empowering Privacy Germany event? Not to worry! In this blog series, we’ll share key takeaways from the panels, keynotes, and presentations that made our Munich event a great place to connect with privacy and compliance leaders.
Among the standout sessions was a panel on EU AI Policy, focusing on personalization, AI agents, and the potential impact of the Omnibus IV package.
The discussion—featuring experts from regulators, big tech, legal practice, and industry—offered diverse perspectives on the regulatory challenges and opportunities shaping the future of artificial intelligence in Europe.
Panelists included:
- Marijam Darakhshan, Director Privacy Global SMB at DataGuard
- Prof. Dr. Dieter Kugelmann, State Data Protection Commissioner of Rhineland-Palatinate
- Marieke Luise Merkle, Lawyer - Associated Partner at Noerr (IT, AI & Data Law)
- Sophie Sohm, Privacy Policy Manager – Innovation & AI at Meta
- Rebekka Weiß, Head of Regulatory Policy and Senior Manager Government Affairs at Microsoft
- Jörn Wittmann, Group Privacy Ambassador at Volkswagen Group
The conversation revealed that while AI’s potential is vast (and potentially dangerous), the rules governing its use are still evolving—and in many areas, unclear. Here are the takeaways that really stood out to us.
The legal gray zones of AI training
A recurring theme during the panel was the uncertainty around the legal basis for AI training. While the EU AI Act is intended to provide clarity, panelists noted that several foundational terms—such as “providers,” “deployers,” and “end users”—are not yet well-defined. This ambiguity complicates compliance, especially when it comes to high-risk AI systems, where definitions and enforcement timelines remain in flux.
Adding to this, Euronews recently reported that Germany was one of several EU countries that did not meet the August 2 deadline for appointing national authorities to oversee compliance with the AI Act. Delays like this add to legal uncertainty, which risks stalling innovation across the continent.
Case study: Meta’s approach to AI development
Sophie Sohm shared Meta’s experience in attempting to compliantly train its open-source AI models with first-party data—specifically, public posts from users over 18. The company based its approach on legitimate interest as a legal basis, offering opt-out mechanisms and notifications for users and even for non-users shown in their friends’ public images.
From Meta’s perspective, the company took reasonable steps to act responsibly, engaging with the Irish Data Protection Commission, waiting a year for its feedback, and implementing effective mitigation measures.
However, German consumer protection advocates did not consider Meta’s approach lawful, leading the Verbraucherzentrale NRW to apply for a preliminary injunction to block the AI training. In the end, the Higher Regional Court of Cologne ruled in Meta’s favor, citing effective safeguards against infringement of people’s personal rights.
Fellow panelist and head of Rhineland-Palatinate’s Data Protection Authority, Prof. Dr. Dieter Kugelmann, noted that Meta’s mitigation documentation arrived significantly late in the process. While the documentation ultimately proved convincing, the delay on Meta’s side slowed the authorities in reaching a final decision.
Both speakers’ perspectives are, in fact, a perfect example of the procedural frictions that can slow effective regulation and AI innovation. The willingness to work together is there, but industry and government need to find better ways to stay in conversation.
Call for collaboration and pragmatism
One outcome all panelists agreed on is that AI governance cannot be approached in isolation. Data protection proceedings can take months or even more than a year, meaning many decisions require joint problem-solving between regulators and industry.
The fact is that AI training is not a black-and-white process. There are countless ways to design systems and mitigate risks.
Adding to this is the fact that AI development will never be “done.” Its rapid evolution demands constant adaptation from both compliance frameworks and technological safeguards. Continuous dialogue between both sides is the only sure way to take the right steps forward.
Risk management is a balancing act
Risk mitigation was another central topic for the panel. Experts stressed that while legitimate interest can be a valid legal basis for AI training—especially when the aim is to improve model quality—it is essential to question whether all the data companies collect is truly necessary.
Technical complexity compounds the challenge. Large Language Models (LLMs) process data in varied ways, and there is still much we don’t know about their inner workings. This uncertainty makes it critical to assess risks not just from a legal standpoint, but also from a technical and ethical perspective.
The group agreed that the AI model’s end purpose also matters. There is a significant difference in context between training AI for commercial gain and using it for life-saving medical research. Governance must take these distinctions into account, and the only way to do so effectively is to involve the people building the technology in policy discussions.
The road ahead: Regulation with purpose
While regulation is necessary for managing powerful technologies, what exists now is far from perfect. The AI Act still has some regulatory gaps and inaccuracies, but speakers warned against focusing too much on what’s missing. Instead, they see an opportunity to define more pragmatic, innovation-friendly rules.
One area where speakers want to focus is resolving existing GDPR challenges. At first glance, GDPR may feel disconnected from AI topics, but it can make or break the AI Act’s effectiveness. If organizations cannot lawfully use the data needed to train models in the first place, compliant AI development becomes impossible.
Finally, several panelists urged regulators to remain pragmatic. Overly cautious or unclear rules risk stifling innovation. As Marieke Luise Merkle put it, if there’s uncertainty, many ideas stop.
Key takeaways for privacy, compliance, and AI leaders
So what can you take back to your team from this panel? Here are the main points:
- Clarify roles and definitions early: In the absence of clear guidance from governing bodies, establish internal interpretations of key AI Act terms and be ready to adapt as official definitions emerge.
- Document mitigation measures proactively: Avoid delays by preparing transparency reports and safeguards before regulatory inquiries arise.
- Foster regulator-industry dialogue: Complex challenges like AI training require ongoing, collaborative problem-solving.
- Balance ambition with necessity: Collect only the data essential to achieving your AI objectives and continuously assess whether the risk justifies the reward.
- Differentiate by purpose: Align governance and risk tolerance with the intended use of the AI system.
- Plan for continuous change: AI regulation and technology will both evolve; flexibility is a strategic advantage.
Final thoughts
The EU AI Policy panel was itself a great example of the diversity of perspectives needed to build a responsible and productive AI future. From big tech to local regulators, from legal experts to compliance leaders, the conversation revealed that pragmatism, collaboration, and adaptability will be as important as any written law in guiding AI’s development.
What the panel didn’t cover in depth were the finer details of the EU AI Act. What are the exact requirements? When do deadlines apply? You can find the answers in our ultimate guide on the EU AI Act.