
The Garante per la Protezione dei Dati Personali, Italy’s data protection regulator, has fined Luka Inc., the developer of the Replika AI chatbot, €5 million for violating the GDPR. The Garante found that Replika failed to obtain valid consent, mishandled personal data, and exposed minors to potentially harmful content. The €5 million penalty (about $5.4 million) is among Europe’s most significant regulatory actions against an AI developer this year.
European Regulators Set the Tone on AI Accountability
Replika, launched in 2017 by San Francisco-based startup Luka Inc., lets users engage with AI avatars that simulate emotional companionship. Initially promoted as a mental health support tool, Replika has evolved into a broader AI-powered social interaction platform. Critics, however, contend that its lack of safety checks and user protections poses risks, particularly for younger users. The Garante emphasised that the app’s failure to keep children away from emotionally charged content was a significant shortcoming.
The Garante raised serious concerns about Replika’s data practices, especially the privacy risks the app poses to younger, vulnerable users. Investigators found that Replika processed personal data without a valid legal basis and failed to implement safeguards against unauthorized access. The absence of age verification, combined with the chatbot’s emotionally intense interactions, created major privacy risks for children.
These violations led the authority to impose a €5 million fine on Replika’s developer, Luka Inc. In February 2023, the Italian regulator had already temporarily suspended Replika’s operations after early findings showed risks to minors and weak safeguards. Officials cited exposure to inappropriate content and inadequate data protection as key factors behind both the suspension and the eventual penalty.
The action against Luka Inc. is part of European regulators’ broader crackdown on AI systems that handle personal data. Italy, in particular, has taken the lead in protecting digital privacy: the country’s regulator temporarily banned OpenAI’s ChatGPT in 2023 and later imposed a €15 million fine for GDPR noncompliance.
Replika GDPR Fine and Rising AI Scrutiny
As generative AI becomes more deeply integrated into digital experiences, from virtual therapists to relationship simulators, governments throughout the EU are stepping up efforts to ensure that these technologies are developed and deployed in line with privacy rules. Many of these AI systems collect highly sensitive user data, underscoring the need for strict safeguards.
The Replika GDPR fine is thus part of a larger push across the European Union to bring generative AI systems into compliance with privacy law. The investigation also raises questions about how AI models are developed: in particular, whether user chats are fed into subsequent model iterations without informed consent.
Alongside this enforcement action, the Garante has launched a fresh investigation into whether Replika’s AI training practices comply with the EU’s General Data Protection Regulation (GDPR). This includes examining how the chatbot’s underlying language model processes personal information. Replika has yet to issue a public statement in response to the ruling.
Conclusion
The Replika GDPR fine is a stark reminder to AI developers operating in Europe: comply with data protection law or face serious consequences. Companies building generative AI must embed privacy-by-design principles, ensure clear consent processes, and protect vulnerable users.
As the EU strengthens its stance on AI regulation, the Replika case highlights a clear regulatory trend: artificial intelligence platforms must advance not only technologically but also ethically and legally to satisfy the demands of a digital society built on trust and accountability.