South Korean AI developer shuts down chatbot following privacy violation probe
13 Jan 2021 10:29 am by Wooyoung Lee
Scatter Lab, the South Korean developer of the artificial intelligence chatbot Iruda, is the target of a new investigation by the country's privacy regulator.
The probe follows controversy over unethical comments the chatbot made about sexual minorities and people with disabilities, as well as suspicions that the developer unlawfully collected mobile messages and used them as training material for the chatbot.
“The PIPC launched an investigation today and the agency will look into whether the developer violated the privacy law in the course of offering the chatbot service,” said Kim Jin-hae, spokesman for the Personal Information Protection Commission, in a briefing today.
The controversy spread through online communities after the chatbot directed hate speech at sexual minorities and people with disabilities. The chatbot was also found to have revealed people's names and addresses in certain conversations, according to local news reports.
Iruda was launched on Dec. 23 and presented as a 20-year-old female college student. The developer shut down the service on Tuesday amid the controversy.
In a statement announcing the closure of the chatbot service, Scatter Lab said it collected conversation data for its texting relationship analyzer service and used that data in developing the chatbot, but maintained that the collection and use of the data fell within the scope of the privacy law.
Iruda learned how to converse from some 10 billion conversations collected through the texting relationship analyzer service.
The PIPC spokesman said developers of AI chatbots must obtain consent from individual users whose conversations could be used as AI training material. Moreover, developers should “de-identify” individuals if the data contains sensitive information about users’ privacy.