According to reports from Reuters, The New York Times, and other media outlets, Italy's privacy regulator has issued a temporary ban on OpenAI's ChatGPT, saying the chatbot collects and stores personal data improperly. OpenAI has since disabled ChatGPT in Italy and stopped processing data belonging to Italian users.

Italy’s data protection agency has accused OpenAI of using the collected data to train ChatGPT’s algorithms without a legal basis. OpenAI now faces a regulatory challenge: it has 20 days to prove that it has taken effective measures to comply with EU privacy rules; otherwise, the company may be fined under the EU’s General Data Protection Regulation, up to 4% of its global annual revenue or $21.8 million, whichever is higher.

In addition, Italian regulators emphasized that OpenAI lacks mechanisms to verify users’ ages and to prevent children under 13 from using the chatbot, which may expose minors to inappropriate content. To satisfy regulators, OpenAI may need to add age checks, update its privacy policy, and give users more detailed information about how their data is used.

Italy is just one example of broader AI privacy concerns: a slew of similar regulatory proposals is under consideration in the U.S. and across Europe, addressing issues ranging from data protection to disinformation and information security.

Traditional news publishers are studying whether their content has been widely used to train AI tools such as ChatGPT and are considering legal remedies against AI companies, according to the Wall Street Journal. Meanwhile, some large businesses and organizations, such as JPMorgan Chase, have blocked access to ChatGPT from their internal networks in recent months. New York City’s public school system also banned the chatbot from its networks and devices in January, citing concerns about cheating and its impact on student learning.
