
Why was ChatGPT Banned in Italy?

ChatGPT banned in Italy over privacy concerns, Hannah Walker, Analyst at TenIntelligence, reports…

Last week the Italian Data Protection Authority (the Garante) took steps to temporarily prevent ChatGPT from processing the personal data of individuals located within Italy, so that ChatGPT’s privacy practices can be investigated.

The Garante is implementing the ban due to concerns following ChatGPT’s recent data breach, in which information such as users’ chat titles and payment details was exposed. The breach raised further questions about potential GDPR violations. The main concerns raised by the Garante were the following:

  • OpenAI could not provide users with the required transparency information about the personal data being processed by ChatGPT.
  • There is no legal basis for the mass collection and processing of personal data used to “train” the algorithms the platform relies on to operate.
  • The processing of personal data may produce inaccurate results.
  • There is no verification of users’ ages, meaning users under 13 could be shown age-inappropriate content. By contrast, Google’s AI chatbot “Bard” is only available to users over 18.

What is ChatGPT?

ChatGPT is an AI language model developed by OpenAI, based on the GPT (Generative Pre-trained Transformer) architecture. ChatGPT is designed to generate human-like text based on the input it receives, and it can be used for various purposes, such as answering questions, engaging in conversation, writing articles, generating creative content, and more. It has been trained on a vast dataset of text from the internet, which allows it to generate contextually relevant and coherent responses. However, it is important to note that its knowledge is limited to the information available up to September 2021.

The Technical Bit

ChatGPT is based on the GPT (Generative Pre-trained Transformer) architecture, which is a type of deep learning model specifically designed for natural language processing tasks. The technical aspects of ChatGPT can be divided into three main components: the Transformer architecture, the pre-training, and the fine-tuning.

  1. Transformer architecture: The Transformer is a neural network architecture introduced by Vaswani et al. (2017) in the paper “Attention Is All You Need.” It is designed to handle sequential data such as text, but unlike traditional recurrent neural networks (RNNs) or long short-term memory networks (LSTMs), it relies heavily on the attention mechanism to process input data in parallel rather than sequentially (see the sketch after this list). This enables the Transformer to scale more effectively and handle long-range dependencies in text.
  2. Pre-training: ChatGPT is pre-trained on a large corpus of text data from the internet. During pre-training, it learns to generate text by predicting the next word in a sentence, given the words that came before it. This process is known as unsupervised learning, as it doesn’t require labelled data. The model learns various language patterns, grammar, facts, and some reasoning abilities through this exposure to a diverse range of text.
  3. Fine-tuning: After pre-training, the model is fine-tuned on a smaller, more specific dataset with human-generated input-output pairs. This step is considered supervised learning, as it uses labelled data. Fine-tuning helps the model generalize its learned knowledge to respond more accurately and appropriately to user inputs, adapting its behaviour to specific tasks or conversational domains.
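
To make the attention mechanism described in point 1 concrete, below is a minimal sketch of scaled dot-product attention in Python using NumPy. It is illustrative only: a real Transformer uses learned projection matrices for the queries, keys and values, multiple attention heads, positional encodings and masking, none of which are shown here.

    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the row max before exponentiating, for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, per Vaswani et al. (2017).
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
        weights = softmax(scores)        # each row sums to 1: how strongly a token attends to the others
        return weights @ V               # weighted sum of value vectors

    # Toy example: a "sentence" of 4 tokens, each embedded in 8 dimensions.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    # In a real model, Q, K and V are separate learned projections of x;
    # here we reuse x directly to keep the sketch minimal.
    out = scaled_dot_product_attention(x, x, x)
    print(out.shape)  # (4, 8): one context-aware vector per token

Because every token’s attention weights over the whole sequence are computed at once with matrix multiplications, the input is processed in parallel rather than one step at a time, which is the scaling advantage over RNNs noted above.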

It’s important to note that, although ChatGPT is a powerful language model, it can sometimes generate incorrect or nonsensical answers, owing to biases in the training data or to its lack of the explicit understanding of the world that humans have. Additionally, it may be sensitive to the phrasing of input queries and might generate different responses based on slight changes in phrasing.

ChatGPT and Italy’s Ban: Exploring the Implications

OpenAI now has 20 days to respond to the alleged breaches and to provide details of its corrective measures. If it fails to provide the requested information, it could face a fine of up to €20 million (roughly £17.5 million) or 4% of annual global turnover, whichever is higher.

The chatbot is already blocked in several countries, including China, Russia and North Korea. The Italian ban also follows recent calls from key figures such as Elon Musk to pause AI development until AI systems are better understood.

ChatGPT and Italy’s Ban: What You Need to Know

Italy has become the first Western country to block the advanced chatbot ChatGPT, developed by OpenAI, due to privacy concerns raised by the Italian data protection authority. The regulator has banned the service and initiated an investigation into OpenAI’s compliance with the General Data Protection Regulation (GDPR). The watchdog cited a data breach involving user conversations and payment information and expressed concerns about the mass collection and storage of personal data for algorithm training. It also highlighted the potential exposure of minors to unsuitable content due to the lack of age verification.

OpenAI has disabled ChatGPT for users in Italy and stated its commitment to complying with GDPR and other privacy laws. The company expressed its belief in the necessity of AI regulation and its intention to work closely with the Italian data protection regulator. Other countries are monitoring the situation, with the Irish data protection commission coordinating with EU data protection authorities and the UK’s Information Commissioner’s Office stressing the importance of compliance with data protection laws.


TenIntelligence Thoughts

ChatGPT has proven how powerful and easily accessible AI can be, but it has also shown that the technology will need additional legislation and stricter regulation. Although the EU is currently working on AI legislation, consumers remain at risk from already-available technology until that legislation takes effect.

The risks of ChatGPT to cybersecurity and due diligence could be immense. Here are some points to consider:

  • With Microsoft backing OpenAI and looking to integrate the same chatbot technology into its search engine Bing, false information could spread easily given the lack of quality checks on the data being collected. From a due diligence standpoint, giving clients false information without confirming its validity could damage a firm’s reputation.
  • A better standard of quality assurance needs to be applied to the information the chatbot collects and shares.
  • Another issue with AI chatbots is how far they can go. It is already possible to ask the chatbot to check code snippets for security flaws (see the sketch after this list); could the same capability be used to target companies by finding weaknesses in their security?
  • It is also possible to ask the chatbot to generate code, which means it can be asked to generate malicious code as well.
  • Its ability to generate human-like text makes it easy to produce convincing phishing content, malicious links included.
  • There have also been reports of cyber-criminals working on ‘deep fake chatbots’, using ChatGPT to pose as AI assistants on popular websites and extract information from unsuspecting users.
  • OpenAI has built some ethical limitations into ChatGPT: someone cannot outright ask the chatbot to write a phishing email. Phrased differently, however, it may still be possible to have it write an email with a sense of authority that includes a specific link leading to potentially fraudulent pages.
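
To illustrate the code-review use mentioned in the list above, here is a minimal sketch using the openai Python package (the pre-1.0 ChatCompletion interface current at the time of writing). The model name, prompt wording and the snippet under review are illustrative placeholders, not a recommended setup.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secure store in practice

    # Hypothetical snippet to review: string formatting in SQL is a classic injection flaw.
    snippet = '''
    def get_user(cursor, username):
        cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
        return cursor.fetchone()
    '''

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a code security reviewer."},
            {"role": "user", "content": f"List any security flaws in this code:\n{snippet}"},
        ],
    )

    # A typical reply flags the SQL injection risk and suggests parameterised queries.
    print(response["choices"][0]["message"]["content"])

The same request, rephrased to exploit rather than review the flaw, is exactly the dual-use risk the list above describes.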

While ChatGPT carries risks, there are also benefits to using the chatbot, with more likely to emerge as it develops further.

  • Although the chatbot can be misused to create malicious code, businesses can also turn its coding abilities to their advantage, finding potential exploits early enough to fix them.
  • It could also help strengthen security: writing defensive code, assisting with file management, and encrypting and storing files in safer locations.
  • Its coding abilities can also produce basic PowerShell scripts useful for malware analysis, or Python scripts for detecting network port scans and blocking malicious IPs (see the sketch after this list).
  • It can also carry out repetitive tasks autonomously, for example helping to draft penetration test reports.
  • It can also help with aspects of cybersecurity training and awareness courses for employees, prompting them to think twice before opening certain emails. With further development, AI could go further, scanning, identifying and potentially isolating phishing emails automatically.
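
As a flavour of the defensive scripting described above, here is a minimal sketch of the kind of port-scan detector ChatGPT might be asked to draft. The log format (one “source_ip dest_port” pair per line) and the threshold are illustrative assumptions, not a production detection rule.

    from collections import defaultdict

    SCAN_THRESHOLD = 20  # distinct ports per source; an illustrative cutoff, tune per network

    def find_scanners(log_lines, threshold=SCAN_THRESHOLD):
        # A source that probes many distinct ports in the log looks like a scanner.
        ports_by_source = defaultdict(set)
        for line in log_lines:
            try:
                source_ip, dest_port = line.split()
                ports_by_source[source_ip].add(int(dest_port))
            except ValueError:
                continue  # skip malformed lines
        return {ip: len(ports) for ip, ports in ports_by_source.items()
                if len(ports) >= threshold}

    if __name__ == "__main__":
        # Toy data: 10.0.0.5 probes 25 ports; 10.0.0.9 makes two ordinary connections.
        lines = [f"10.0.0.5 {port}" for port in range(1, 26)]
        lines += ["10.0.0.9 443", "10.0.0.9 80"]
        for ip, count in find_scanners(lines).items():
            print(f"Possible port scan from {ip}: {count} distinct ports")

Flagged IPs could then be fed into a firewall rule, covering the blocking half of the use case mentioned above.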

Hannah


Written by

Hannah Walker | Analyst at TenIntelligence