What a temporary ban on ChatGPT by a data regulator means for IT contractors

It’s not all plain sailing for the AI darling of the moment -- ChatGPT. On March 31st, 2023, the Italian data protection authority (‘Garante’) announced that it had imposed a temporary limitation on the processing of personal data by OpenAI LLC, the creator of the popular AI-powered conversational tool. The notice was issued in response to several infringements of the General Data Protection Regulation (GDPR).

Garante vs OpenAI's showpiece

Garante found that ChatGPT's service lacked transparency, that OpenAI had no appropriate legal basis for its processing of personal data, and that the tool's processing of personal data could be inaccurate.

In addition, OpenAI had not implemented an age-verification system, meaning minors could access the tool unchecked.

For IT contractors and other ContractorUK readers, these findings from the data watchdog underscore how important it is for companies that collect and process personal data to be accountable for their actions and to comply with the relevant data protection laws and regulations, writes Maude Lindfelt of digital law firm Gerrish Legal.

Consequently, Garante imposed a temporary ban on OpenAI's processing of personal data in Italy, meaning the company is prohibited from collecting or using any personal data from Italian users until the identified issues have been remedied. This is not just a bit of non-plain sailing for ChatGPT; it is also a far cry from the admiration the tool has grown accustomed to of late.

Will Garante's ruling on OpenAI's data-handling have wider implications?

Whether other countries, such as the UK, will follow Italy's decision to restrict ChatGPT remains uncertain.

Some countries in Europe, such as Ireland, have expressed an interest in investigating the basis for the Italian ban, while others have suggested that a ban on AI systems may not be the solution.

In response to the situation, the European Data Protection Board has set up a taskforce dedicated to ChatGPT. The aim of the taskforce is to improve collaboration among data protection authorities and to share information about potential enforcement actions.

How Italy’s neighbours have reacted to its data watchdog mauling ChatGPT

Germany's data protection commissioner has indicated that Germany could follow in Italy's footsteps by blocking ChatGPT over data security concerns. This suggests that other countries may take similar action if they believe that ChatGPT violates their data protection regulations.

The Spanish data protection authority (AEPD) has likewise opened an investigation into OpenAI for a possible breach of data protection regulations.

The AEPD advocates the development and implementation of innovative technologies such as AI, while ensuring compliance with current legislation. If ChatGPT is found to violate data protection standards, Spain may therefore follow Italy's lead and restrict it as well.

Meanwhile, the European Commission executive vice president, Margrethe Vestager, has taken to Twitter to state that the EU will regulate the uses of AI rather than the technology itself. Likewise, France's digital minister Jean-Noël Barrot has argued that it would make more sense to regulate and “master” new technologies, rather than try to ban them altogether.

The UK -- never a fan of the ban

The response in the UK to Italy’s data watchdog mauling ChatGPT has centred on developing guidelines for the ethical use of AI rather than outright bans.

The UK government has also invested significantly in developing its AI sector, and so it is unlikely that it would seek to restrict the use of a major AI tool like ChatGPT without compelling evidence of harm. That being said, the regulatory landscape around AI is rapidly evolving, and it is always possible that new developments or incidents could prompt changes in policy.  

ChatGPT in a commercial environment

As OpenAI faces increasing scrutiny from European regulators, compliance with data protection regulations will be crucial for companies operating in the EU.

This incident with ChatGPT serves as a reminder that companies cannot ignore regulatory compliance if they want to operate in the EU. The Italian DPA gave the company behind ChatGPT 20 days to address the data protection issues or face a fine of up to €20 million or 4% of its total worldwide annual turnover, whichever is higher. It remains to be seen whether other countries will take similar action against ChatGPT or other AI chatbots, but the episode underscores the importance of data protection compliance for companies developing and deploying AI systems in Europe.
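For readers wondering how that ceiling works in practice, the GDPR caps fines for the most serious infringements at the higher of €20 million or 4% of total worldwide annual turnover. The short Python sketch below simply illustrates that arithmetic; the turnover figure it uses is purely hypothetical and is not OpenAI's actual revenue.

# Illustrative sketch of the GDPR (Article 83(5)) maximum-fine ceiling:
# the higher of EUR 20 million or 4% of total worldwide annual turnover.
# The turnover figure used below is hypothetical, for illustration only.

def gdpr_max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine for the most serious GDPR infringements."""
    return max(20_000_000, 0.04 * annual_worldwide_turnover_eur)

# With a hypothetical turnover of EUR 1 billion, 4% is EUR 40m,
# which exceeds the EUR 20m floor, so the ceiling is EUR 40m.
print(f"Maximum fine: EUR {gdpr_max_fine(1_000_000_000):,.0f}")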

Are UK IT contractors obligated to disclose the use of ChatGPT or similar AI in their work?

In April 2021, the UK government took a step towards increased transparency and the ethical use of AI by releasing proposed guidelines for organisations using the developing technology.

The guidelines suggest that companies using AI should be upfront about their use of the technology and should clearly communicate to stakeholders when and how AI is being used. While these guidelines are not legally binding, they indicate a promising direction for the responsible use of AI in the UK.

Industry associations and professional bodies have also been taking action on the ethical use of AI. The Institution of Engineering and Technology (IET) has published a Code of Practice for the ethical use of AI, which includes a section on transparency and disclosure. The Code recommends that organisations using AI should be open about the data and algorithms behind their systems and should clearly communicate to stakeholders how the technology is being used.

Although there is currently no legal requirement for UK IT contractors to declare their use of AI tools, the guidance and recommendations around the ethical use of AI are growing. Transparency and communication around AI use are becoming increasingly important, and future regulations or guidelines may well require more explicit disclosure of AI use in certain contexts.

The future (needs you)…

The UK government's white paper on AI regulation, published on March 29th 2023, supports this trend towards enhancing transparency and communication around AI use, promoting responsible innovation, and increasing public trust in the technology. As AI becomes more integrated into society, it is vital that it is used in an ethical and responsible manner, and the UK is making progress towards this objective. Although it is hard to imagine them not doing so, the nation will need its IT contractors to get behind this technology and sign up to this objective if the UK's progress is to continue, rather than be slowed down or thrown off track by regulatory rulings.


Written by Gerrish Legal

Gerrish Legal is a digital commercial law firm based in London, Stockholm and Paris. Gerrish Legal gives contractors the trusted legal support they need to run their business in all areas of commercial, contract, intellectual property and data protection law. Unlike traditional law firms, we follow your legal matter from A to Z. From the moment contractors partner with us, they can rest assured their legal needs will be looked after with the utmost care. We stay on top of the latest trends, embrace innovation, and provide flexible legal advice in accordance with our contractors’ budgets and deadlines.