Meta has withheld its upcoming multimodal Llama AI model from the EU, snubbing the bloc’s regulatory environment.
Facebook parent Meta has suspended the EU release of its latest AI model due to regulatory concerns. Its plan to train AI models on user data has been legally contested in 11 European countries. Meta plans to release the new multimodal Llama model over the next few months, but not in the EU. This means European companies won’t be able to use it, even though it is being released under an open license.
Meta’s move is a snub to the EU and marks the latest showdown between a tech giant and EU regulators. It also sends a message to countries like India that are still figuring out how to regulate AI.
Background
Meta had been devising a new privacy policy under which it would use data from its users to train AI models. The policy would enable it to draw on Instagram and Facebook user data collected since 2007, including posts and photos but not private messages.
Meta justified the policy on the grounds that users had opted to make that data public on their social media profiles, and it contends that it has a legitimate interest in processing the data to build AI technology services. Under the EU General Data Protection Regulation (GDPR), legitimate interest is a lawful basis for data processing. However, this claim has been criticised because Meta has not divulged what the AI technology is for – using data for “AI technology” is a broad and vague purpose – and the company has faced several data protection complaints.
The tech giant has now announced that it has suspended the release in Europe of the AI model and the virtual assistant built on it, citing an unpredictable regulatory environment.
Llama 2 is a large language model, i.e., a foundation model trained on immense amounts of data, which allows it to understand and generate natural language and other types of content and to carry out a wide range of tasks. Natural language processing, the relevant subfield of AI, uses machine learning to enable computers to understand and communicate in human language. The upcoming Llama model goes further: it is multimodal, handling video, audio, images, and text. Meta plans to use the new model across a range of products, including smartphones and the Ray-Ban Meta smart glasses, which pair ordinary glasses with an AI assistant.
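To make the terminology concrete, the snippet below shows what “understanding and generating natural language” means in practice. It is a minimal sketch using the Hugging Face transformers library with the small, openly downloadable GPT-2 checkpoint as a stand-in; the model choice and prompt are purely illustrative and have no connection to Meta’s products.

```python
# A minimal text-generation sketch using the Hugging Face `transformers`
# pipeline. GPT-2 is only a small, freely available stand-in for a large
# language model; it is not one of Meta's Llama models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely text: the
# core capability that foundation models scale up across many tasks.
result = generator("AI regulation in the EU is", max_new_tokens=40)
print(result[0]["generated_text"])
```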
Analysis
Tensions have been brewing for a while between tech companies and EU regulators, who enforce relatively stringent privacy and antitrust rules. Tech companies argue that these regulations hurt European companies’ competitiveness and disadvantage European consumers.
Last week, the EU set out compliance deadlines for AI companies under its stringent new AI Act. The act covers issues like copyright, transparency, and AI functions such as predictive policing – the use of AI to predict criminal activity. Tech companies operating in the EU will have to comply with the rules by August 2026.
Meta’s move follows a similar decision by Apple, which said it was planning to withhold the release of Apple Intelligence in the EU due to regulatory concerns – specifically the Digital Markets Act. Apple Intelligence is a set of AI features for Apple products, launched as the company looks to catch up with the AI offerings of fellow tech giants like Google.
Meta has followed a unique approach by allowing developers and software experts across the world open access to the code behind its most advanced AI technology. While AI leaders like Google and OpenAI carefully guard their code, Meta has decided to give its latest AI technology away for free. This means other companies can use its large language model Llama 2 to build customised chatbots, as sketched below.
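As a rough illustration of what that open access enables, here is a minimal sketch of a customised chatbot built on Llama 2’s openly released chat weights. It assumes the Hugging Face transformers library and the gated checkpoint meta-llama/Llama-2-7b-chat-hf, which can only be downloaded after accepting Meta’s licence terms; the system prompt and question are placeholders.

```python
# A rough sketch of a customised chatbot on Llama 2's open weights,
# assuming the gated Hugging Face checkpoint below and `transformers`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Llama 2's chat variants expect [INST] ... [/INST] wrapping; the
# optional <<SYS>> block is where a company customises the bot.
prompt = (
    "[INST] <<SYS>>\n"
    "You are a concise customer-support assistant.\n"
    "<</SYS>>\n\n"
    "How do I reset my password? [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on the developer’s own hardware, the same model can also be fine-tuned on proprietary data, which is what building “customised” chatbots typically involves in practice.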
Other tech companies cite safety concerns over sharing advanced AI technology with the public, since generative AI poses risks such as fuelling online disinformation. Meta counters that letting outside programmers access the technology helps improve it, because a wider community can identify potential issues.
Cutting-edge large language models like the one behind ChatGPT can cost billions of dollars to build. Critics maintain that companies restricting access to their AI technology are trying to keep out competition.
Meta points out that rivals like Google and OpenAI are already training on European data. It also contends that its AI models will improve by learning about the region’s cultural, social, and historical nuances. However, it cannot be denied that Meta has access to an enormous amount of personal user data. Further, each time a user interacts with an AI program, the program is likely to collect and analyse that interaction to improve its output.
For countries like India that are still testing the waters, it is an indication that regulatory hurdles could lead tech companies to exclude a market from their advanced AI technology. The Indian government recently issued an advisory requiring major tech firms to get government permission before releasing new AI models, which sparked criticism from entrepreneurs and investors. The advisory was subsequently revised in an attempt to encourage AI development: intermediaries no longer have to obtain government permission before deploying AI models but are encouraged to label them to indicate their reliability. While the advisory is not legally binding, it signals the approach India is likely to take in regulating AI, which seems to be veering towards industry self-regulation based on transparency and accountability.
Assessment
- Meta’s move raises questions about the costs of improving AI technology. While open access and training on user data can make models better, they also bring privacy concerns and the risk of AI being misused.
- Additionally, it poses a conundrum for regulators, as measures to ensure user privacy and safety risk spooking leading AI companies into taking their technology elsewhere.
- Meta’s decision stands to affect not just European companies but also companies elsewhere that plan to build products and services on these models, as they won’t be able to sell them in one of the world’s largest digital markets.