Meta has quietly terminated its partnership with the outsourcing firm Sama, which had been contracted to label image data collected by Ray‑Ban smart glasses as training material for Meta's generative AI systems. Sama subsequently announced the layoff of 1,108 employees, some of whom said they were fired in "retaliation" for disclosing to the media that the videos they reviewed contained large amounts of highly private content.

The incident first came to light in February this year, when Sama employees in Nairobi, Kenya, told two Swedish newspapers that their work included labeling videos from smart glasses, often recorded without the knowledge of the people in the footage. The Ray‑Ban smart glasses carry a built-in AI assistant and continuously record audio and video, some of which is used as AI training data; human annotators then supplement and correct content the AI struggles to understand.
Meta stated that its terms of service explain to users how the relevant data will be used, and that the glasses require explicit user authorization before AI mode is enabled. However, multiple Sama employees said the footage they handled included not only financial details such as bank account information and private conversations, but also nudity in bathrooms and scenes of intimate behavior, far beyond what the public would expect from routine data collection.
After the investigative reports were published, Meta announced it was canceling its contract with Sama, saying the company "failed to meet Meta's standards." Sama immediately responded that it had never received any formal feedback that its work was substandard. Meanwhile, some employees revealed that the company, under the guise of upgraded security measures, left them with almost no work to do, and was suspected of using internal investigations to target "whistleblowers" who had spoken to the media.
This is not the first time Sama has been embroiled in AI-related labor and ethics controversies. The California-based outsourcing company was commissioned by OpenAI to provide training data services for ChatGPT ahead of its official launch in 2022. To reduce harmful output from the chatbot, Sama had Kenyan employees spend long stretches reviewing and filtering extremely disturbing text and images for less than $2 an hour, work that was later reported to have caused significant harm to their mental health. That same year, Meta and Sama also faced a lawsuit alleging that they recruited workers through misleading job postings, amounting to a form of human trafficking, and fired employees who tried to form a union.
Beyond labor rights, the incident has once again pushed the privacy risks of smart glasses to the forefront. As early as the Google Glass era, the device's recording capability drew strong public opposition over the possibility of "invisible surveillance" in public spaces. Now that Meta has relaunched the category with products that are low-key and close in appearance to everyday glasses, users have been photographed wearing them during court proceedings, others have recorded during police operations, and students have been caught using smart glasses to cheat on exams.
As wearable devices continue to gain momentum, other technology giants are accelerating their own plans. Apple is reportedly testing as many as four smart glasses designs, aiming to compete head-on with Meta's Ray‑Ban products. As AI becomes deeply integrated with sensing devices, striking a balance among innovative experience, personal privacy, and labor rights is becoming a core problem the technology industry can no longer avoid.