New Delhi: The UK’s data protection regulator has contacted Meta after reports that outsourced workers were able to view highly sensitive videos captured by the company’s AI-powered smart glasses. The UK Information Commissioner’s Office (ICO) said it would seek clarification on how the tech giant protects user data after a media investigation raised privacy concerns.
The investigation, by a Swedish newspaper, alleged that contract workers reviewing material used to train the AI encountered footage of intimate moments captured by people wearing the firm's smart glasses. Some videos were allegedly recorded in private settings, raising doubts about how transparent the process is and how aware users are of it.
UK regulator seeks answers
The Information Commissioner’s Office (ICO), the UK data watchdog, said the claims were of interest and that it would write to Meta seeking further clarification on its data-handling practices.
The regulator says that gadgets that handle personal data, such as smart glasses, must give owners control and a clear explanation of how their personal data is collected and used. The ICO emphasised that businesses should be transparent about the possibility of an individual’s content being reviewed by humans to improve AI systems.
Why human review happens
Meta acknowledged that in some cases contractors review images, videos, and transcripts captured through its AI tools to help improve the performance of its systems. The firm said this kind of review is outlined in its privacy policies and is carried out to improve the user experience.
The smart glasses were created in collaboration with eyewear brand Ray-Ban and can be used to take photos or videos and to ask the built-in artificial intelligence about what is in the wearer’s line of sight. Recording can be started by voice command or manually. The frame has a small indicator light that is meant to alert people nearby when the camera is in use.
Outsourced workers reviewing content
The Swedish probe indicated that the footage was occasionally reviewed by data annotators employed by Sama, a Nairobi-based outsourcing firm. Their job was to label images and verify the accuracy of the AI’s responses to users’ questions.
Employees told the BBC that the content they reviewed could be highly sensitive. Some of the clips allegedly captured individuals in intimate spaces, such as bedrooms and bathrooms. In some instances, faces were reportedly visible even when privacy filters, which blur identifying features, had been applied.
Text translation, describing the surrounding environment, and answering questions about objects in view are some of the functions of wearable AI devices, which have rapidly gained attention. Such abilities may be particularly helpful to people who are blind or partially sighted.
Privacy advocates, however, caution that camera-equipped gadgets can easily be abused. Concerns have already been raised about individuals being filmed without their consent. Meta advises users to notify others when they are recording and not to record in intimate settings.
The company says that protecting user data is a priority and that it is continually enhancing its systems to provide stronger privacy safeguards. Nevertheless, regulators are taking an increasingly close look at how AI-enabled devices gather and process personal data.