Meta's AI-Powered Ray-Bans Portend Privacy Issues

Published: 23/11/2024   Category: security




AI will make Meta's smart glasses more attractive to consumers. But can the company balance cutting-edge functionality with responsible data stewardship?



Meta is rolling out an early access program for its upcoming AI-integrated smart glasses, opening up a wealth of new functionalities and privacy concerns for users.
The second generation of Meta Ray-Bans will include Meta AI, the company's proprietary multimodal AI assistant. By using the wake phrase "Hey Meta," users will be able to control features or get information about what they're seeing — language translations, outfit recommendations, and more — in real time.
The data the company collects in order to provide those services, however, is extensive, and its privacy policies leave room for interpretation.
"Having negotiated data processing agreements hundreds of times," warns Heather Shoemaker, CEO and founder at Language I/O, "I can tell you there's reason to be concerned that in the future, things might be done with this data that we don't want to be done."
Meta has not yet responded to a request for comment from Dark Reading.
Meta released its first generation of Ray-Ban Stories in 2021. For $299, wearers could snap photos, record video, or take phone calls, all from their spectacles.
From the beginning, perhaps with some reputational self-awareness, the developers built in a number of features for the privacy-conscious: encryption, data-sharing controls, a physical on-off switch for the camera, a light that shone whenever the camera was in use, and more.
Evidently, those privacy features weren't enough to convince people to actually use the product. According to a company document obtained by The Wall Street Journal, Ray-Ban Stories fell somewhere around 20% short of sales targets, and even those that were bought started collecting dust. A year and a half after launch, only 10% were still being actively used.
To zhuzh it up a little, the second-generation model will include far more diverse, AI-driven functionality. But that functionality will come at a cost — and in the Meta tradition, it won't be a monetary cost, but a privacy one.
"It changes the picture because modern AI is based on neural networks that function much like the human brain. And to improve and get better and learn, they need as much data as they can get their figurative fingers into," Shoemaker says.
If a user asks the AI assistant riding their face a question about what they're looking at, a photo is sent to Meta's cloud servers for processing.
According to the Look and Ask feature's FAQ, "All photos processed with AI are stored and used to improve Meta products, and will be used to train Meta's AI with help from trained reviewers. Processing with AI includes the contents of your photos, like objects and text. This information will be collected, used and retained in accordance with Meta's Privacy Policy."
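The FAQ doesn't publish a technical specification, but the flow it describes (a captured photo and a spoken query leaving the glasses, with the results retained for model training) can be sketched roughly as follows. This is a minimal illustration with hypothetical field names and structures, not Meta's actual API.

# Illustrative sketch of the "Look and Ask" data flow described in the FAQ.
# All field names, structures, and values here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LookAndAskRequest:
    """Data that plausibly leaves the glasses when the wearer says 'Hey Meta'."""
    photo_jpeg: bytes        # frame captured by the on-board camera
    query_text: str          # transcribed spoken question
    device_id: str = "rayban-gen2-demo"   # hardware identifier (hypothetical)
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def process_in_cloud(req: LookAndAskRequest) -> dict:
    """Stand-in for server-side processing.

    Per the FAQ, the photo is not only analyzed to answer the query but also
    stored, checked by trained reviewers, and used to train Meta's AI.
    """
    answer = {"text": "placeholder answer about what the wearer is looking at"}
    retained_for_training = {
        "photo": req.photo_jpeg,
        "extracted_contents": ["objects", "text"],  # "the contents of your photos"
        "query": req.query_text,
        "device_id": req.device_id,
        "captured_at": req.captured_at,
    }
    return {"answer": answer, "retained": retained_for_training}


req = LookAndAskRequest(photo_jpeg=b"\xff\xd8\xff", query_text="What am I looking at?")
print(process_in_cloud(req)["answer"]["text"])

The point of the sketch is simply that answering the question and feeding the training pipeline are two separate destinations for the same capture, which is exactly the distinction the FAQ makes.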
A look at the privacy policy indicates that when the glasses are used to take a photo or video, much of the information that might be collected and sent to Meta is optional. Neither location services, nor usage data, nor the media itself is necessarily sent to company servers — though, by the same token, users who want to upload their media or geotag it will need to enable those kinds of sharing.
Other shared information includes metadata, data shared with Meta by third-party apps, and various forms of essential data that the user cannot opt out of sharing.
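Taken together, the policy describes roughly two buckets: sharing the wearer can toggle (location, usage data, uploading the media itself) and essential data that always flows to Meta. A rough, hypothetical model of that split might look like the sketch below; the category names and defaults are illustrative, not drawn from Meta's software.

# Hypothetical model of the sharing categories described in the privacy policy.
# Category names, defaults, and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class GlassesSharingSettings:
    # Optional: off unless the wearer enables the feature that needs them
    location_services: bool = False
    usage_data: bool = False
    upload_media: bool = False

    # Essential: always shared, with no opt-out exposed to the wearer
    ESSENTIAL = (
        "crash logs",
        "battery and Wi-Fi status",
        "data used to respond to potential abuse or policy violations",
    )

    def shared_with_meta(self) -> list:
        """Everything that ends up on Meta's servers under this configuration."""
        optional = [
            name
            for name, enabled in [
                ("location", self.location_services),
                ("usage data", self.usage_data),
                ("photos and videos", self.upload_media),
            ]
            if enabled
        ]
        return optional + list(self.ESSENTIAL)


# Even with every optional toggle left off, the essential bucket still flows.
print(GlassesSharingSettings().shared_with_meta())

It is that always-on essential bucket, rather than the user-controlled toggles, that draws Shoemaker's attention.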
Though much of it is innocuous — crash logs, battery and Wi-Fi status, and so on — some of that essential data may be deceptively invasive, Shoemaker warns. As one example, she points to one line item in the company's information-sharing documentation: "Data used to respond proactively or reactively to any potential abuse or policy violations."
"That is pretty broad, right? They're saying that they need to protect you from abuse or policy violations, but what are they storing exactly to determine whether you or others are actually abusing these policies?" she asks. It isn't that these policies are malicious, she says, but that they leave too much to the imagination.
"I'm not saying that Meta shouldn't try to prevent abuse, but give us a little more information about how you're doing that. Because when you just make a blanket statement about collecting other data in order to protect you, that is just way too ambiguous and gives them license to potentially store things that we don't want them to store," she says.


