Meta’s AI Glasses Feed Raw Home Footage to Kenyan Workers, No Opt-Out
A low-angle shot of a woman wearing Ray-Ban Meta glasses in her bedroom, looking at her reflection in a full-length mirror as she adjusts her shirt — her private moment captured and streamed live to Nairobi. (AI illustration)
- ★Nude scenes and bank details in training data
- ★Kenyan workers label private Western household video
- ★No user setting stops AI data sharing
Meta’s smart glasses market themselves as privacy-conscious wearables. A tap of the finger captures what the user sees, and nothing leaves the device without consent — or so the story goes. New reporting reveals the opposite pipeline: private recordings from Western homes stream directly to data workers in Nairobi, Kenya, where the footage is sifted, labeled, and fed back into Meta’s AI training loop.
The material is startlingly intimate. According to workers at the outsourcing firm Sama, the datasets contain nude scenes, sex videos, and banking information — the raw exhaust of everyday domestic life. “We see everything – from living rooms to naked bodies,” a data worker told The Decoder. “It is not just greetings, it can be very dark things as well.”
The core tension is not simply that a tech giant uses offshore annotators — that has been standard practice for years. The tension is that Meta’s Ray-Ban smart glasses process speech, text, images, and video automatically, and users cannot turn that AI processing off. The privacy policy may reference active consent for voice recordings, but the AI features that generate training data operate with an always-on logic that most buyers likely do not understand when they put the glasses on.
This is the uncomfortable reality beneath the polished demo. The Ray-Ban Meta glasses ship with a camera and microphone array that feed multimodal models, and every query, glance, or accidental trigger can become a training artifact. That artifact travels thousands of miles to a third-party facility where minimum-wage workers annotate the most private moments of people who assumed they were simply wearing a cool pair of sunglasses.
Europe’s privacy regulators are already circling. GDPR requires transparent data processing, clear legal bases, and meaningful user control — three things that automated, non-consensual AI annotation pipelines inherently struggle to satisfy. If regulators open a formal inquiry into Meta’s data annotation practices, the company could face not just fines but a mandated redesign of how its wearable AI operates in the EU market.
The broader industry implication is equally sharp. Almost every major AI company relies on distributed annotation workforces to refine multimodal models. Meta’s exposure is less a unique scandal than a preview of how privacy law will collide with the messy, human-powered reality of AI training. The product marketing says your data stays private; the supply chain says someone in Nairobi has seen your living room.