This robot learned to lip sync like humans by watching YouTube
Researchers trained the robot using self-supervised audio-visual learning
Researchers at Columbia Engineering have trained a human-like robot named Emo to lip-sync speech and songs by studying online videos, showing how machines can now learn complex human behaviour simply by observing it.
Emo is not a full humanoid body but a highly realistic robotic face built to explore how humans communicate. The face is covered with silicone skin and driven by 26 independently controlled facial motors that move the lips, jaw, and cheeks.
These motors allow Emo to form detailed mouth shapes that cover 24 consonants and 16 vowels, which is critical for natural speech and singing. The goal was to reduce the uncanny valley effect, where robots look almost human but still feel unsettling because their facial movements do not match their voice.
How Emo learned to lip sync like a human
The learning process happened in stages. First, Emo explored its own face by moving its motors while watching itself in a mirror. This helped the system understand how motor commands change facial shapes.
Researchers then introduced a learning pipeline that connects sound to movement. Emo watched hours of YouTube videos of people speaking and singing, while an AI model analysed the relationship between audio and visible lip motion.
Instead of focusing on language or meaning, the system studied the raw sounds of speech. A facial action transformer converted those learned patterns into real-time motor commands.
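For readers curious what an audio-to-motor mapping of this kind might look like, here is a minimal, purely illustrative PyTorch sketch. It is not the Columbia team's actual model or motor interface; the layer sizes, the 80-band mel input, and the 26-motor output are assumptions chosen to mirror the article's description of a transformer that turns audio frames into facial motor commands.

```python
# Conceptual sketch only (not the researchers' code): a small transformer
# reads per-frame audio features and predicts a position for each facial motor.
import torch
import torch.nn as nn

class AudioToMotorSketch(nn.Module):
    def __init__(self, n_mels=80, d_model=128, n_motors=26):
        super().__init__()
        self.audio_proj = nn.Linear(n_mels, d_model)        # embed audio frames
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.motor_head = nn.Linear(d_model, n_motors)       # one output per motor

    def forward(self, mel):
        # mel: (batch, frames, n_mels) audio features
        x = self.audio_proj(mel)
        x = self.encoder(x)
        return torch.sigmoid(self.motor_head(x))             # motor targets in [0, 1]

# Dummy pass: one clip of 100 audio frames with 80 mel bands
model = AudioToMotorSketch()
commands = model(torch.randn(1, 100, 80))
print(commands.shape)  # torch.Size([1, 100, 26])
```

In a real system, those per-frame outputs would be smoothed and sent to the motor controllers in real time; the sketch only shows the shape of the audio-in, motor-commands-out mapping the article describes.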
This approach allowed Emo to lip sync not only in English but also in languages it was never trained on, including French, Arabic, and Chinese. The same method worked for singing, which is harder because of stretched vowels and rhythm changes.
Researchers say this matters because future robots will need to communicate naturally if they are going to work alongside people. The advancement arrives at a moment when interest in robots for homes and workplaces is climbing fast.
At CES 2026, that momentum was on full display, with demos ranging from Boston Dynamics’ Atlas humanoid, which is ready to enter the workplace, to SwitchBot’s household-focused robot that can cook meals and do your laundry, and LG’s upcoming home assistant robot designed to make everyday life easier.
Add advances like artificial skin that gives robots human-like sensitivity, pair them with realistic lip syncing, and it is easy to see how robots are starting to feel less like machines and more like social companions. Emo is still a research project, but it shows how robots may one day learn human skills the same way we do: by watching and listening.
MacBook Pro models with more powerful M5 series chips could be right around the corner
A new Apple service aimed at creative professionals could signal the imminent arrival of M5 Pro and M5 Max MacBook Pros.
Apple’s first big hardware launch of the year could be new MacBook Pros powered by the purported M5 Pro and M5 Max chips. According to a Macworld report, the Cupertino giant could unveil the new MacBook Pro models alongside Creator Studio.
For those catching up, Creator Studio is Apple’s latest service that bundles the company’s pro-grade creative apps, including Final Cut Pro and Logic Pro (along with four others), in a monthly subscription that costs $12.99. It will be available via the App Store from January 28, 2026.
Samsung won’t charge you for Galaxy AI features (or at least some of them)
Samsung redraws the fine print on Galaxy AI, hinting at a future where not all smarts come free.
Along with the Galaxy S24 lineup, Samsung launched Galaxy AI, a useful suite of AI-based features for Samsung’s flagship users. At the launch, the Korean giant said that both the on-device and cloud-based AI tools would remain complimentary “through 2025.”
However, the company has reportedly changed the footnotes on the official Galaxy AI landing page, opening the door to charging for some, if not all, AI features. “Galaxy AI basic features provided by Samsung are free,” the updated footnote reads (via Android Authority).
You can now use Gemini’s Thinking model without worrying about Pro model limits
The models now have higher, separate limits on Google AI Pro and Ultra plans.
Google has quietly reworked Gemini’s usage limits, splitting the shared pool and boosting the individual caps for the Thinking and Pro models. At launch, both models had the same daily quota, meaning every prompt used for complex reasoning or advanced math and code counted against a single limit. But that’s no longer the case.
According to 9to5Google, Google has now split the usage limits, giving each model its own separate allowance. For subscribers on the Google AI Pro plan, the Thinking model now has a limit of 300 prompts per day, while the Pro model remains capped at 100 prompts daily. On the Google AI Ultra plan, these numbers jump to 1,500 Thinking and 500 Pro prompts per day. Users on the free tier still have access to both models, though Google simply labels it as “Basic access – daily limits may change frequently” on its support page.
