This artificial skin could give ‘human-like’ sensitivity to robots
Giving robots a sense of touch
Image for representation purposes only (Maximalfocus / Unsplash)
Robots are getting better at seeing, hearing, and moving, but touch has always been the missing piece. At CES 2026, Ensuring Technology showcased a new kind of artificial skin that could finally give robots something close to human-like sensitivity, helping them feel the world instead of just bumping into it.
The company’s latest tactile sensing tech is designed to help robots understand pressure, texture, and contact in ways that go beyond simple touch sensors. At the center of the announcement are two products called Tacta and HexSkin, both aimed at solving a long-standing problem in robotics.
Humans rely heavily on touch to grasp objects, apply the right amount of force, and adapt instantly when something slips. Robots, on the other hand, usually operate with limited feedback. Ensuring Technology’s goal is to close that gap by recreating how human skin senses and processes touch.
Tacta is a multi-dimensional tactile sensor designed for robotic hands and fingers. Each square centimeter packs 361 sensing elements, all sampling data at 1,000 Hz, which the company says delivers sensitivity on par with human touch. Despite that density, the sensor is just 4.5 mm thick and combines sensing, data processing, and edge computing in a single module.
At CES, Ensuring demonstrated a fully covered robotic hand using Tacta, with 1,956 sensing elements spread across fingers and palm, effectively creating a complete network of tactile awareness.
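To put those figures in perspective, here is a back-of-envelope sketch of the raw data the demo hand would produce. The per-reading size of 16 bits is an assumption for illustration, not a published spec; the element count and sample rate come from the article above.

```python
# Rough throughput estimate for the Tacta-covered demo hand.
# Assumption (not from Ensuring Technology): each element reports
# one 16-bit pressure reading per sample.

SAMPLE_RATE_HZ = 1000    # samples per second, per element (stated)
HAND_ELEMENTS = 1956     # sensing elements across fingers and palm (stated)
BYTES_PER_READING = 2    # assumed 16-bit readings

readings_per_second = HAND_ELEMENTS * SAMPLE_RATE_HZ
data_rate_mb_s = readings_per_second * BYTES_PER_READING / 1e6

print(f"{readings_per_second:,} readings/s")   # 1,956,000 readings/s
print(f"~{data_rate_mb_s:.1f} MB/s raw data")  # ~3.9 MB/s raw data
```

Roughly two million readings per second from a single hand helps explain why the module bundles edge computing alongside the sensors rather than streaming everything to a host.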
HexSkin takes the idea further by scaling touch across larger surfaces. Built with a hexagonal, tile-like design, HexSkin can wrap around complex curved shapes, making it suitable for humanoid robots.
CES 2026 has been packed with robots that show just how fast the field is moving, and why better touch matters. We’ve seen LG’s CLOiD home robot pitched as a household helper for chores like laundry and breakfast, alongside humanoid robots that can play tennis with impressive coordination and Boston Dynamics’ Atlas, which showed off advanced balance and movement.
While these machines already see and move remarkably well, most still rely heavily on vision and rigid sensors. Adding a human-like sense of touch through artificial skin could be what finally makes robots feel a little more human.