Researchers from Beihang University and the Beijing Institute of Technology in China have developed a brain-inspired chip that enables robots to detect and react to movement at four times the speed of human visual processing.
The chip's design mirrors the structure of the human lateral geniculate nucleus (LGN), prioritizing fast-moving or rapidly changing visual elements as it processes visual data.
Traditional robotic vision systems rely on capturing static frames and detecting motion from brightness changes, which can introduce processing delays of over half a second.
In contrast, the new neuromorphic module detects light changes over time, significantly speeding up motion tracking. In simulated driving tests, robotic arms equipped with the module cut processing delays by 75% and doubled their motion-tracking accuracy.
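The latency gap between the two approaches can be sketched in a few lines. The snippet below is an illustrative toy, not the chip's actual pipeline: `frame_difference` mimics conventional frame-based motion detection, while `event_stream` mimics the event-style scheme in which each pixel independently signals a change as soon as its log intensity drifts past a contrast threshold. All function names and thresholds here are hypothetical.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=10):
    """Frame-based detection: compare two full frames' brightness and
    flag pixels that changed. Motion is only visible after a whole
    frame interval has elapsed, which is the source of the latency."""
    return np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > threshold

def event_stream(ref_log, curr_frame, contrast=0.2):
    """Event-style detection: each pixel emits a +1/-1 event when its
    log intensity moves past a contrast threshold relative to the last
    event, then resets its reference. Returns (events, new reference)."""
    log_i = np.log1p(curr_frame.astype(float))
    diff = log_i - ref_log
    events = np.zeros_like(diff, dtype=int)
    events[diff > contrast] = 1
    events[diff < -contrast] = -1
    ref_log = np.where(events != 0, log_i, ref_log)  # reset fired pixels
    return events, ref_log

# Toy demo: a bright spot moves one pixel to the right.
f0 = np.zeros((4, 4), dtype=np.uint8); f0[1, 1] = 200
f1 = np.zeros((4, 4), dtype=np.uint8); f1[1, 2] = 200
moved = frame_difference(f0, f1)
ev, _ = event_stream(np.log1p(f0.astype(float)), f1)
print(moved[1, 1], moved[1, 2])  # True True
print(ev[1, 1], ev[1, 2])        # -1 1 (off event at old spot, on event at new)
```

In hardware, the event path avoids waiting for full frames entirely: pixels report asynchronously, so a fast-moving edge is seen the moment it crosses a pixel rather than at the next frame boundary.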
“This study elevates video analysis speed beyond human levels by applying the brain’s visual processing principles to a semiconductor chip,” the research team said.
The chip could allow autonomous vehicles, drones, and human-robot interfaces to respond faster to gestures and facial expressions. The system uses optical-flow algorithms to interpret images; it still struggles in visually complicated environments, but it marks considerable progress in real-time robotic perception for controlled and domestic settings.
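Optical flow estimates how fast the image is moving from how its brightness changes. The following is a minimal one-dimensional Lucas-Kanade-style sketch under the standard brightness-constancy assumption (illustrative only, not the team's implementation): a constant velocity v satisfies I_x · v + I_t ≈ 0, which is solved in the least-squares sense.

```python
import numpy as np

def optical_flow_1d(i0, i1):
    """Estimate one translation velocity (pixels/frame) for a 1-D
    signal via brightness constancy: I_x * v + I_t = 0, solved in
    the least-squares sense as v = -sum(I_x*I_t) / sum(I_x**2)."""
    ix = np.gradient(i0.astype(float))        # spatial gradient
    it = i1.astype(float) - i0.astype(float)  # temporal gradient
    return -np.sum(ix * it) / np.sum(ix ** 2)

# Toy demo: a smooth Gaussian bump shifted right by one pixel.
x = np.arange(50, dtype=float)
i0 = np.exp(-0.5 * ((x - 20) / 4) ** 2)
i1 = np.exp(-0.5 * ((x - 21) / 4) ** 2)
v = optical_flow_1d(i0, i1)
print(v)  # close to 1.0: one pixel of motion per frame
```

The same principle is what makes visually cluttered scenes hard: when many textures move at once, the brightness-constancy equation becomes ambiguous, and the estimate degrades.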