Bionic LiDAR system achieves beyond-retinal resolution through adaptive focusing


In a recent study, researchers from China have developed a chip-scale LiDAR system that mimics the human eye's foveation by dynamically concentrating high-resolution sensing on regions of interest (ROIs) while maintaining broad awareness across the full field of view.

The study is published in the journal Nature Communications.

LiDAR systems power machine vision in self-driving cars, drones, and robots by firing laser beams to map 3D scenes with millimeter precision. The human eye packs its densest photoreceptors into the fovea, the spot of sharp central vision, and shifts its gaze to whatever matters most. By contrast, most LiDARs use rigid parallel beams or scan patterns that spread uniform, often coarse, resolution everywhere, so boosting detail means adding channels across the board, which explodes cost, power, and complexity.

The team's design achieves "beyond-retinal" angular resolution of 0.012° in ROIs. That is roughly 1.4 times finer in angle than the eye's approximate 0.017° limit, or about twice the density of resolvable points. In practice, the system can separate points at smaller angular spacings than the human eye can, like picking out fine details on a distant road sign, and it reallocates parallel sensing channels on demand rather than scaling hardware by brute force.
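
As a rough back-of-the-envelope check (the 100 m standoff below is our illustrative assumption, not a figure from the study), the lateral detail an angular resolution buys at a given range follows from small-angle geometry:

```python
import math

def transverse_resolution(angular_res_deg: float, range_m: float) -> float:
    """Smallest lateral separation resolvable at a given range,
    via the small-angle approximation s = R * theta."""
    return range_m * math.radians(angular_res_deg)

# 100 m standoff is an illustrative choice, not a figure from the paper.
for theta_deg in (0.012, 0.017):  # system ROI resolution vs. nominal retinal limit
    s_cm = transverse_resolution(theta_deg, 100.0) * 100.0
    print(f"{theta_deg} deg at 100 m -> {s_cm:.1f} cm")
```

At 100 m, 0.012° resolves features about 2.1 cm apart, versus roughly 3 cm for the retinal limit.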

Phys.org spoke to the study's co-authors, Ruixuan Chen and Xingjun Wang, from Peking University's School of Electronics.

"The motivation comes from a practical mismatch between biological and machine perception," the researchers explained. "The human eye achieves high acuity and energy efficiency by reallocating attention—maintaining broad awareness while concentrating resources on what matters. By contrast, LiDAR resolution is often pursued by 'more channels everywhere,' which quickly becomes expensive and power-hungry."

The scaling problem

Machine vision systems have expanded beyond traditional cameras to include LiDAR sensors, which enable precise distance measurement and 3D environmental perception. Unlike passive cameras, however, LiDAR demands emission and reception hardware for every pixel, capping achievable resolution.

Current approaches to improving LiDAR resolution face a critical bottleneck: duplicating channels delivers only linear resolution gains while driving superlinear growth in complexity, power, and cost.

"First, resolution is tightly coupled to hardware channel count and scanning mechanics. Second, LiDAR is an active sensor: every pixel effectively costs both transmit and receive resources," the researchers explained. "That makes adaptive focusing fundamentally harder than in passive imaging, because you must manage optical power, receiver sensitivity, and digitization bandwidth while meeting eye-safety constraints."

For coherent frequency-modulated continuous-wave (FMCW) LiDAR, this challenge is particularly acute. Each coherent channel requires stable frequency control, sophisticated reception hardware, and tight calibration, which makes massive channel duplication far harder to justify economically.

A biomimetic solution

The researchers' solution combines two key technologies: an agile external-cavity laser (ECL) with a tuning range of more than 100 nm, and reconfigurable electro-optic frequency combs built on a thin-film lithium niobate (TFLN) platform.

The ECL provides high-quality FMCW chirp signals for coherent ranging and acts as a wavelength-controlled beam-steering mechanism. By tuning the center wavelength, the system can rapidly redirect its viewing direction within a wide field of view.
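
The article does not detail the dispersive optics, but wavelength-controlled beam steering is commonly realized by passing the tunable light through a dispersive element such as a diffraction grating. A minimal sketch under that assumption (the grating pitch, incidence angle, and diffraction order are illustrative values, not the paper's):

```python
import math

def grating_steer_angle_deg(wavelength_nm: float, pitch_nm: float = 1200.0,
                            incidence_deg: float = 30.0, order: int = 1) -> float:
    """Diffraction angle from the grating equation
    sin(theta_out) = m * lambda / pitch - sin(theta_in)."""
    s = order * wavelength_nm / pitch_nm - math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# Sweeping a telecom-band ECL across 100 nm of tuning redirects the beam:
for wl_nm in (1500, 1550, 1600):  # assumed wavelengths, for illustration
    print(f"{wl_nm} nm -> {grating_steer_angle_deg(wl_nm):.1f} deg")
```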

The electro-optic comb then generates multiple parallel FMCW carriers from the same chirped laser source. Crucially, adjusting the radio-frequency drive conditions changes the comb spacing.

"This is what enables 'zoom'—we can increase the point density in a selected region (finer sampling) or relax it (coarser sampling) without changing the optics or adding channels," the researchers added.

The system employs what the researchers call "micro-parallelism." This means using a moderate number of physical channels to achieve the equivalent of far more scanning lines through dynamic repositioning.
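
The accounting behind micro-parallelism is simple to write down; the channel counts in this toy model are ours, purely for illustration:

```python
def effective_scan_lines(physical_channels: int, retargets_per_frame: int) -> int:
    """Toy model: a small bank of parallel channels, retargeted several
    times within one frame, emulates a much larger line count."""
    return physical_channels * retargets_per_frame

# e.g., 8 physical channels repositioned 16 times act like a 128-line unit
print(effective_scan_lines(8, 16))  # -> 128
```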

Experimental validation

The team demonstrated the system's capabilities across three experimental scenarios, achieving angular resolution of 0.012° in focused regions—surpassing the human retina's nominal limit.

In static scene imaging, the system captured a simulated road environment at 54 by 71 pixels for full field-of-view scans and 17 by 71 pixels for locally focused scans. The focused scans quadrupled the vertical point density, revealing obstacles that the coarser full-field scan had missed, with 90% of points accurate to within 1.3 cm.

The researchers also demonstrated LiDAR-camera fusion, creating colorized point clouds that combine precise 3D geometry with RGB appearance data. When comparing standard versus focused scans, color histogram alignment improved by approximately 10%, indicating better correspondence between 3D points and image pixels.
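
The article does not specify the alignment metric; one common way to score how well projected point-cloud colors match image colors is histogram intersection, sketched here with made-up bin counts:

```python
import numpy as np

def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
    """Overlap between two color histograms after normalization
    (1.0 = identical distributions, 0.0 = disjoint)."""
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    return float(np.minimum(p, q).sum())

# Illustrative 8-bin histograms: point-cloud colors vs. the camera image
cloud = np.array([5, 9, 14, 22, 18, 12, 7, 3], dtype=float)
image = np.array([6, 10, 15, 20, 17, 13, 8, 4], dtype=float)
print(f"alignment = {histogram_intersection(cloud, image):.2f}")
```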

"By fusing LiDAR with a camera, we generate colorized point clouds and enrich the scene representation, which improves interpretability and supports downstream perception tasks that depend on texture and semantic cues," the researchers explained.

Perhaps most impressively, the team captured real-time 4D-plus imaging—a basketball toss where each point showed position, spin velocity, surface reflectivity, and color simultaneously. At 8 Hz across a wide field of view, this revealed motion patterns invisible to standard 3D LiDAR.
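
The velocity channel comes with coherent detection. Generic triangular-chirp FMCW relations (standard textbook math, not the team's specific pipeline; the 1,550 nm carrier and chirp slope are assumed) recover range and radial velocity from the up- and down-chirp beat frequencies:

```python
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed telecom-band carrier, m

def range_and_velocity(f_up_hz: float, f_down_hz: float,
                       slope_hz_per_s: float) -> tuple[float, float]:
    """Triangular-chirp FMCW: the up-chirp beat is f_range - f_doppler,
    the down-chirp beat is f_range + f_doppler; solve for both."""
    f_range = 0.5 * (f_up_hz + f_down_hz)
    f_doppler = 0.5 * (f_down_hz - f_up_hz)
    rng = C * f_range / (2.0 * slope_hz_per_s)  # from f_range = 2*S*R/c
    vel = 0.5 * WAVELENGTH * f_doppler          # from f_doppler = 2*v/lambda
    return rng, vel

# Illustrative beat frequencies for a 1 GHz chirp over 10 us (slope 1e14 Hz/s)
r, v = range_and_velocity(6.88e6, 19.78e6, 1e14)
print(f"range ~ {r:.1f} m, radial velocity ~ {v:.2f} m/s")
```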

The experimental work revealed important system-level tradeoffs that inform future development paths.

"The clearest one is the tension between angular resolution and per-channel measurement headroom," the researchers noted. "In our parallel coherent readout, each channel must occupy its own non-overlapping electrical band. When we reduce the repetition rate, we can indeed push the angular sampling finer, but the experiment shows that this also compresses the per-channel readout bandwidth."

The team identified several priority directions for advancing the technology toward practical deployment. These include deeper monolithic integration on TFLN platforms, development of ultra-wideband swept sources for improved range resolution, and implementation of closed-loop attention policies for event-driven perception.

In the current experiments, fiber links introduce polarization instability that limits the system's material-classification capability.

"However, we envision that monolithic integration will fundamentally resolve this bottleneck," the researchers said. "By shifting from unstable fiber paths to confined on-chip waveguides, we can achieve stable polarization recovery."

The bionic LiDAR system offers potential applications spanning autonomous vehicles, aerial and marine drones, robotics, and neuromorphic vision systems. Beyond LiDAR, reconfigurable combs enable fast spectral analysis for optical communications, coherence tomography, compressive sensing, and precision metrology, according to the researchers.


