MUMBAI, India, March 13 -- Intellectual Property India has published a patent application (202641003454 A) filed by G. Shanmugasundar; T. Balasubramanian; and M. Vanitha, Chennai, Tamil Nadu, on Jan. 13, for 'embedded two-wheeled autonomous robot navigation system utilizing grayscale-optimized vision processing and sensor fusion for efficient path detection.'
Inventor(s) include G. Shanmugasundar; T. Balasubramanian; and M. Vanitha.
The application for the patent was published on March 13, under issue no. 11/2026.
According to the abstract released by Intellectual Property India: "The invention relates to the design, development and experimental validation of a vision-guided autonomous mobile robotic system optimized for operation on resource-constrained embedded hardware. The system employs a two powered-wheel differential drive configuration supplemented by a passive roller support to achieve mechanical stability without requiring active balancing mechanisms. The primary technical contribution lies in the implementation of an efficient computer vision-based navigation framework using grayscale image processing combined with intelligent masking techniques. An ESP32-CAM module is utilized for real-time image acquisition, while an ESP32-WROOM-32 embedded controller executes vision processing and navigation control algorithms. To reduce computational and memory overhead, captured RGB image frames are converted into grayscale image data, resulting in approximately 75% reduction in processing time and 66% reduction in memory usage compared to color-based processing. The grayscale images are further processed using polygonal region-of-interest masking, adaptive thresholding and morphological filtering to isolate lane features and suppress irrelevant visual data. Lane detection is performed through a multi-stage pipeline comprising Gaussian blurring, masked thresholding, edge detection and probabilistic Hough transformation to extract lane parameters. Obstacle detection is achieved using multi-frame differencing of grayscale images, followed by contour analysis and size-based filtering to distinguish valid obstacles from noise. A hierarchical control architecture is employed, wherein high-level navigation decisions are derived from vision outputs and low-level proportional-integral-derivative (PID) controllers regulate individual wheel speeds via H-bridge motor drivers.
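Two of the techniques the abstract names can be sketched in a few lines. The following is an illustrative NumPy sketch, not the inventors' code: luma-weighted grayscale conversion (storing 1 byte per pixel instead of 3 accounts for the roughly 66% memory reduction the abstract cites) and two-frame differencing with a size-based noise filter. The threshold values and the pixel-count stand-in for contour analysis are assumptions for illustration only.

```python
import numpy as np

# ITU-R BT.601 luma weights, a standard choice for RGB-to-grayscale
# conversion (the patent does not specify which weighting is used).
LUMA = np.array([0.299, 0.587, 0.114])

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB frame to an HxW uint8 grayscale frame.

    One byte per pixel instead of three yields the ~66% memory
    reduction reported in the abstract.
    """
    return (rgb.astype(np.float32) @ LUMA).astype(np.uint8)

def obstacle_present(prev: np.ndarray, curr: np.ndarray,
                     diff_thresh: int = 25, min_pixels: int = 100) -> bool:
    """Flag an obstacle when enough pixels change between two grayscale frames.

    min_pixels is a crude stand-in for the abstract's size-based filtering,
    rejecting small changed regions as noise; both thresholds are assumed
    values, not figures from the patent.
    """
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16))
    return int(np.count_nonzero(diff > diff_thresh)) >= min_pixels
```

A full implementation would follow the differencing step with contour extraction (e.g. via an image-processing library) rather than a raw pixel count, but the thresholding logic is the same.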
The system integrates an inertial measurement unit to enhance heading estimation and compensate for wheel slip through sensor fusion. Power is supplied by rechargeable lithium-ion batteries regulated through buck converters to ensure stable operation. Experimental evaluation demonstrates reliable autonomous navigation with an average lane-following deviation of approximately 1.2 cm, obstacle detection reliability of approximately 94% for objects larger than 10 x 10 cm and real-time performance at 18-20 frames per second. The results confirm that grayscale conversion combined ..."
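The heading-fusion idea described above is commonly realized with a complementary filter, which blends a gyroscope's integrated rate (smooth but drifting) with a slip-prone but drift-free heading source such as wheel odometry. The sketch below is a hypothetical illustration of that general technique; the patent does not disclose its fusion algorithm, and the blend factor and update rate here are assumptions.

```python
# Complementary-filter heading fusion (illustrative, not from the patent).
ALPHA = 0.98  # assumed blend factor: trust the gyro short-term,
              # the odometry heading long-term to cancel gyro drift

def fuse_heading(heading: float, gyro_rate: float,
                 odom_heading: float, dt: float) -> float:
    """One filter step returning the fused heading estimate (radians).

    heading      -- previous fused estimate
    gyro_rate    -- angular rate from the IMU gyroscope (rad/s)
    odom_heading -- heading derived from wheel odometry (rad)
    dt           -- time since the last update (s)
    """
    gyro_heading = heading + gyro_rate * dt  # integrate the gyro rate
    return ALPHA * gyro_heading + (1.0 - ALPHA) * odom_heading
```

Run at each control-loop tick, the filter tracks fast rotations via the gyro term while the small odometry weight slowly pulls the estimate back toward the drift-free reference, which is how such a fusion step compensates for wheel slip in one direction and gyro drift in the other.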
Disclaimer: Curated by HT Syndication.