MUMBAI, India, Feb. 27 -- Intellectual Property India has published a patent application (202631020380 A) filed by JIS College of Engineering, Kalyani, West Bengal, on Feb. 21, for 'incorporating compiler-directed waveguide reconfiguration into a photonic tensor-core accelerator.'
Inventors include Subhodip Koley, Pronay Pal, Debasish Saha Roy, Rudra Samanta, Shibani Debnath, Jaya Bhattacharjee and Jayashree Dhara.
The patent application was published on Feb. 27 under issue no. 09/2026.
According to the abstract released by Intellectual Property India: "Reconfigurable photonic tensor-core accelerators use compiler-directed hardware design to exploit the ultra-fast, low-energy properties of integrated silicon photonics. A dense network of optical waveguides connects a mesh of microring-resonator multiply-accumulate (MAC) cells on a chip, and at each junction a thermo-optic or electro-optic 2x2 switch steers light between two orthogonal paths in under a nanosecond. A domain-specific compiler detects sparsity in high-level tensor-computation graphs for machine learning and maps the non-zero sub-tensors onto contiguous patches of the photonic mesh. Before each neural-network layer, the compiler emits reconfiguration bytecode that sets all of the on-chip switches and sends resonance-detuning values to the digital-to-analog converters driving the microring heaters. Physical optical routing is thus configured in real time to match each layer's connectivity, keeping idle waveguides dark and wasting no optical energy. A built-in calibration engine monitors resonance-peak drift caused by temperature changes and manufacturing variation, applying closed-loop feedback to the heater currents to hold the detuning precision within five picometres, so accuracy is maintained without external intervention. Photodiode arrays convert the analogue outputs of the optical MAC operations into digital signals, which are staged in an electronic buffer before transfer over PCIe or Compute Express Link. A new LLVM back-end and a lightweight runtime API expose the full hardware-software stack, letting TensorFlow and PyTorch workloads offload computation kernels seamlessly. Thanks to compiler-directed waveguide reconfiguration, the accelerator achieves 10 times the energy efficiency and 6 times the performance-per-area of fixed-topology photonic neural-network prototypes.
Because routing patterns are encoded at compile time, the same silicon can run future model architectures, such as sparsely activated mixture-of-experts and adaptive transformer variants, without a mask change, extending product longevity and reducing non-recurring engineering costs. The invention offers a scalable, adaptable, and software-compatible path to photonic AI acceleration, resolving the rigidity and calibration challenges that have impeded the commercial viability of integrated optical computing."
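The sparsity-to-patch mapping the abstract describes can be illustrated with a brief sketch: partition a weight matrix into fixed-size tiles, keep only the tiles containing non-zeros, and emit a toy "reconfiguration bytecode" that would activate the corresponding mesh patches while leaving the rest of the mesh dark. The tile size, function names, and bytecode format here are illustrative assumptions, not details from the application.

```python
def nonzero_tiles(matrix, tile=2):
    """Return (row, col) indices of tile-sized blocks holding any non-zero."""
    rows, cols = len(matrix), len(matrix[0])
    patches = []
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            block = [matrix[i][j]
                     for i in range(r, min(r + tile, rows))
                     for j in range(c, min(c + tile, cols))]
            if any(block):
                patches.append((r // tile, c // tile))
    return patches

def emit_bytecode(patches):
    """One switch-setting op per active patch; idle junctions stay dark."""
    return [("SET_SWITCH", r, c, "CROSS") for r, c in patches]

W = [
    [0, 0, 3, 1],
    [0, 0, 2, 0],
    [5, 4, 0, 0],
    [0, 7, 0, 0],
]
patches = nonzero_tiles(W, tile=2)
program = emit_bytecode(patches)
print(patches)   # [(0, 1), (1, 0)]
```

Only the two tiles holding non-zero weights are routed; the all-zero tiles generate no switch operations, which is the mechanism by which idle waveguides stay dark.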
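The closed-loop calibration can likewise be sketched as a simple feedback loop that trims a microring heater current until the measured detuning falls within budget. Only the five-picometre figure comes from the abstract; the linear plant model, tuning coefficient, and loop gain are assumed for illustration.

```python
TUNING_PM_PER_MA = 40.0   # assumed heater tuning efficiency (pm per mA)
TARGET_PM = 5.0           # detuning budget stated in the abstract

def measured_detuning(current_ma, drift_pm):
    """Toy plant: detuning grows linearly with heater current plus drift."""
    return TUNING_PM_PER_MA * current_ma + drift_pm

def calibrate(drift_pm, gain=0.8, max_steps=50):
    """Step the heater current until |detuning| <= TARGET_PM."""
    current_ma = 0.0
    for _ in range(max_steps):
        error_pm = measured_detuning(current_ma, drift_pm)
        if abs(error_pm) <= TARGET_PM:
            break
        # Proportional correction, scaled by the tuning coefficient.
        current_ma -= gain * error_pm / TUNING_PM_PER_MA
    return current_ma, error_pm

current, residual = calibrate(drift_pm=130.0)
print(abs(residual) <= TARGET_PM)   # True
```

Each iteration measures the resonance error and nudges the heater current against it, so a thermal drift of 130 pm is pulled back inside the 5 pm window within a few steps; a real controller would add integral action and sensor filtering.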
Disclaimer: Curated by HT Syndication.