Multimode fiber-based greyscale image projector enabled by neural networks with high generalization ability

Author(s):

Wang, Jian; Zhong, Guangchao; Wu, Daixuan; Huang, Sitong; Luo, Zhi-Chao & Shen, Yuecheng

Abstract:

“Multimode fibers (MMFs) are emerging as promising transmission media for delivering images. However, strong mode coupling inherent in MMFs induces difficulties in directly projecting two-dimensional images through MMFs. By training two subnetworks named Actor-net and Model-net synergetically, [Nature Machine Intelligence 2, 403 (2020)] alleviated this issue and demonstrated projecting images through MMFs with high fidelity. In this work, we make a step further by improving the generalization ability to greyscale images. The modified projector network contains three subnetworks, namely forward-net, backward-net, and holography-net, accounting for forward propagation, backward propagation, and the phase-retrieval process. As a proof of concept, we experimentally trained the projector network using randomly generated phase maps and their corresponding resultant speckle images output from a 1-meter-long MMF. With the network being trained, we successfully demonstrated projecting binary images from MNIST and EMNIST and greyscale images from Fashion-MNIST, exhibiting averaged Pearson’s correlation coefficients of 0.91, 0.92, and 0.87, respectively. Since all these projected images have never been seen by the projector network before, a strong generalization ability in projecting greyscale images is confirmed.”
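
A minimal sketch of the fidelity metric quoted above, Pearson's correlation coefficient between a projected image and its target (the 28×28 stand-in images are purely illustrative, not the authors' code or data):

```python
# Pearson's correlation coefficient between two greyscale images.
import numpy as np

def pearson_cc(projected: np.ndarray, target: np.ndarray) -> float:
    p = projected.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    p -= p.mean()
    t -= t.mean()
    return float(np.dot(p, t) / (np.linalg.norm(p) * np.linalg.norm(t)))

# Illustrative use with random stand-in images (28x28, as in MNIST/Fashion-MNIST)
rng = np.random.default_rng(0)
target = rng.random((28, 28))
projected = target + 0.1 * rng.random((28, 28))   # an imperfect projection
print(f"PCC = {pearson_cc(projected, target):.3f}")
```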

Publication: Optics Express
Issue/Year: Optics Express, Volume 31; Number 3; Pages 4839; 2023
DOI: 10.1364/oe.482551

Fourier-inspired neural module for real-time and high-fidelity computer-generated holography

Author(s):

Dong, Zhenxing; Xu, Chao; Ling, Yuye; Li, Yan & Su, Yikai

Abstract:

“Learning-based computer-generated holography (CGH) algorithms appear as novel alternatives to generate phase-only holograms. However, most existing learning-based approaches underperform their iterative peers regarding display quality. Here, we recognize that current convolutional neural networks have difficulty learning cross-domain tasks due to the limited receptive field. In order to overcome this limitation, we propose a Fourier-inspired neural module, which can be easily integrated into various CGH frameworks and significantly enhance the quality of reconstructed images. By explicitly leveraging Fourier transforms within the neural network architecture, the mesoscopic information within the phase-only hologram can be more handily extracted. Both simulation and experiment were performed to showcase its capability. By incorporating it into U-Net and HoloNet, the peak signal-to-noise ratio of reconstructed images is measured at 29.16 dB and 33.50 dB during the simulation, which is 4.97 dB and 1.52 dB higher than those by the baseline U-Net and HoloNet, respectively. Similar trends are observed in the experimental results. We also experimentally demonstrated that U-Net and HoloNet with the proposed module can generate a monochromatic 1080p hologram in 0.015 s and 0.020 s, respectively.”
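
The module itself is described only at a high level (Fourier transforms applied explicitly inside the network to overcome the limited receptive field). A hedged PyTorch-style sketch of one such Fourier-domain block, not the authors' published module, with all layer sizes chosen for illustration:

```python
# Sketch of a Fourier-domain block: FFT -> learnable 1x1 mixing of the
# real/imaginary channels (global spatial support) -> inverse FFT, with a
# residual connection. Illustrative only; not the module from the paper.
import torch
import torch.nn as nn

class FourierBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.fft2(x, norm="ortho")             # complex spectrum
        feats = torch.cat([spec.real, spec.imag], dim=1)   # to real channels
        re, im = torch.chunk(self.mix(feats), 2, dim=1)
        out = torch.fft.ifft2(torch.complex(re, im), norm="ortho")
        return out.real + x                                # residual connection

y = FourierBlock(channels=8)(torch.randn(1, 8, 64, 64))
print(y.shape)  # torch.Size([1, 8, 64, 64])
```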

Publication: Optics Letters
Issue/Year: Optics Letters, Volume 48; Number 3; Pages 759; 2023
DOI: 10.1364/ol.477630

Simultaneously sorting vector vortex beams of 120 modes

Author(s):

Jia, Qi; Zhang, Yanxia; Shi, Bojian; Li, Hang; Li, Xiaoxin; Feng, Rui; Sun, Fangkui; Cao, Yongyin; Wang, Jian; Qiu, Cheng-Wei & Ding, Weiqiang

Abstract:

“Polarization (P), angular index (l), and radius index (p) are three independent degrees of freedom (DoFs) of vector vortex beams, which have been widely used in optical communications, quantum optics, information processing, etc. Although the sorting of one DoF can be achieved efficiently, it is still a great challenge to sort all these DoFs simultaneously in a compact and efficient way. Here, we propose a beam sorter to deal with all these three DoFs simultaneously by using a diffractive deep neural network (D^2NN) and experimentally demonstrated the robust sorting of 120 Laguerre-Gaussian (LG) modes using a compact D^2NN formed by one spatial light modulator and one mirror only. The proposed beam sorter demonstrates the great potential of D^2NN in optical field manipulation and will benefit the diverse applications of vector vortex beams.”
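
A D^2NN cascades trainable phase layers with free-space diffraction between them. A minimal NumPy sketch of one such step, angular-spectrum propagation through a phase mask (the mask is random here, and the wavelength, pixel pitch, and distance are illustrative, not the experimental values):

```python
# One diffractive-layer step: multiply by a phase mask, then propagate the
# field with the angular spectrum method. Illustrative parameters only.
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

n = 256
rng = np.random.default_rng(1)
phase_mask = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # stands in for a trained layer
field_in = np.ones((n, n), dtype=complex)                  # plane-wave illumination
field_out = angular_spectrum(field_in * phase_mask,
                             wavelength=633e-9, pitch=8e-6, distance=0.05)
intensity = np.abs(field_out) ** 2                         # what a camera would record
```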

Publication: arXiv
Issue/Year: arXiv, 2022
DOI: 10.48550/ARXIV.2212.08825

Optical classification and reconstruction through multimode fibers

Author(s):

Kürekci, Şahin

Abstract:

“When a light beam travels through a highly scattering medium, two-dimensional random intensity distributions (speckle patterns) are formed due to the complex scattering within the medium. Although they contain valuable information about the input signal and the characteristics of the propagation medium, the speckle patterns are difficult to unscramble, which makes imaging through scattering media an extremely challenging task. Multimode fibers behave similarly to scattering media since they scramble the input information through modal dispersion and create speckle patterns at the distal end. Because multimode fibers are compact and low-cost structures with the ability to transmit large amounts of data simultaneously for long distances, decoding the speckle patterns formed by a multimode fiber and reconstructing the input information has great implications in a wide range of applications, including fiber optic communication, sensor technology, optical imaging, and invasive biomedical applications such as endoscopy. In this thesis, we decode the speckle patterns and reconstruct the input information on the proximal end of a multimode fiber in three different scenarios. Our choice of input signals consists of numbers encoded as binary digits, handwritten letters, and optical frequencies. We train a deep learning model to classify and reconstruct the handwritten letters, while for the rest of the cases, we construct a transmission matrix between the input signals and the output speckle patterns, and solve the inverse propagation equation algebraically. In all cases, the relation between a speckle pattern and the corresponding input signal is learned with low error rates; thus, the signals are classified and reconstructed successfully using the speckle patterns they created. Classifying digits, letters, or images with speckle information aims to build useful systems in optical imaging, communication, and cryptography, while the classification of optical frequencies paves the way for building novel spectrometers. In addition to replicating the currently existing compact, low-budget, and high-resolution multimode fiber spectrometer, we also build a single-pixel fiber spectrometer in order to increase the compactness on the detection side and expand the application areas of the system. The single-pixel spectrometer we offer is based on the integrated intensity measurements of a fixed target region, where the light is focused by shaping the wavefront with a spatial light modulator. Spatial light modulators and wavefront shaping techniques are also utilized in other classification tasks in this thesis to generate the desired input signals.”
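
For the cases solved algebraically, the thesis relates inputs to output speckle patterns through a transmission matrix and inverts that relation. A toy NumPy sketch of the idea with a real-valued random matrix standing in for the fiber (an actual MMF transmission matrix is complex-valued and must be measured, e.g. interferometrically):

```python
# Toy transmission-matrix reconstruction: estimate T from calibration pairs,
# then recover an unknown input from its speckle via the pseudo-inverse.
# All matrices are random stand-ins for the measured data.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, n_calib = 64, 256, 200

T_true = rng.normal(size=(n_out, n_in))        # the (unknown) fiber response
X_calib = rng.normal(size=(n_in, n_calib))     # known calibration inputs
Y_calib = T_true @ X_calib                     # corresponding speckle outputs

T_est = Y_calib @ np.linalg.pinv(X_calib)      # least-squares estimate: Y = T X

x_unknown = rng.normal(size=n_in)
y_speckle = T_true @ x_unknown
x_rec = np.linalg.pinv(T_est) @ y_speckle      # algebraic reconstruction
print(np.allclose(x_rec, x_unknown))           # True for this noiseless toy model
```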

Publication: Middle East Technical University, Thesis
Issue/Year: Graduate School of Natural and Applied Sciences, Thesis, 2022
Handle: https://hdl.handle.net/11511/101287

Diffraction model-informed neural network for unsupervised layer-based computer-generated holography

Author(s):

Shui, Xinghua; Zheng, Huadong; Xia, Xinxing; Yang, Furong; Wang, Weisen & Yu, Yingjie

Abstract:

“Learning-based computer-generated holography (CGH) has shown remarkable promise to enable real-time holographic displays. Supervised CGH requires creating a large-scale dataset with target images and corresponding holograms. We propose a diffraction model-informed neural network framework (self-holo) for 3D phase-only hologram generation. Due to the angular spectrum propagation being incorporated into the neural network, the self-holo can be trained in an unsupervised manner without the need of a labeled dataset. Utilizing the various representations of a 3D object and randomly reconstructing the hologram to one layer of a 3D object keeps the complexity of the self-holo independent of the number of depth layers. The self-holo takes amplitude and depth map images as input and synthesizes a 3D hologram or a 2D hologram. We demonstrate 3D reconstructions with a good 3D effect and the generalizability of self-holo in numerical and optical experiments.”
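
Because the angular spectrum propagation sits inside the network, the training loss can compare the numerically reconstructed amplitude to the target image directly, with no labeled holograms. A hedged single-plane PyTorch sketch of such an unsupervised loss term (not the self-holo code; wavelength, pitch, and distance are illustrative):

```python
# Differentiable angular-spectrum step on a phase-only hologram, followed by
# an amplitude loss against the target. Single 2D plane for illustration only.
import torch

def asm_propagate(phase, wavelength, pitch, distance):
    n = phase.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(1 / wavelength**2 - FX**2 - FY**2, min=0.0))
    H = torch.exp(1j * kz * distance)
    return torch.fft.ifft2(torch.fft.fft2(torch.exp(1j * phase)) * H)

target = torch.rand(256, 256)                      # target amplitude (stand-in)
phase = torch.zeros(256, 256, requires_grad=True)  # would come from the network
recon = asm_propagate(phase, wavelength=520e-9, pitch=8e-6, distance=0.1)
loss = torch.mean((recon.abs() - target) ** 2)     # unsupervised amplitude loss
loss.backward()                                    # gradients flow back to the phase
```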

Publication: Optics Express
Issue/Year: Optics Express, Volume 30; Number 25; Pages 44814; 2022
DOI: 10.1364/oe.474137

Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network

Author(s):

Li, Yunzhe; Cheng, Shiyi; Xue, Yujia & Tian, Lei

Abstract:

“Coherent imaging through scatter is a challenging task. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach can make high-quality and highly generalizable predictions through unseen diffusers. Here, we propose a new deep neural network model that is agnostic to a broader class of perturbations including scatterer change, displacements, and system defocus up to 10× depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our deep learning model and visualizing its generalizability based on an unsupervised dimension reduction technique. We show that our model can unmix the scattering-specific information and extract the object-specific information and achieve generalization under different scattering conditions. Our work paves the way to a robust and interpretable deep learning approach to imaging through scattering media.”
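
The interpretation framework is described only as an unsupervised dimension-reduction analysis of the network's internal representations. A hedged sketch of that style of analysis, using PCA from scikit-learn and synthetic features purely as stand-ins for the authors' technique and activations:

```python
# Embed hidden-layer activations in 2D and check whether points group by
# object rather than by scattering condition. PCA and the synthetic features
# are stand-ins, not the authors' method or data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_conditions, n_objects, feat_dim = 4, 50, 512

objects = rng.normal(size=(n_objects, feat_dim))        # object-specific signal
features, condition_labels = [], []
for c in range(n_conditions):
    nuisance = 0.3 * rng.normal(size=feat_dim)          # condition-specific shift
    features.append(objects + nuisance)
    condition_labels.append(np.full(n_objects, c))
features = np.concatenate(features)
condition_labels = np.concatenate(condition_labels)

embedding = PCA(n_components=2).fit_transform(features)
print(embedding.shape)   # (200, 2): scatter-plot, colouring by condition_labels
```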

Publication: Optics Express
Issue/Year: Optics Express, Volume 29; Number 2; Pages 2244; 2021
DOI: 10.1364/oe.411291

DeepSTORM3D: dense three dimensional localization microscopy and point spread function design by deep learning

Author(s):

Nehme, Elias; Freedman, Daniel; Gordon, Racheli; Ferdman, Boris; Weiss, Lucien E.; Alalouf, Onit; Orange, Reut; Michaeli, Tomer & Shechtman, Yoav

Abstract:

“Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their images. This is the key ingredient in single/multiple-particle-tracking and several super-resolution microscopy approaches. Localization in three-dimensions (3D) can be performed by modifying the image that a point-source creates on the camera, namely, the point-spread function (PSF). The PSF is engineered using additional optical elements to vary distinctively with the depth of the point-source. However, localizing multiple adjacent emitters in 3D poses a significant algorithmic challenge, due to the lateral overlap of their PSFs. Here, we train a neural network to receive an image containing densely overlapping PSFs of multiple emitters over a large axial range and output a list of their 3D positions. Furthermore, we then use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach numerically as well as experimentally by 3D STORM imaging of mitochondria, and volumetric imaging of dozens of fluorescently-labeled telomeres occupying a mammalian nucleus in a single snapshot.”
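
How the list of 3D positions is produced from the network output is not spelled out in the abstract; one common final step for this kind of dense prediction is thresholding a 3D confidence grid and keeping local maxima. A hedged NumPy/SciPy sketch with a random grid standing in for a real network output:

```python
# Convert a dense 3D confidence grid into a list of emitter coordinates by
# keeping voxels that are local maxima above a threshold. Grid and voxel
# sizes are illustrative stand-ins.
import numpy as np
from scipy.ndimage import maximum_filter

def grid_to_positions(volume, voxel_size, threshold):
    """Return an (N, 3) array of z, y, x coordinates in physical units."""
    is_peak = (volume == maximum_filter(volume, size=3)) & (volume > threshold)
    zyx = np.argwhere(is_peak)                 # integer voxel indices
    return zyx * np.asarray(voxel_size)        # scale to physical units (e.g. nm)

rng = np.random.default_rng(4)
volume = rng.random((32, 64, 64))              # stand-in for the network's 3D output
positions = grid_to_positions(volume, voxel_size=(50.0, 25.0, 25.0), threshold=0.99)
print(positions.shape)                         # (N, 3) list of 3D positions
```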

Publication: Nature Methods
Issue/Year: Nature Methods, Volume 17; Number 7; Pages 734–740; 2020
DOI: 10.1038/s41592-020-0853-5