Holographic display is an ideal true-3D technology because it provides the essential depth cues and motion parallax the human visual system expects. Real-time hologram computation using deep learning has been explored for intensity and depth images, but generating holograms from real scenes in real time remains challenging due to the trade-off between the speed and accuracy of acquiring depth information. Here, we propose a real-time 3D color hologram computation model based on deep learning that achieves stable focusing from monocular image capture to display. The model integrates monocular depth estimation with a transformer architecture to extract depth cues and predict holograms directly from a single image. In addition, the layer-based angular spectrum method is optimized to strengthen 3D hologram quality and improve model supervision during training. This end-to-end approach enables stable mapping of real-time monocular camera images onto 3D color holograms at 1024×2048-pixel resolution and 25 FPS. The model achieves an SSIM of 0.951 in numerical simulations and demonstrates artifact-free, realistic holographic 3D display in optical experiments across a variety of real scenes. With its high image quality, fast computation, and simple architecture, our method lays a solid foundation for practical applications such as real-time holographic video of real-world scenes.
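For context on the propagation model the abstract refers to, the following is a minimal NumPy sketch of the standard layer-based angular spectrum method: each depth layer is propagated to the hologram plane with a frequency-domain transfer function and the contributions are summed. The function names, parameters, and sign convention are illustrative assumptions; the paper's specific optimizations to this method are not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (meters) via the
    angular spectrum method.

    field      : 2D complex ndarray, source plane (e.g., one depth layer)
    wavelength : wavelength in meters
    pitch      : sampling pitch in meters
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)   # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)

    # Transfer function H(fx, fy) = exp(i 2π z sqrt(1/λ² − fx² − fy²));
    # evanescent components (negative argument) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layer_based_hologram(layers, depths, wavelength, pitch):
    """Sum the back-propagated complex fields of all depth layers at the
    hologram plane (sign convention here is an assumption)."""
    return sum(angular_spectrum_propagate(layer, wavelength, pitch, -d)
               for layer, d in zip(layers, depths))
```

In a layer-based pipeline this propagation would be run once per color channel (with the corresponding wavelength) and per depth layer; because it is built from FFTs, it maps naturally onto GPU tensors, which is consistent with the real-time performance the abstract reports.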