Citation:

Ultra-thin 3D lensless fiber endoscopy using diffractive optical elements and deep neural networks


  • Light: Advanced Manufacturing  2, Article number: (2021)
  • Corresponding author:
    Robert Kuschmierz (robert.kuschmierz@tu-dresden.de) Jürgen W. Czarske (juergen.czarske@tu-dresden.de)
  • These authors contributed equally: Robert Kuschmierz, Elias Scharf

  • Received: 09 June 2021
    Revised: 12 November 2021
    Accepted: 20 November 2021
    Accepted article preview online: 25 November 2021
    Published online: 06 December 2021

doi: https://doi.org/10.37188/lam.2021.030

  • Minimally invasive endoscopes are indispensable in biomedicine. Coherent fiber bundles (CFBs) enable ultrathin lensless endoscopes. However, the propagation of light through a CFB suffers from phase distortions and aberrations that can cause images to be scrambled. The correction of such aberrations has been demonstrated using various techniques for wavefront control, especially using spatial light modulators (SLMs). This study investigates a novel aberration correction without SLM for the creation of an efficient and compact system. The memory effect of CFBs enables a paradigm shift in the use of static diffractive optical elements (DOEs) instead of dynamic modulation with SLM. We introduce DOEs produced by 2-photon polymerization lithography for phase conjugation on a CFB for focusing, raster scanning, and imaging. Furthermore, a DOE with random patterns is used to encode the three-dimensional (3D) object information in a 2D speckle pattern that propagates along the ultra-thin CFB. Neural networks decode the speckles to retrieve the 3D object information using single-shot imaging. Both DOE methods have compact low-cost concepts in common, and both pave the way for minimally invasive 3D endomicroscopy with benefits for optical imaging in biomedicine.
  • [1] Kakkava, E. et al. Selective femtosecond laser ablation via two-photon fluorescence imaging through a multimode fiber. Biomedical Optics Express 10, 423-433 (2019). doi: 10.1364/BOE.10.000423
    [2] Turtaev, S. et al. High-fidelity multimode fibre-based endoscopy for deep brain in vivo imaging. Light: Science & Applications 7, 92 (2018).
    [3] Gissibl, T. et al. Two-photon direct laser writing of ultracompact multi-lens objectives. Nature Photonics 10, 554-560 (2016). doi: 10.1038/nphoton.2016.121
    [4] Li, J. W. et al. Ultrathin monolithic 3D printed optical coherence tomography endoscopy for preclinical and clinical use. Light: Science & Applications 9, 124 (2020).
    [5] Lorenser, D. et al. Ultrathin side-viewing needle probe for optical coherence tomography. Optics Letters 36, 3894-3896 (2011). doi: 10.1364/OL.36.003894
    [6] Huo, L. et al. Forward-viewing resonant fiber-optic scanning endoscope of appropriate scanning speed for 3D OCT imaging. Optics Express 18, 14375-14384 (2010). doi: 10.1364/OE.18.014375
    [7] Wurster, L. M. et al. Endoscopic optical coherence tomography angiography using a forward imaging piezo scanner probe. Journal of Biophotonics 12, e201800382 (2019). doi: 10.1002/jbio.201800382
    [8] Aljasem, K. et al. Scanning and tunable micro-optics for endoscopic optical coherence tomography. Journal of Microelectromechanical Systems 20, 1462-1472 (2011). doi: 10.1109/JMEMS.2011.2167656
    [9] Burkhardt, A. et al. Investigation of the human tympanic membrane oscillation ex vivo by Doppler optical coherence tomography. Journal of Biophotonics 7, 434-441 (2014). doi: 10.1002/jbio.201200186
    [10] Li, J. N. et al. High speed miniature motorized endoscopic probe for optical frequency domain imaging. Optics Express 20, 24132-24138 (2012). doi: 10.1364/OE.20.024132
    [11] Qiu, Z. & Piyawattanamatha, W. New endoscopic imaging technology based on MEMS sensors and actuators. Micromachines 8, 210 (2017). doi: 10.3390/mi8070210
    [12] Philipp, K. et al. Diffraction-limited axial scanning in thick biological tissue with an aberration-correcting adaptive lens. Scientific Reports 9, 9532 (2019). doi: 10.1038/s41598-019-45993-4
    [13] Rahmani, B. et al. Actor neural networks for the robust control of partially measured nonlinear systems showcased for image propagation through diffuse media. Nature Machine Intelligence 2, 403-410 (2020). doi: 10.1038/s42256-020-0199-9
    [14] Rothe, S. et al. Deep learning for computational mode decomposition in optical fibers. Applied Sciences 10, 1367 (2020). doi: 10.3390/app10041367
    [15] Amitonova, L. V., Mosk, A. P. & Pinkse, P. W. H. Rotational memory effect of a multimode fiber. Optics Express 23, 20569-20575 (2015). doi: 10.1364/OE.23.020569
    [16] Chen, H. et al. Binary amplitude-only image reconstruction through a MMF based on an AE-SNN combined deep learning model. Optics Express 28, 30048-30062 (2020). doi: 10.1364/OE.403316
    [17] Zhu, C. Y. et al. Image reconstruction through a multimode fiber with a simple neural network architecture. Scientific Reports 11, 896 (2021). doi: 10.1038/s41598-020-79646-8
    [18] Caravaca-Aguirre, A. M. et al. Real-time resilient focusing through a bending multimode fiber. Optics Express 21, 12881-12887 (2013). doi: 10.1364/OE.21.012881
    [19] Trägårdh, J. et al. Label-free CARS microscopy through a multimode fiber endoscope. Optics Express 27, 30055-30066 (2019). doi: 10.1364/OE.27.030055
    [20] Deng, S. N. et al. Raman imaging through multimode sapphire fiber. Optics Express 27, 1090-1098 (2019). doi: 10.1364/OE.27.001090
    [21] Haufe, D. et al. Transmission of multiple signals through an optical fiber using wavefront shaping. Journal of Visualized Experiments 55407 (2017).
    [22] Büttner, L., Thümmler, M. & Czarske, J. Velocity measurements with structured light transmitted through a multimode optical fiber using digital optical phase conjugation. Optics Express 28, 8064-8075 (2020). doi: 10.1364/OE.386047
    [23] Lee, S. Y. et al. Reciprocity-induced symmetry in the round-trip transmission through complex systems. APL Photonics 5, 106104 (2020). doi: 10.1063/5.0021285
    [24] Gu, R. Y., Mahalati, R. N. & Kahn, J. M. Design of flexible multi-mode fiber endoscope. Optics Express 23, 26905-26918 (2015). doi: 10.1364/OE.23.026905
    [25] Gordon, G. S. D. et al. Characterizing optical fiber transmission matrices using metasurface reflector stacks for lensless imaging without distal access. Physical Review X 9, 041050 (2019).
    [26] Osnabrugge, G. et al. Generalized optical memory effect. Optica 4, 886-892 (2017). doi: 10.1364/OPTICA.4.000886
    [27] Kuschmierz, R. et al. Self-calibration of lensless holographic endoscope using programmable guide stars. Optics Letters 43, 2997-3000 (2018). doi: 10.1364/OL.43.002997
    [28] Warren, S. C. et al. Adaptive multiphoton endomicroscopy through a dynamically deformed multicore optical fiber using proximal detection. Optics Express 24, 21474-21484 (2016). doi: 10.1364/OE.24.021474
    [29] Weiss, U. & Katz, O. Two-photon lensless micro-endoscopy with in-situ wavefront correction. Optics Express 26, 28808-28817 (2018). doi: 10.1364/OE.26.028808
    [30] Scharf, E. et al. Video-rate lensless endoscope with self-calibration using wavefront shaping. Optics Letters 45, 3629-3632 (2020). doi: 10.1364/OL.394873
    [31] Andresen, E. R. et al. Toward endoscopes with no distal optics: video-rate scanning microscopy through a fiber bundle. Optics Letters 38, 609 (2013). doi: 10.1364/OL.38.000609
    [32] Herman, O. et al. Time multiplexed super resolution of multicore fiber endoscope using multimode fiber illumination patterns. Optical Fiber Technology 54, 102122 (2020). doi: 10.1016/j.yofte.2019.102122
    [33] Tsvirkun, V. et al. Flexible lensless endoscope with a conformationally invariant multi-core fiber. Optica 6, 1185-1189 (2019). doi: 10.1364/OPTICA.6.001185
    [34] Davis, J. A. et al. Encoding amplitude information onto phase-only filters. Applied Optics 38, 5004-5013 (1999). doi: 10.1364/AO.38.005004
    [35] Sarkadi, T., Kettinger, Á. & Koppa, P. Spatial filters for complex wavefront modulation. Applied Optics 52, 5449-5454 (2013). doi: 10.1364/AO.52.005449
    [36] Häfner, M., Pruss, C. & Osten, W. Laser direct writing. Optik & Photonik 6, 40-43 (2011).
    [37] Paz, V. F. et al. Development of functional sub-100 nm structures with 3D two-photon polymerization technique and optical methods for characterization. Journal of Laser Applications 24, 042004 (2012). doi: 10.2351/1.4712151
    [38] Toulouse, A. et al. 3D-printed miniature spectrometer for the visible range with a 100 × 100 μm2 footprint. Light: Advanced Manufacturing 2, 20-30 (2021).
    [39] Sartison, M. et al. 3D printed micro-optics for quantum technology: optimised coupling of single quantum dot emission into a single-mode fibre. Light: Advanced Manufacturing 2, 6 (2021).
    [40] Sivankutty, S. et al. Extended field-of-view in a lensless endoscope using an aperiodic multicore fiber. Optics Letters 41, 3531-3534 (2016). doi: 10.1364/OL.41.003531
    [41] Sivankutty, S. et al. Nonlinear imaging through a Fermat’s golden spiral multicore fiber. Optics Letters 43, 3638-3641 (2018). doi: 10.1364/OL.43.003638
    [42] Yang, X., Pu, Y. & Psaltis, D. Imaging blood cells through scattering biological tissue using speckle scanning microscopy. Optics Express 22, 3405-3413 (2014). doi: 10.1364/OE.22.003405
    [43] Porat, A. et al. Widefield lensless imaging through a fiber bundle via speckle correlations. Optics Express 24, 16835-16855 (2016). doi: 10.1364/OE.24.016835
    [44] Singh, A. K. et al. Scatter-plate microscope for lensless microscopy with diffraction limited resolution. Scientific Reports 7, 10687 (2017). doi: 10.1038/s41598-017-10767-3
    [45] Berto, P., Rigneault, H. & Guillon, M. Wavefront sensing with a thin diffuser. Optics Letters 42, 5117-5120 (2017). doi: 10.1364/OL.42.005117
    [46] Antipa, N. et al. DiffuserCam: lensless single-exposure 3D imaging. Optica 5, 1-9 (2018). doi: 10.1364/OPTICA.5.000001
    [47] Wu, J. C. et al. Single-shot lensless imaging with fresnel zone aperture and incoherent illumination. Light: Science & Applications 9, 53 (2020).
    [48] Ludwig, S. et al. Scatter-plate microscopy with spatially coherent illumination and temporal scatter modulation. Optics Express 29, 4530-4546 (2021). doi: 10.1364/OE.412047
    [49] Borhani, N. et al. Learning to see through multimode fibers. Optica 5, 960-966 (2018). doi: 10.1364/OPTICA.5.000960
    [50] Li, S. et al. Imaging through glass diffusers using densely connected convolutional networks. Optica 5, 803-813 (2018). doi: 10.1364/OPTICA.5.000803
    [51] Kakkava, E. et al. Imaging through multimode fibers using deep learning: the effects of intensity versus holographic recording of the speckle pattern. Optical Fiber Technology 52, 101985 (2019). doi: 10.1016/j.yofte.2019.101985
    [52] Zhao, J. et al. Deep learning imaging through fully-flexible glass-air disordered fiber. ACS Photonics 5, 3930-3935 (2018). doi: 10.1021/acsphotonics.8b00832
    [53] Wu, J. C., Cao, L. C. & Barbastathis, G. DNN-FZA camera: a deep learning approach toward broadband FZA lensless imaging. Optics Letters 46, 130-133 (2021). doi: 10.1364/OL.411228
    [54] Zhang, H., Kuschmierz, R. & Czarske, J. Miniaturized interferometric 3-D shape sensor using coherent fiber bundles. Optics and Lasers in Engineering 107, 364-369 (2018). doi: 10.1016/j.optlaseng.2018.04.011
    [55] Hu, X. W. et al. Robust imaging-free object recognition through anderson localizing optical fiber. Journal of Lightwave Technology 39, 920-926 (2021). doi: 10.1109/JLT.2020.3029416
    [56] Sun, J. W. et al. Rapid computational cell-rotation around arbitrary axes in 3D with multi-core fiber. Biomedical Optics Express 12, 3423-3437 (2021). doi: 10.1364/BOE.423035
    [57] Sun, J. W., Koukourakis, N. & Czarske, J. W. Complex wavefront shaping through a multi-core fiber. Applied Sciences 11, 3949 (2021). doi: 10.3390/app11093949
    [58] Vellekoop, I. M. & Mosk, A. P. Focusing coherent light through opaque strongly scattering media. Optics Letters 32, 2309-2311 (2007). doi: 10.1364/OL.32.002309
    [59] Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (2015).


Research Summary

Diffractive optical elements and neural networks enable miniaturized 3D endoscopes

3D-printed diffractive optical elements (DOEs) with sub-micron feature size enable sub-millimeter-sized 3D endoscopes. Miniaturized, flexible endoscopes rely on coherent fiber bundles (CFBs), which relay images from inside the body to a camera for 2D imaging. However, the propagation of light through a CFB suffers from random distortions that hinder 3D imaging without complex and bulky setups. Dr. Kuschmierz, Prof. Czarske and colleagues from TU Dresden, Germany, report on DOEs produced by 2-photon polymerization lithography that compensate these distortions and enable 3D imaging without any lenses on the CFB. Furthermore, a DOE with random patterns can be used in conjunction with neural networks to circumvent the distortions altogether for single-shot 3D imaging. Both methods enable compact, low-cost 3D systems with a resolution of around 1 µm and diameters below 0.5 mm for biomedical applications.




Ultra-thin 3D lensless fiber endoscopy using diffractive optical elements and deep neural networks

  • 1. Laboratory of Measurement and Sensor System Technique, TU Dresden, Helmholtzstrasse 18, 01069 Dresden, Germany
  • 2. Competence center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Germany
  • 3. Institute of Applied Physics, School of Sciences, TU Dresden, Germany
  • 4. Excellence Cluster Physics of Life, TU Dresden, Germany
    • Fiber endoscopes are widely used to access inner regions of the body in medicine or the interior of complex technical systems. Common flexible endoscopes are based on coherent fiber bundles (CFBs), also called multi-core fibers, which relay intensity patterns from the hidden region at the distal fiber facet to the instrument at the proximal fiber facet. A lens system at the distal fiber end (de-)magnifies the core-to-core distance and defines the resolution. CFBs offer diameters down to a few hundred microns for minimally invasive access. However, the distal optics increase the footprint of the endoscope, usually in the millimeter range. This is critical for several biomedical applications. Furthermore, conventional 2D endoscopes offer no depth information without mechanical scanning.

      Recently, ultra-thin endoscopes with three-dimensional (3D) imaging capability have been proposed, which enable access to delicate structures such as the visual cortex, the cochlea, or thin blood vessels1, 2. The thinnest endoscopes are based on single-mode fibers (SMFs) with 3D printed distal optics3–5 for 1D optical coherence tomography (OCT) imaging, with diameters below 100 µm. However, OCT systems for 3D imaging rely on micro-electro-mechanical systems (MEMS) for scanning, which increases their footprint significantly above 1 mm6–12.

      The thinnest imaging endoscopes are based on multi-mode fibers (MMFs), which require no bulky optical elements at the distal fiber facet that is inserted into the specimen. 3D imaging can be achieved with MMF endoscopes of approximately 100 µm diameter2. MMFs, however, exhibit complex optical transfer functions (OTFs) owing to mode mixing and modal dispersion. To enable imaging, MMF endoscopes rely on calibration of their transmission properties. This can be achieved by sequentially exciting all supported fiber modes and recording the OTF using digital holography, or by using neural networks13–21. Programmable optics such as spatial light modulators (SLMs) precode the light at the proximal fiber side to achieve the desired light field distribution on the distal side of the MMF6, 7. This enables focus generation and the formation of more complex light patterns on the distal facet8, 9, 22. The OTF strongly depends on bending as well as on wavelength drifts and temperature changes18, 10, meaning that real-time in situ calibration is required. This is complex because the calibration usually requires a double-sided fiber approach, which is not available in real-world applications23–25.

      By contrast, a CFB guides the different modes in separate fiber cores. No mode mixing occurs when inter-core crosstalk can be ignored. Nevertheless, random phase variations occur between adjacent cores. They can be corrected by digital optical phase conjugation (DOPC) using an SLM, as shown in Fig. 1c. CFBs can be modeled as short phase objects. Such objects exhibit a strong memory effect26, meaning that variations in the in-coupled wavefront translate directly into variations in the out-coupled wavefront. These simplified transmission properties have enabled single-sided and single-shot calibration techniques27–29 and fast 3D imaging using resonant scanners30–32. Nevertheless, such endoscopic systems require complex setups encompassing various adaptive or programmable optical devices. Recently, optimized CFBs with bending-invariant transmission properties and an increased field of view have been reported by Rigneault et al.33 from the Fresnel Institute, France. The authors note that bending-induced phase distortions result from induced optical path length differences in the CFB. These length differences depend on the mean distance to the neutral axis and can be minimized by a twisted fiber core arrangement. However, such fibers are difficult to manufacture and exhibit only a few hundred fiber cores.
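The principle behind DOPC can be illustrated with a minimal numerical sketch. Each fiber core is modeled as a single static phase delay (crosstalk-free, as in Fig. 1a); all parameters are illustrative, not taken from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cores = 10_000                            # hypothetical number of fiber cores
field_in = np.ones(n_cores, dtype=complex)  # flat wavefront coupled in

# Each core adds a static random phase delay (the Delta-Phi_DC term)
dphi_dc = rng.uniform(0, 2 * np.pi, n_cores)
field_out = field_in * np.exp(1j * dphi_dc)

# Without correction the cores add up incoherently -> weak on-axis intensity
i_uncorrected = np.abs(field_out.sum()) ** 2

# Phase conjugation: pre-apply exp(-i Delta-Phi_DC) at the proximal side,
# so the core delays cancel and the wavefront leaves the fiber flat
field_corr = field_in * np.exp(-1j * dphi_dc) * np.exp(1j * dphi_dc)
i_corrected = np.abs(field_corr.sum()) ** 2

print(i_corrected / i_uncorrected)   # enhancement on the order of n_cores
```

The ratio illustrates why DOPC (whether by SLM or by a static DOE) is essential: without conjugation, the random core phases scramble the output field.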

      Fig. 1 

      a Each fiber core exhibits a random phase delay, which adds to the in-coupled wavefront and results in a high spatial frequency disturbance at the fiber output. b Additionally, bending the fiber adds a global tilt to the transmitted wavefront, according to the optical memory effect. c Conventionally, phase distortions are compensated using spatial light modulators for dynamic digital optical phase conjugation (DOPC). Focusing at the distal fiber side is performed by adding the phase structure of a Fresnel lens on the SLM at the proximal fiber side. d The DOE provides focusing and phase conjugation assuming static aberration of the CFB and is placed in front of the proximal fiber facet. e Printed DOE on the proximal fiber facet for aberration correction and focusing.
    Results
    • Extending the above hypothesis to a commercial CFB with a length-independent core arrangement and several tens of thousands of cores means that bending only induces a radius-dependent tilt in the transmitted wavefront, as shown in Fig. 1b. This results in a lateral shift of the acquired image, which can often be tolerated or corrected if the tilt is measured, either actively by correcting the in-coupled beam or with post-processing. The total phase distortion can be written as

      $$ \Delta \Phi = \Delta \Phi_{DC} + \Delta \Phi_{AC}(\alpha,r_n) $$ (1)

      where $ \Delta \Phi_{DC} $ denotes the phase distortions for each fiber core due to manufacturing tolerances, which are static, and $ \Delta \Phi_{AC}(\alpha,r_n) $ denotes the phase distortions due to bending by the angle $ \alpha $ and the distance of a core to the neutral axis $ r_n $, which are dynamic. To verify this assumption, the bending-induced phase distortion was measured holographically using a commercial CFB (Sumita, HDIG, 40 cm). The setup is illustrated in Fig. 2a. A 473 nm CW laser was employed. The CFB was illuminated with a collimated beam from the distal side via beamsplitters BS1 and BS2, single-mode fiber (SMF1), Mirror M1, BS3, and the microscope objective MO2 (20x, NA = 0.40). The transmitted light was imaged onto the CMOS Camera CAM1 (IDS, UI-3482LE, 4.92 MP) via MO1 (10x, NA = 0.25) and lens L3 ($ f = $ 175 mm). The reference beam was generated via SMF2, M2, and BS4. Off-axis holography was employed. The fiber was bent in increments of 1°. The phase difference of consecutive holograms was evaluated to calculate the bending angle-dependent phase difference $ \Delta \Phi_{AC}(\Delta \alpha,r_n) $. The tilt angle $ \gamma $ was calculated by linearly interpolating the low-pass filtered phase difference of consecutive measurements. A linear dependency of

      Fig. 2 

      a Setup for characterizing bending dependent phase deviations of the CFB. Left: Proximal side with instrumentation. Right: Distal side for application. CAM1 is used for the holographic measurement of the phase distortion of the CFB. CAM2 is used to characterize the far-field intensity distribution. b Measured bending angle $ \alpha $ dependent tilt $ \gamma $ of the transmitted wavefront. c Measured bending-induced focus displacement in the far field.
      $$ \tan \gamma = \frac{\Delta \Phi_{AC} / 2\pi}{r_n / \lambda} = n\cdot \alpha $$ (2)

      results, as shown in Fig. 2b, which verifies the underlying hypothesis. To test the effect on the far field of the CFB in a lensless imaging configuration, the fiber was illuminated from the proximal side. The employed beam path encompasses BS1, M3, beam expander BE (5x), M4, SLM (Holoeye, Pluto-2), L7 and L8 (collimation lenses), BS4, L3, and MO1. The SLM was used for DOPC of $ \Delta \Phi_{DC} $. The SLM was employed in an off-axis setup to suppress surface reflections and allow for binary amplitude modulation34, 35. The iris diaphragm was used to filter the higher diffraction orders of the SLM. Additionally, a Fresnel lens ($ f = 300\;{\text{µm}} $) was encoded onto the SLM to achieve a focus point on the distal fiber side. The fiber was bent again, and the lateral shift in the focus position was tracked by CAM2. A lateral focus shift of 4.7 µm/° was observed, confirming the previous observations. An example image of two laterally shifted foci is shown in Fig. 2c.
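The linear tilt relation of Eq. 2 and the resulting far-field focus shift can be checked with a short numerical sketch; the effective index and the use of the lens focal length below are illustrative assumptions, not the measured values:

```python
import numpy as np

# Illustrative parameters (not the measured values from the experiment)
n_eff = 1.5                                # assumed slope n in Eq. 2
f_um = 300.0                               # Fresnel lens focal length in um

alpha = np.deg2rad(np.arange(0.0, 10.0))   # bending in 1 deg increments
gamma = np.arctan(n_eff * alpha)           # Eq. 2: tan(gamma) = n * alpha

# Recover the slope n by a linear fit of tan(gamma) vs. alpha,
# mirroring the linear interpolation of the measured phase differences
slope = np.polyfit(alpha, np.tan(gamma), 1)[0]

# A tilt gamma of the out-coupled wavefront shifts the far-field focus
# laterally by roughly f * tan(gamma); per degree of bending this gives:
shift_per_deg = f_um * n_eff * np.deg2rad(1.0)
print(round(slope, 3), round(shift_per_deg, 2))
```

With the measured shift of 4.7 µm/°, the same relation can be inverted to estimate the actual slope of the fiber under test.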

      The experiments demonstrate that a static phase mask is sufficient to compensate for the high-spatial-frequency phase aberrations, and that fiber bending only induces low-spatial-frequency aberrations. These aberrations mainly result in a focus shift and can be compensated numerically in post-processing or by galvo scanners in real time. 2-photon polymerization has previously been used for printing DOEs36–39. To demonstrate its capability for optical phase conjugation in the context of fiber bundle-based endoscopy, two DOEs were designed and printed on a glass substrate using a commercial 3D printer (Photonic Professional GT, Nanoscribe GmbH).

      DOE1 is a Fresnel lens with $ f = 300\;{\text{µm}} $. DOE2 is the same Fresnel lens plus the conjugated phase of the CFB. The phase patterns are shown in Fig. 3a, g. Both DOEs were characterized using the setup shown in Fig. 2 without the CFB. The deviations between the designs and measurements are shown in Fig. 3c−e and Fig. 3i−k, respectively. Multiple error sources become apparent. The field of view for printing was restricted to patches of 100 µm × 100 µm; to cover a larger area, several patches were stitched. A tilt of the individual patches, as well as a phase jump at the boundary between two patches, is apparent in Fig. 3c, i. Owing to the restricted axial resolution, the height was quantized into three steps, as shown in Fig. 3d, j. Furthermore, a systematic deviation is visible in Fig. 3d, j, where the printed step height equals approximately half the designed step height, probably due to operation or design errors. Finally, Gaussian-distributed random errors can be seen in Fig. 3d, j, which can result from the printing process as well as from electronic noise and aberrations in the holographic measurement. In total, the phase deviations exhibit comparable standard deviations of $ \sigma_{\varphi,DOE1} = 0.89 $ for the Fresnel lens and $ \sigma_{\varphi,DOE2} = 1.04 $ for the phase-compensation DOE, as shown in Fig. 3e, k. Nevertheless, a sharp focus with a full width at half maximum (FWHM) of 1.15 µm is achieved in the focal plane of DOE1. The peak-to-background ratio (PBR), defined as the ratio of the mean focus intensity to the mean intensity outside the focus, reaches $ \text{PBR} = 308 $, as seen in Fig. 3f. Higher diffraction orders are apparent, which result from sub-sampling of the desired wavefront. This limits the field of view to 30 µm, according to

      Fig. 3 

      Left: Fresnel lens. a Design, b Holographic measurement, c−e Deviation, f Far field in the focal plane. Right: Mask for phase compensation of CFB. g Design, h Holographic measurement, i−k Deviation, l Far field in the focal plane.
      $$ \text{FOV} = \frac{\lambda f}{k_\text{c}\sin{\pi/3}} $$ (3)

      where $ k_\text{c} = 3\;{\rm{\mu m}} $ denotes the pitch between printed elements, and $ \sin{\pi/3} $ results from the hexagonal arrangement.
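As a sketch of the DOE1 design step, the following samples the phase of an ideal Fresnel lens on the printing grid and quantizes it to the three available height levels (a square grid replaces the hexagonal one for simplicity; the grid size is arbitrary):

```python
import numpy as np

lam = 0.473e-6            # wavelength of the 473 nm laser
f = 300e-6                # Fresnel lens focal length
k_c = 3e-6                # pitch between printed elements

# Ideal Fresnel-lens phase, sampled at the element pitch and wrapped to 2*pi
x = (np.arange(128) - 64) * k_c
X, Y = np.meshgrid(x, x)
phase = (-np.pi / (lam * f) * (X**2 + Y**2)) % (2 * np.pi)

# Quantize to three levels, mimicking the restricted axial print resolution
levels = 3
phase_q = np.floor(phase / (2 * np.pi) * levels) * (2 * np.pi / levels)

print(len(np.unique(np.round(phase_q, 9))))   # → 3
```

For DOE2, the conjugated (measured) core phases of the CFB would simply be added to `phase` before quantization.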

      While the focus quality of DOE1 was characterized without the CFB, the CFB was reintroduced into the setup to test DOE2. DOE2 was positioned in front of the proximal side of the CFB, as shown in Fig. 4, and a focus without distal optics was achieved through the CFB. The focal plane is depicted in Fig. 3l (FOV = 60 µm). The higher diffraction orders result from the periodic fiber core arrangement and can be suppressed by aperiodic CFBs40, 41. A focal diameter of $ FWHM = 1.25\;{\rm{\mu m}} $, limited by the numerical aperture of the fiber cores27, and a $ PBR = 25 $, limited by the DOE quality, are achieved. This result is considerably worse than that of DOE1. We assume that this is mainly due to misalignment of the CFB and DOE, which could be solved by printing onto the CFB directly, as well as to depolarization effects in the CFB, because we found that the depolarized light exhibits a different random phase distortion and increases the speckled background. Nevertheless, the DOE2-CFB combination was used for 2D raster-scanning microscopy of a USAF test chart (Group 7, Element 6). For this, the SLM shown in Fig. 2a was replaced by a 2D galvanometer scanner with scan rates of up to 1 kHz. The results are shown in Fig. 4 (right). The achieved resolution of 1.25 µm results from the focal diameter. The comparatively low contrast is due to the decreased PBR.
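The focus metrics used above can be computed from a far-field intensity image as follows; `focus_metrics` is a hypothetical helper, with the PBR defined as in the text and the FWHM taken (in pixels) along a line cut through the peak, demonstrated here on a synthetic focus:

```python
import numpy as np

def focus_metrics(img, radius=3):
    """Peak-to-background ratio and FWHM (in pixels) of a focal spot.

    PBR follows the definition in the text: mean intensity inside the
    focus region divided by the mean intensity outside it.
    """
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    yy, xx = np.indices(img.shape)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    pbr = img[mask].mean() / img[~mask].mean()
    line = img[cy]                       # horizontal cut through the peak
    fwhm = np.count_nonzero(line >= line.max() / 2)
    return pbr, fwhm

# Synthetic test image: Gaussian spot on a weak speckle-like background
rng = np.random.default_rng(1)
y, x = np.indices((101, 101))
spot = np.exp(-((y - 50) ** 2 + (x - 50) ** 2) / (2 * 2.0 ** 2))
img = spot + 0.01 * rng.random((101, 101))
pbr, fwhm = focus_metrics(img)
print(round(pbr, 1), fwhm)
```

In practice the pixel FWHM would be converted to micrometres via the imaging magnification, and sub-pixel accuracy would require fitting the line cut.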

      Fig. 4 

      Left: Scheme of DOE2 in front of the CFB at the proximal side. Right: Transmission raster scan of a USAF 1951 test chart (Group 7, Element 6).
    • A different approach to employing 3D printing for 3D endomicroscopy is to encode the 3D information using a random but known phase object. This enables single-shot 3D imaging. Recently, imaging through diffuse scattering media by speckle correlation techniques, exploiting the memory effect, has been presented42–48. As in standard optical systems such as microscopes, the far field of a diffuser can be described by its point spread function ($ PSF $). Under the assumption of a shift-invariant $ PSF $, meaning an infinite memory effect, the speckle pattern $ I(r_D) $ on a detector produced by a two-dimensional object $ O(r_O) $ at distance $ z_O $ results from the convolution of the object with the $ PSF $:

      $$ I(r_D) = O(r_O) * PSF $$ (4)

      With a known $ PSF $, the object can be reconstructed by the cross-correlation

      $$ I(r_D) \otimes PSF = O(r_O)*(PSF \otimes PSF) \approx O(r_O) $$ (5)

      under the assumption that the $ PSF $ is uncorrelated. Furthermore, it has been shown that neural networks can be used for object reconstruction through diffuse scattering media even in the absence of a strong memory effect49–53, and that CFBs enable the transfer of speckle patterns54. To circumvent the remaining issues with the phase-conjugation-based endoscope, we employed a random phase object in the far field of a CFB. This encodes the 3D object information in a 2D intensity pattern, which can be transferred through the CFB regardless of bending-induced phase distortions. The speckle pattern of a 3D object is then reconstructed using a pretrained convolutional neural network (CNN). A schematic of this technique is shown in Fig. 5a.
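Eqs. 4 and 5 can be verified with a minimal simulation, assuming an ideal shift-invariant speckle PSF and circular (FFT-based) convolution; all sizes and positions are arbitrary:

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(2)
N = 64

# Speckle-like PSF of the diffuser (exponential intensity statistics),
# assumed shift-invariant within the memory effect
psf = rng.exponential(1.0, (N, N))
psf /= psf.sum()

# Object: three incoherent point sources in one plane
obj = np.zeros((N, N))
points = [(10, 10), (40, 30), (25, 50)]
for p in points:
    obj[p] = 1.0

# Eq. 4: camera speckle image = object convolved with the PSF
img = np.real(ifft2(fft2(obj) * fft2(psf)))

# Eq. 5: cross-correlating with the known PSF recovers the object,
# because the PSF autocorrelation is sharply peaked
recon = np.real(ifft2(fft2(img) * np.conj(fft2(psf))))

# The three strongest reconstruction pixels recover the source positions
found = {tuple(np.unravel_index(i, (N, N)))
         for i in np.argsort(recon.ravel())[-3:]}
print(found == set(points))   # → True
```

The flat pedestal of the PSF autocorrelation appears as a constant background in `recon`; for extended objects this background and its fluctuations limit the reconstruction quality, which motivates the neural-network decoder used here.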

      Fig. 5 

      a Scheme and principle of a diffuser endoscope. The diffuser at the distal side codes the 3D object information into a 2D speckle pattern, which is transferred through the CFB to the proximal side. The 3D information is recovered in real time using a neural network. b PSF for varying distances (top to bottom) and vertical positions (left to right). Horizontal lines indicate the vertical shift of the PSF. c Example reconstructions of 2D objects at a random and unknown distance. d Example reconstructions of multilayered 3D objects at a random and unknown distance.

      To generate sufficient data for network training, validation, and testing, we sequentially captured the PSFs of $ 32 \times 32 \times 9 $ point sources spanning a volume of $ 100 \times 100 \times 400\;{\rm{\mu m}}^3 $, using a 3-axis scanning system. The focus was imaged in front of the random phase object. We employed a commercial diffuser (Thorlabs DG10-120-A). The diffuser was placed 500 µm in front of a CFB (Fujikura, FIGH-50-1100N, 50k cores, length: 10 cm) with the diffuse plane facing the fiber facet. The transmitted light was imaged using a CMOS camera (uEye, 8 bit). The images were truncated to a square encompassing approximately 90% of the CFB area and resized to $ 64 \times 64 $ pixels. Subsequently, virtual objects were generated using the MNIST database of handwritten digits. The images were rescaled to $ 32 \times 32 $ pixels and shifted randomly in the x- and y-directions for data augmentation. Under the assumption of incoherent radiation, speckle patterns of 3D objects, resolved with $ 32 \times 32 \times 9 $ voxels, were then generated by multiplying the objects with the recorded PSFs. The resulting speckle images were normalized to eight bits.
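The incoherent forward model used for data generation can be sketched as follows; the grid is shrunk from 32 × 32 × 9 points and 64 × 64 camera pixels to keep the sketch small, and random arrays stand in for the recorded PSFs and the MNIST digits:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for the recorded PSF stack: one speckle image per point-source
# position (grid shrunk to 3 x 8 x 8 positions, 32 x 32 camera pixels)
nz, ny, nx, cam = 3, 8, 8, 32
psfs = rng.random((nz, ny, nx, cam, cam)).astype(np.float32)

def speckle_image(obj):
    """Incoherent forward model: weight each point-source PSF by the
    object intensity in that voxel, sum, and normalize to 8 bit."""
    img = np.tensordot(obj, psfs, axes=([0, 1, 2], [0, 1, 2]))
    return (255 * img / img.max()).astype(np.uint8)

# Virtual 3D object with two filled planes (random stand-in for the digits)
obj = np.zeros((nz, ny, nx), dtype=np.float32)
obj[0] = rng.random((ny, nx)) > 0.7
obj[2] = rng.random((ny, nx)) > 0.7
img = speckle_image(obj)
print(img.shape, img.dtype)   # → (32, 32) uint8
```

Pairs of `(img, obj)` generated this way form the network's input and target; at full resolution the same `tensordot` contraction applies unchanged.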

      A common task in endoscopy is the imaging of an object at a constant but unknown distance55. Therefore, the network was tested on 1,000 2D objects for each object plane. On average, the network attributed more than 98% of the total intensity to the correct object plane, independent of the object distance. Furthermore, in 100% of cases, the majority of the intensity was attributed to the correct plane; the correct distance of the 2D objects was thus always detected. The reconstruction quality was assessed using the correlation coefficient $ \rho $ between the reconstruction and the object in the identified plane. The reconstruction quality is almost independent of the object distance, as shown in Fig. 6 (top). For qualitative comparison, three example reconstructions are shown in Fig. 5c. The chosen examples represent reconstructions of differing quality: $ \rho = 0.88 $ (10% quantile), $ \rho = 0.93 $ (50% quantile), and $ \rho = 0.96 $ (90% quantile). Even in the worst case, the object is still clearly recognizable. To test robustness, noise (uniform distribution, range [−2, 2]) was added to the speckle images (range [0, 255]) before reconstruction. While this slightly reduces the reconstruction quality, the reconstructed objects remain recognizable.
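The quality metric $ \rho $ is the standard Pearson correlation coefficient between reconstruction and ground truth; a minimal sketch (the function name is ours):

```python
import numpy as np

def corr_coeff(recon, obj):
    """Pearson correlation rho between a reconstruction and the
    ground-truth object in the identified plane (flattened to vectors)."""
    a = recon.ravel().astype(float) - np.mean(recon)
    b = obj.ravel().astype(float) - np.mean(obj)
    return float(a @ b) / np.sqrt((a @ a) * (b @ b))
```

A perfect reconstruction (up to brightness offset and scale) gives $ \rho = 1 $; an unrelated image gives $ \rho \approx 0 $.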

      Fig. 6 

      Top: Reconstruction of single-layered objects. Correlation coefficient $ \rho $ between the object and reconstruction for each plane, without (blue) and with (red) artificial noise. The boxes are offset laterally for better readability but correspond to the same object planes. Bottom: Reconstruction of 3D objects. Correlation coefficient $ \rho $ between object and reconstruction vs. number of filled object planes for noiseless (blue) and noisy (red) camera images.

      To test the CNN on the more general problem of 3D objects with varying object distances, 1,000 objects and their corresponding speckle patterns with one to nine filled planes were generated. The same CNN that was trained on single- and double-layered objects was employed for object recovery. Fig. 6 (bottom) shows the achieved correlation depending on the number of filled planes. The reconstruction quality deteriorates with increasing object complexity, and the deterioration increases with the addition of noise. We attribute this to the decrease in speckle contrast $ C \propto n_p^{-1/2} $ with an increasing number of independent point sources $ n_p $ forming the 3D object. This reduces the signal-to-noise ratio from 20 dB for objects with a single object layer to 7 dB for objects with nine layers. Nevertheless, for objects with two and three filled layers, the reconstructions are still clearly recognizable. Fig. 5d shows three example reconstructions for objects with two to four filled layers. The examples shown represent reconstructions with the median of the achieved correlation coefficients.
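The assumed contrast scaling can be checked numerically: the intensity of a single fully developed speckle pattern is exponentially distributed (contrast 1), and the incoherent sum of $ n_p $ independent patterns reduces the contrast as $ n_p^{-1/2} $. A short Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_contrast(n_sources, n_px=200_000):
    """Contrast C = std(I) / mean(I) of the incoherent sum of n_sources
    independent, fully developed speckle patterns (exponential intensity)."""
    intensity = rng.exponential(size=(n_sources, n_px)).sum(axis=0)
    return intensity.std() / intensity.mean()

# Expected scaling: C close to 1, 1/2, 1/3 for 1, 4, 9 sources.
```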

    Discussion
    • We investigated compact DOE-based lensless fiber endoscopes with ultra-thin footprints for 3D imaging. Two techniques, DOE-grating and DOE-diffuser-based endoscopy, were introduced. Both enable paradigm-shifting applications with minimally invasive access in biomedicine.

      Lensless endoscope I: Lithography has been used for many years in the manufacturing of simple DOEs, such as gratings and Fresnel lenses. We demonstrate that DOEs made by 2-photon lithography can be used to conjugate arbitrary phase distortions in CFBs. Pre-coding the transmitted light, for instance with a Fresnel lens, can be performed with the same DOE. Placing the DOE on the proximal fiber facet enables lensless raster-scanning endomicroscopy with a lateral resolution of approximately 1 µm. This can result in a less expensive, simpler, and more robust setup compared to digital optical phase conjugation (DOPC) using SLMs. Additionally, using a 3D-printed DOE in transmission is potentially more light-efficient than the reflective liquid-crystal-on-silicon SLM normally used for DOPC. Currently, the main limitations on image quality are DOE quality and positioning, which reduce image contrast. Thus, great potential for image quality, as well as robustness, arises from integrating the DOE directly onto the CFB. Further advances can be made in combination with advanced fiber design, for instance, using an aperiodic core arrangement to suppress higher diffraction orders and bending-insensitive fibers. The resulting fiber is phase-conserving, meaning that arbitrary light fields can be generated for further applications, such as optogenetic cell stimulation or fiber-optical tweezing56, 57. As a disadvantage in endomicroscopy, the volume information is time-coded, limiting the temporal resolution.

      Lensless endoscope II: An unstructured random DOE (diffuser) can be realized by 3D printing, for example, onto a glass plate, or by using standard optical diffusers. Placed in the far field of a CFB, such a diffuser encodes the 3D object information in a 2D speckle pattern that can be transferred through the CFB without considering phase scrambling. The 3D object can then be reconstructed using neural networks. In contrast to other ultrathin 3D endoscopes, imaging is performed in a single shot. Combined with neural networks for object reconstruction, this can enable real-time 3D imaging, for instance, in time-resolved GFP-based or calcium imaging in optogenetics or auto-fluorescence imaging in cancer diagnostics. In contrast to structured DOEs for phase conjugation, the DOE quality is insignificant because the exact transmission properties are learned by signal processing. Nevertheless, reproducible DOEs offer great potential because the time-consuming training process could then be shortened. However, the speckle contrast decreases with an increasing number of scattering sources. Thus, this technique requires sparse objects such as stained tissues. Furthermore, reconstruction of tissue samples requires corresponding training data, which could be acquired by displaying microscope recordings to the system using a micro-projector.

    Materials and methods
    • DOE quality: When using the CFB as a phased array, the achievable image contrast in raster scanning depends on the peak-to-background ratio ($ PBR $) of the generated focus, also called the enhancement factor. The $ PBR $ describes the focus intensity $ I_f $ in relation to the mean background intensity $ I_b $,

      $$ PBR = \frac{I_f}{I_b} = \frac{\pi}{4}(N-1)+1\approx \frac{\pi}{4}N $$ (6)
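As a quick check of Eq. (6): for the roughly $ N = 1500 $ phased cores used in the comparison further below, the ideal enhancement evaluates to about 1180. A one-function sketch:

```python
import math

def pbr_ideal(n_cores):
    """Ideal peak-to-background ratio of Eq. (6) for N equal-amplitude,
    independently phased cores: PBR = (pi/4)(N - 1) + 1."""
    return math.pi / 4 * (n_cores - 1) + 1

# pbr_ideal(1500) is roughly 1178, well above the measured values,
# which is why the phase-error analysis matters.
```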

      where $ N $ denotes the number of independent phasors or fiber cores58. However, Eq. 6 only holds under the assumption of ideal phasors with equal amplitude. In reality, phase errors occur, as seen in Fig. 3. Quantization errors $ \Delta \varphi_Q $ result from the DOE design process. The DOE was designed to contain three steps that are equally spaced over $ \pm \pi $, yielding a maximum quantization error of $ \Delta \varphi_{Q,max} \approx \pm \pi/3 $. Furthermore, a systematic deviation $ \Delta \varphi_S $ is apparent between the designed height and the measured phase. This can be caused by a deviating refractive index or non-optimal design of the printed pillars, as well as misalignment of the printer. The maximum systematic deviation is $ \Delta \varphi_{S,max} \approx \pm \pi/2 $. Finally, random errors $ \Delta \varphi_R $ occur with a standard deviation of $ \sigma_{\varphi,R}\approx 0.5 $, which can result from the printing process as well as from electronic noise and aberrations in the holographic measurement. Assuming uniform distributions for $ \Delta \varphi_Q $ and $ \Delta \varphi_S $ and independence of all three contributions, we can summarize them into the total phase uncertainty:

      $$ \begin{split} \sigma_{\varphi \text{DOE2}} & = ( {\Delta \varphi_{Q,max}^2}/3+ {\Delta \varphi_{S,max}^2}/3+ \sigma_{\varphi,R}^2)^{1/2}\\ & = (0.37+0.41+0.25)^{1/2} \approx 1.04 \end{split} $$

      $ \sigma_{\varphi} $ is dominated by the quantization and systematic contributions. To estimate the effect of phase noise on the $ PBR $, a Monte Carlo simulation was performed assuming normally distributed phase deviations. The relation $ PBR_\text{noise} = PBR\cdot \exp(-\sigma_{\varphi}^2) = {\pi}/{4}N\cdot e^{-\sigma_{\varphi}^2} $ was found. This is in good agreement with the experimental data for DOE1, where $ PBR_\text{DOE1} = 308 $ was measured and the model predicts $ PBR_\text{DOE1}(N = 1500; \sigma_{\varphi} = 0.9) \approx 550 $. The comparably worse performance of DOE2 in combination with the CFB is attributed to misalignment and the free-space propagation between the DOE and CFB, and it emphasizes the requirement to print the DOE directly onto the CFB.
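The quoted relation can be reproduced with a few lines of Monte Carlo: the on-axis intensity of $ N $ unit phasors with Gaussian phase errors of width $ \sigma_{\varphi} $ drops by roughly $ e^{-\sigma_{\varphi}^2} $ relative to the ideal $ N^2 $, and the background is unaffected, so the $ PBR $ scales by the same factor. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def focus_efficiency(n_cores, sigma, n_trials=500):
    """Mean on-axis intensity |sum_j exp(i*phi_j)|^2 of n_cores unit
    phasors with Gaussian phase errors (std sigma), normalized to the
    ideal value n_cores**2."""
    phi = rng.normal(0.0, sigma, size=(n_trials, n_cores))
    peak = np.abs(np.exp(1j * phi).sum(axis=1)) ** 2
    return peak.mean() / n_cores**2

# For sigma = 0.9 the efficiency is close to exp(-0.81), consistent with
# PBR_noise = (pi/4) * N * exp(-sigma**2).
```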

      Printing process: The DOEs were designed from circular pillars with a diameter of 2.8 µm and a lateral pitch of 3.0 µm to match the fiber core dimensions. The pillar height $ h $ required for a phase delay $ \Delta \Phi_{DC} $ follows from the photosensitive polymer refractive index $ n_p = 1.535 $ as

      $$ h = \cfrac{\Delta \Phi_{DC}}{2\pi}\cdot\cfrac{\lambda}{n_p-1} $$ (7)

      yielding a maximum height for $ 2\pi $ modulation of $ h_\text{max} = 884 \;{\rm{nm}} $. A base layer was first printed on the substrate to guarantee full-field adhesion. The DOEs were realized using a commercial 3D printer (Photonic Professional GT, Nanoscribe GmbH).
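Eq. (7) is straightforward to evaluate. The design wavelength is not stated in this section, so this sketch infers it from $ h_\text{max} = 884 $ nm (an assumption of the sketch, not a value from the text):

```python
import math

n_p = 1.535        # refractive index of the photosensitive polymer
h_max_nm = 884.0   # printed height for a full 2*pi phase delay

# Inferred design wavelength, ~473 nm (assumption: back-calculated from h_max).
wavelength_nm = h_max_nm * (n_p - 1)

def pillar_height_nm(delta_phi):
    """Pillar height h = (delta_phi / (2*pi)) * lambda / (n_p - 1), Eq. (7)."""
    return delta_phi / (2 * math.pi) * wavelength_nm / (n_p - 1)
```

A three-level DOE with phase steps spaced equally over $ 2\pi $ would thus use pillar heights of roughly 0, 295, and 589 nm.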

      Network architecture and training: The neural network employed for 3D object reconstruction consists of nine independent convolutional neural networks (CNNs) with identical architecture. The number of CNNs corresponds to the number of object planes, and each CNN reconstructs the information of one associated object plane. We found that this architecture offers an improved training speed compared to a single 3D CNN without a loss in performance. The architecture of a single CNN, which is derived from U-Net59, is depicted in Fig. 7. The network consists of four encoder stages, with an image input size of $ 64\;\times\;64 $ pixels. Since the object plane was restricted to $ 32\;\times\;32 $ pixels, only three decoder stages are employed. The last encoder stage uses two dropout layers to inhibit overfitting. In total, the CNN consists of 52 hidden layers. For training, each CNN was presented with the same 90,000 speckle patterns resulting from single- and double-layered objects, learning to reconstruct images in the corresponding object plane and to reject information from non-corresponding object planes.

      Fig. 7 

      Scheme of the Convolutional Neural Network (CNN) consisting of four encoder stages and three decoder stages. The last encoder stage uses two dropout layers to reduce overfitting. In total, nine individually trained CNNs were employed, with each one reconstructing one associated object plane.
    Acknowledgements
    • This work was supported by the German Research Foundation (DFG) under grants (CZ 55/47-1) and (CZ 55/48-1). We want to thank Heifeng Xu, Mariana Medina-Sánchez, and Oliver G. Schmidt from the Institute for Integrative Nanosciences, Leibniz IFW Dresden, for manufacturing the DOEs.
