Introduction
The processing of materials using ultrafast pulsed lasers has opened up new opportunities for modern microfabrication and production. Advances in generating ultrashort laser pulses, spanning from femtoseconds (fs) to a few picoseconds (ps), have made ultrafast laser systems commercially available at reasonable cost. This development has enabled numerous researchers to investigate light-matter interactions under extremely high irradiance1-8. For example, in a transparent substrate, the interaction with short, intense laser pulses can lead to nonlinear absorption phenomena, such as ‘multiphoton’ absorption9, during which laser radiation is locally absorbed. In fused silica, this can produce a localized increase in density, which is useful for inscribing direct-write waveguides10, 11 and for triggering self-organized nanostructures12-15.
The quality of components produced via ultrafast laser manufacturing is significantly affected by the process parameters. For example, the threshold for generating strong thermal accumulation may depend on the laser pulse energy, pulse duration, scan speed, and material properties. Extensive tuning of the laser parameters is therefore required even for a single material. Accordingly, a variety of sensors and measurement systems is available for monitoring the energy source and the material. Commonly used techniques include X-ray tomography16, laser-scanning confocal microscopy17, scanning electron microscopy18 and optical coherence tomography19. However, most of these approaches operate ex-situ; that is, the iterative optimization of a parameter set is performed on finished structures. Such operations require time-consuming analyses and incur significant production costs.
In general, in-situ and real-time monitoring of ultrafast laser material processing is of great importance in the manufacturing field20-23. The in-situ monitoring method most commonly used in ultrafast laser manufacturing is wide-field optical microscopy20. This approach captures two-dimensional snapshots of light scattering, revealing the transient refractive index distribution arising from different sample regions. Another in-situ method is optical coherence tomography21, 22, which allows the quality and properties of 3D microstructures to be assessed in real time. In addition, broadband coherent anti-Stokes Raman scattering (CARS) microscopy serves as an in-situ, real-time tool for the microscopic characterization of structures fabricated by two-photon polymerization23. However, these microscopic approaches have comparatively slow imaging speeds, because the imaging speed is primarily limited by the camera frame rate and data bandwidth. In dynamic scenarios where the laser-material interaction demands frame rates above 500 fps, high-speed imaging and large-scale data storage become paramount. Furthermore, considering the spatial bandwidth, a comprehensive and detailed analysis requires an optical configuration that combines a wide field-of-view (FOV) with high resolution. In general, the resulting high-throughput data cannot be transferred and stored at the rates that real-time imaging demands. Hence, conventional microscopic imaging is unsuitable for routine on-the-fly in-process monitoring.
An optimal in-situ and real-time imaging method should meet four key criteria: large FOV, high resolution, fast imaging speed, and low data bandwidth. However, traditional microscopic techniques cannot quickly capture high-resolution images of large areas. There are several ways to increase FOV and resolution simultaneously. Direct methods use an objective lens with a higher numerical aperture or a larger sensor24, 25; both require expensive hardware, which raises costs. Another approach is computational, such as Fourier Ptychography26, 27, which acquires a series of low-resolution images and merges them computationally into a high-resolution image. However, this process is time-consuming and fails to reconstruct high-speed scenes. Instead of employing the aforementioned methods, we opted for a straightforward dual-path approach consisting of two parallel optical paths: one optimized for high-resolution imaging and the other designed for wide-field imaging.
A promising approach to the limited imaging speed and data storage is snapshot compressive imaging28-38, which combines a hardware encoder and a software decoder to enable high-speed imaging in a single snapshot. This technique uses temporally varying masks to modulate the scene and a regular CCD/CMOS camera for detection. Compressed ultrafast photography (CUP) follows a similar principle but employs temporally sheared masks for modulation and a streak camera for scene capture39-45. Leveraging the ultrafast electronic response of streak cameras, CUP stands out as the world's fastest camera, capturing transient dynamic events at 100 billion frames per second. This breakthrough has enabled applications such as measuring the speed of light39 and fluorescence lifetime imaging41. However, a streak camera is approximately 100 times more expensive than CCD/CMOS alternatives, and this substantial price difference restricts its practical use among ordinary researchers. Employing a chirped pulse for illumination eliminates the need for a streak camera to capture ultrafast processes45; nevertheless, it remains challenging to image self-luminous processes such as fluorescence. Hence, in this paper, we propose a more practical and low-cost solution: snapshot compressive imaging. By harnessing advanced deep-learning reconstruction algorithms32-38, we can decode a high-speed scene from a single snapshot measurement, lowering data storage while simultaneously increasing imaging speed.
In this study, we demonstrate, for the first time, in-situ and real-time monitoring of ultrafast laser material processing using snapshot compressive microscopy. Specifically, to mitigate the inherent trade-off between spatial resolution, FOV, and imaging speed, we propose dual-path snapshot compressive microscopy (DP-SCM) for laser material processing. DP-SCM comprises two parallel optical paths, one optimized for high-resolution imaging and the other for wide-field imaging. By combining these dual measurements, DP-SCM reconstructs high-resolution images over a large FOV at high imaging speed. In the Principle section, we describe the fundamental principles and derive a mathematical formulation of snapshot compressive imaging. The experimental setup is presented in the Results and Discussion section, where we validate the system's FOV, lateral resolution, imaging speed, and reconstruction algorithm using a high-resolution target. Furthermore, to verify the feasibility of DP-SCM for in-situ and real-time monitoring of femtosecond laser processing, we observe the laser scanning process while translating the sample stage and rotating the scanning mirror. Finally, we investigate the growth of self-organized periodic structures using our DP-SCM system. With the high-speed camera running at 1000 fps, we closely monitor the development of nanogratings, validating the potential of our system to unveil new material mechanisms.
Principle
Fig. 1 shows the sensing process of snapshot compressive imaging, which can be divided into optical encoding, compressive measurement, and reconstruction. The high-speed dynamic scene, modeled as a time series of two-dimensional images, is first collected by the objective lens and relayed to a digital micromirror device (DMD) or shifting mask, known as an optical encoder. The optical encoder imposes a spatially varying binary mask at each timestamp to encode the high-speed scenes. The modulated scene is then relayed to the camera. To capture dynamic scenes in a single camera shot, multiple variant masks are displayed on the optical encoder during camera exposure. A snapshot from the camera integrates tens of temporal frames of a high-speed scene to form a compressive measurement. Finally, by feeding the compressive measurement and premeasured masks into iterative algorithms or deep neural networks, a high-speed scene can be reconstructed.
Fig. 1 Principle of snapshot compressive imaging, which consists of optical encoding, compressive measurement, and reconstruction.
We have formulated a forward imaging model for the snapshot temporal compressive imaging system. Let $ (x,y,t) $ denote the spatiotemporal coordinates of a dynamic scene $ {\bf{O}}(x,y,t) $. Temporally varying masks are defined as $ {\bf{C}}(x,y,t) $. The compressed measurement is modeled as the temporal integration of the product between the corresponding masks and the scene:
$$ I(x',y')=\int_{t=0}^{T}O(x,y,t)\cdot C(x,y,t)\,dt $$ (1)

where $ I(x',y') $ is the continuous representation of the compressed measurement on the detector plane, integrated over the exposure time $ T $.
In discretized form, considering $ B $ discrete time slots, $ B $ high-speed frames $ {\left \{ {{\bf{X}}}_{k} \right \} } _{k=1}^{B}\in {\mathbb R}^{n_x\times n_y} $ are modulated by the coding masks $ {\left \{ {{\bf{M}}}_{k} \right \} } _{k=1}^{B}\in {\mathbb R}^{n_x\times n_y} $. The discretized measurement is thus given by
$$ {\bf{Y}}= \sum\limits_{k=1}^{B} {\bf{X}}_k \odot {\bf{M}}_k+{\bf{G}} $$ (2)

where $ \odot $ denotes the elementwise product and $ {\bf{G}} \in {\mathbb R}^{n_x\times n_y} $ represents measurement noise.
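To make the discretized model concrete, the following minimal NumPy sketch simulates Eq. 2: $ B $ random frames are modulated by binary masks and summed into a single noisy snapshot. The frame size, compression ratio $ B $, and noise level are illustrative placeholders rather than values from our system.

```python
import numpy as np

# Illustrative sizes: B high-speed frames of nx-by-ny pixels are
# compressed into a single snapshot (placeholder values).
nx, ny, B = 256, 256, 10
rng = np.random.default_rng(0)

X = rng.random((B, nx, ny))                             # high-speed scene X_k
M = rng.integers(0, 2, size=(B, nx, ny)).astype(float)  # binary coding masks M_k
G = 0.01 * rng.standard_normal((nx, ny))                # measurement noise G

# Eq. 2: elementwise modulation, then integration over the B time slots.
Y = (X * M).sum(axis=0) + G
print(Y.shape)  # (256, 256): one camera snapshot encodes all B frames
```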
Subsequently, the sensing process is vectorized. Define

$$ {\bf{x}}=\left[{\bf{x}}_1^{\bf{T}},\dots,{\bf{x}}_B^{\bf{T}}\right]^{\bf{T}} $$ (3)

where $ {\bf{x}}_k = {\bf{vec}}({\bf{X}}_k) $ denotes the vectorization of the $ k $-th frame, obtained by stacking its columns. Let
$$ \Phi=\left[{\bf{D}}_1,\dots,{\bf{D}}_B\right] $$ (4)

where $ {\bf{D}}_k = \mathrm{Diag}({\bf{vec}}({\bf{M}}_k)) $ is the diagonal matrix whose diagonal entries are the elements of the vectorized mask $ {\bf{M}}_k $. By rewriting Eq. 2 in terms of $ {\bf{x}} $ and $ \Phi $, we obtain the sensing process in its vectorized formulation,
$$ {\bf{y}}={\bf {\Phi x}} +{\bf {g}} $$ (5)

where $ {\bf{y}}={\bf{vec}}({\bf{Y}}) $ and $ {\bf{g}}={\bf{vec}}({\bf{G}}) $.
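As a quick sanity check on this vectorized formulation, the sketch below assembles $ \Phi $ from the diagonal blocks of Eq. 4 (as a sparse matrix, since $ \Phi $ has only $ B \cdot n_x n_y $ nonzero entries) and verifies that Eq. 5 reproduces the frame-wise model of Eq. 2. The column-major ordering of `vec` is our assumption for "stacking the columns".

```python
import numpy as np
from scipy import sparse

nx, ny, B = 64, 64, 4
rng = np.random.default_rng(1)
X = rng.random((B, nx, ny))                             # frames X_k
M = rng.integers(0, 2, size=(B, nx, ny)).astype(float)  # masks M_k

# vec() stacks columns, i.e. column-major (Fortran) ordering.
vec = lambda A: A.flatten(order="F")

# Eq. 4: Phi concatenates the diagonal matrices D_k = Diag(vec(M_k)).
Phi = sparse.hstack([sparse.diags(vec(M[k])) for k in range(B)])
# Eq. 3: x stacks the vectorized frames.
x = np.concatenate([vec(X[k]) for k in range(B)])

y = Phi @ x                      # Eq. 5 (noise-free)
Y = (X * M).sum(axis=0)          # Eq. 2 (noise-free)
assert np.allclose(y, vec(Y))    # both formulations agree
```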
This formulation resembles compressed sensing but involves a special structure in the sensing matrix: as Eq. 4 shows, $ \Phi $ is a concatenation of diagonal matrices. We aim to recover $ {\bf{x}} $ given the measurement $ {\bf{y}} $ and the sensing matrix $ \Phi $, a typical ill-posed inverse problem that can be solved using optimization-based or deep-learning-based methods. In optimization-based methods, an additional term $ {\bf{R}}({\bf{x}}) $ is introduced as a regularizer that encodes prior information to constrain the solution. Specifically, the reconstruction can be represented as the following optimization task:

$$ \hat{{\bf{x}}}=\mathop{\arg\min}\limits_{{\bf{x}}} ||{\bf{y}}-{\Phi{{\bf{x}}}}||_2^2 + \tau {\bf{R}}({\bf{x}}) $$ (6)

where $ \tau $ is a parameter that balances the data-fidelity term $ ||{\bf{y}}-{\Phi{{\bf{x}}}}||_2^2 $ against the regularization term $ {\bf{R}}({\bf{x}}) $. Various iterative algorithms have been proposed for solving this problem35, 36.
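As one illustration of Eq. 6, the sketch below runs a plug-and-play-style proximal gradient loop: a gradient step on the data-fidelity term (computed directly in the frame domain, so $ \Phi $ is never formed explicitly) alternates with a total-variation denoising step standing in for the regularizer $ {\bf{R}}({\bf{x}}) $. The step size, TV weight, and iteration count are illustrative choices, not the tuned settings of the cited algorithms.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def reconstruct(Y, M, iters=50, step=1.0, tv_weight=0.05):
    """Recover B frames X (B, nx, ny) from one measurement Y and masks M."""
    # Crude initialization: spread the measurement across the masked frames.
    X = M * Y[None] / np.maximum(M.sum(axis=0), 1.0)[None]
    for _ in range(iters):
        residual = Y - (M * X).sum(axis=0)  # data-fidelity residual y - Phi x
        X = X + step * M * residual[None]   # gradient step; Phi^T acts as M_k *
        X = denoise_tv_chambolle(X, weight=tv_weight)  # proximal step for R(x)
    return X
```

Applied to the simulated data above, `X_rec = reconstruct(Y, M)` returns the $ B $ recovered frames.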
Another solution to Eq. 5 is to learn the inverse mapping between the measurement $ {\bf{y}} $ and the desired signal $ {\bf{x}} $ with a deep neural network32-34, 37. Formally, a deep-learning-based algorithm minimizes the following problem through gradient descent (such as the Adam optimizer46):

$$ \hat{{\bf{w}}}=\mathop{\arg\min}\limits_{{\bf{w}}} ||{\bf{y}}-\Phi\,{\bf{N}}({\bf{y}};{\bf{w}})||_2^2 $$ (7)

where $ {\bf{N}} $ is the network and $ {\bf{w}} $ are its learnable weights optimized through training. After training, the reconstructed signal $ \hat{{\bf{x}}} = {\bf{N}}({\bf{y}};\hat{{\bf{w}}}) $ can be obtained instantly by feeding the measurement $ {\bf{y}} $ into the pretrained network.
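A minimal PyTorch sketch of this idea is given below: a small placeholder CNN plays the role of $ {\bf{N}}({\bf{y}};{\bf{w}}) $, its output is pushed through the forward model of Eq. 2, and Adam updates the weights to minimize the loss in Eq. 7. The architecture, learning rate, and iteration count are stand-ins, not the networks used in the cited works.

```python
import torch
import torch.nn as nn

B, nx, ny = 10, 256, 256
Y = torch.rand(1, 1, nx, ny)                    # compressed measurement y
M = torch.randint(0, 2, (B, nx, ny)).float()    # premeasured masks M_k

# Placeholder decoder N(y; w): one measurement in, B frames out.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, B, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)  # Adam optimizer (ref. 46)

for _ in range(500):
    X_hat = net(Y)[0]                           # candidate frames, (B, nx, ny)
    Y_hat = (M * X_hat).sum(dim=0)              # re-apply forward model, Eq. 2
    loss = ((Y[0, 0] - Y_hat) ** 2).mean()      # data-fidelity loss of Eq. 7
    opt.zero_grad()
    loss.backward()
    opt.step()

X_rec = net(Y)[0].detach()                      # instant reconstruction x_hat
```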